shape identifying robot CMU CAM
I am planning to build a robot that is able to identify simple shapes (circle, triangle, rectangle). I am using the CMUcam.
I think it would be correct to use Line mode. Can anyone help me with some source code and methods to do this? I am a little bit confused about Line mode; I didn't understand how to manage the data the CMUcam sends in Line mode.
thank you
FIPer: A modified FIP with even less dependence on defense and luck
If you follow baseball at all, you are probably familiar with the concept of ERA as a summary statistic of a pitcher’s performance. You may also be familiar with work over the last couple of decades
developing the concept of Defense Independent Pitching Statistics (DIPS) such as Fielding Independent Pitching (FIP). The basic idea is that ERA can be unreliable both in isolating a pitcher's
performance from his defense and luck, and (as a result) in projecting future performance. FIP addresses these problems by focusing on the outcomes over which a pitcher has the most control
(strikeouts, walks, and home runs) and ignoring everything else. The result is a metric with some nice properties:
1. FIP only includes outcomes which are independent of the defense
2. FIP has a stronger year-to-year (Y2Y) correlation than ERA
3. FIP predicts the following year's ERA better than ERA itself does
4. FIP is easy to compute with access to basic pitching stats
Put together, these four properties have made FIP one of the most-used “advanced stats” for evaluating pitchers, with (2) and (3) recommending it over ERA, and (4) recommending it over
similarly-intentioned stats such as tERA, xFIP, and SIERA, which have more complicated formulations. There’s just one problem with all of this: (1) isn’t actually true.
The fielding-dependence of FIP isn’t obvious at first, because it doesn’t come from the home runs, walks, or strikeouts. It comes from the normalization factor used to convert those three stats into rates: innings pitched. (One standard formulation is FIP = (13·HR + 3·BB − 2·K)/IP + C, where C is a league constant chosen to put FIP on the ERA scale.)
Innings pitched is outs-recorded divided by three, and there are lots of different ways to record outs. Most of those ways are dependent on the defense. As a result, if you take two hypothetical
pitchers who perform identically to each other but give one a good defense and the other a team full of Skip Schumakers*, the former will accrue more outs over time, and thus have a lower FIP.
Given all that, I decided to try squeezing a little more of the defense out of the equation. Rather than using IP, I normalized each rate by the number of plate appearances that resulted in one of
the three pitcher-controlled outcomes. In other words, if we only consider PAs that resulted in a walk, strikeout, or home run, what percentage of that subset of PAs yielded each of the three
outcomes. The result is a metric with even less dependence on the defense and BABIP luck than FIP, which I’m referring to as FIPer (pending a better name).
This is a small tweak of the original FIP formula, but given that it is a purer embodiment of the goal of FIP (eliminating things outside of the pitchers control), and is no more difficult to compute
(it actually requires one fewer component stat, as IP need not be known), even a small improvement with respect to the other desired properties would be relevant. I also performed the same tweak to
xFIP, which replaces HR with the "expected" number of home runs given the pitcher's fly ball tendencies (computed by multiplying the number of fly balls given up by a pitcher by the league average HR/FB rate).
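As a concrete sketch (the (13, 3, −2) weights are the standard FIP ones; the constant C ≈ 3.10 and this exact FIPer denominator are inferred from the description above, so treat them as illustrative rather than the exact formulas used for the numbers below):

```python
def fip(hr, bb, k, ip, c=3.10):
    """Standard FIP: the weighted sum of true outcomes, normalized by innings pitched."""
    return (13 * hr + 3 * bb - 2 * k) / ip + c

def fiper(hr, bb, k, c=3.10):
    """FIPer as described above: the same weighted sum, normalized by the
    number of plate appearances ending in one of the three outcomes."""
    return (13 * hr + 3 * bb - 2 * k) / (hr + bb + k) + c

# Two pitchers with identical HR/BB/K totals always get the same FIPer,
# no matter how many outs their defenses recorded; FIP shifts with IP.
print(fiper(20, 50, 200))
print(fip(20, 50, 200, ip=200.0), fip(20, 50, 200, ip=180.0))
```

Note that FIPer needs one fewer input than FIP, since IP never appears.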
With FIPer in hand I, like many others, turned to the internet for validation. In so doing, I happened across this post from Beyond the Boxscore looking at the Y2Y correlation of basically every
pitching statistic you could think of. The analysis considered all pitchers from 2004-2011 who recorded at least 162 innings in two consecutive years (so, basically, healthy starting pitchers). I
decided to piggyback on this great study and see how FIPer stacks up.
As referenced in the numbered points at the top of this post, there are two basic things to look at for the metric: Y2Y correlation with itself, and Y2Y correlation with ERA. On both counts, FIPer
compares favorably with FIP.
As you can see, FIPer shows a considerably higher Y2Y correlation than FIP. Similarly, xFIPer shows improvement over xFIP. This means that a player’s FIPer and xFIPer are more stable over time
than his FIP and xFIP, respectively, consistent with the metrics being a better representation of the true talent level. FIPer also shows a stronger Y2Y correlation than FIP with ERA, meaning that it
better predicts the following year’s ERA. In fact, for this particular data set, FIPer outperformed all the other metrics in the study, with xFIPer placing second.
I'd like to stress that these results only consider a single set of data for which someone else had already done the work, so the comparisons to the full slate of stats are far from definitive.
I have, however, looked at a number of different date ranges and IP cutoffs in comparing FIP, FIPer, xFIP, and xFIPer, and the modified metrics consistently resulted in better Y2Y correlation with
self and with ERA.
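For readers who want to reproduce this kind of check, here is a sketch of the Y2Y comparison on synthetic data (the real 2004–2011 data are not reproduced here; the noise levels are made up purely for illustration):

```python
import numpy as np

# Synthetic stand-in for the Beyond the Boxscore study: each pitcher has a
# latent talent level, observed with noise in two consecutive seasons.
rng = np.random.default_rng(42)
true_talent = rng.normal(4.0, 0.5, 200)           # latent skill per pitcher
stat_y1 = true_talent + rng.normal(0, 0.3, 200)   # metric, year 1
stat_y2 = true_talent + rng.normal(0, 0.3, 200)   # metric, year 2
era_y2  = true_talent + rng.normal(0, 0.6, 200)   # noisier outcome stat, year 2

r_self = np.corrcoef(stat_y1, stat_y2)[0, 1]      # Y2Y correlation with itself
r_era  = np.corrcoef(stat_y1, era_y2)[0, 1]       # Y2Y correlation with ERA
print(r_self, r_era)   # a less noisy metric scores higher on both counts
```

The point of the sketch: a metric that strips out more noise (defense, BABIP luck) tracks the latent talent more closely, which shows up as higher correlations of both kinds.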
If we take a look back at the four key features of FIP mentioned above, I’d argue that FIPer shows advantages over FIP on all counts. By avoiding the use of IP, it should be even less influenced by a
pitcher’s defense than FIP, and it is just as easy to calculate. The Y2Y correlation with itself is stronger, as is the Y2Y correlation with ERA and even FIP.
*While one could easily construct a more defensively-inept team than one full of Skip Schumakers, I doubt any of them are as likely to actually be constructed.
Posted on May 4, 2012
HPL_rand random number generator.
#include "hpl.h"
double HPL_rand();
HPL_rand generates the next number in the random sequence. This function ensures that this number lies in the interval (-0.5, 0.5]. The static array irand contains the information (2 integers)
required to generate the next number in the sequence X(n). This number is computed as X(n) = (2^16 * irand[1] + irand[0]) / d - 0.5, where the constant d is the largest 32 bit positive integer. The
array irand is then updated for the generation of the next number X(n+1) in the random sequence as follows X(n+1) = a * X(n) + c. The constants a and c should have been preliminarily stored in the
arrays ias and ics as 2 pairs of integers. The initialization of ias, ics and irand is performed by the function HPL_setran.
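As an illustrative sketch (in Python rather than the library's C, and with placeholder LCG constants; the real a and c live in ias and ics and are set by HPL_setran, not hard-coded as here):

```python
# Sketch of the scheme described above -- NOT the actual HPL code.
D = 2**31 - 1                 # the constant d: largest 32-bit positive integer
A, C = 1103515245, 12345      # hypothetical LCG constants (placeholders)

irand = [12345, 678]          # low 16 bits and high bits of the current state

def hpl_rand_sketch():
    """Return the next number in (-0.5, 0.5] and advance the state."""
    x = (irand[1] << 16) + irand[0]    # reassemble X(n) = 2^16*irand[1] + irand[0]
    value = x / D - 0.5                # map the state into (-0.5, 0.5]
    x = (A * x + C) % 2**31            # X(n+1) = a*X(n) + c
    irand[0], irand[1] = x & 0xFFFF, x >> 16   # split back into two integers
    return value
```

Each call reads the packed two-integer state, emits the corresponding value, and advances the recurrence, mirroring the description above.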
See Also
HPL_ladd, HPL_lmul, HPL_setran, HPL_xjumpm, HPL_jumpit.
[Numpy-discussion] help with translating some matlab
Neal Becker ndbecker2@gmail....
Fri Feb 18 10:04:45 CST 2011
I got the following matlab code from a colleague:
h =zeros(1, N); %% initial filter coefficients
lambda =1;
delta =0.001;
P =eye(N)/delta;
z =P*(x1');
g =z/(lambda+ x1*z);
y = h*x1'; %% filter output
e = ph_cpm_out(n) - y; %% error
h = h + e*g'; %% adaptation
P =Pnext;
So it looks to me:
z is a vector
The step g=z/(lambda+x1*z) seems to be a vector division.
How do I translate this to numpy?
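One possible translation (untested against the original data; N, the seed, and the desired sample d are made up here, and the data are real-valued so the conjugate transposes reduce to plain transposes). The key observation: in the MATLAB, x1*z is (1xN)*(Nx1), a scalar, so g = z/(lambda + x1*z) is a vector divided by a scalar, not an elementwise vector division.

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
x1 = rng.standard_normal(N)      # MATLAB's row vector x1 (1 x N)
d = 1.0                          # stands in for ph_cpm_out(n)

h = np.zeros(N)                  # h = zeros(1, N)   initial filter coefficients
lam, delta = 1.0, 0.001          # 'lambda' is reserved in Python
P = np.eye(N) / delta            # P = eye(N)/delta

z = P @ x1                       # z = P*(x1')       length-N vector
g = z / (lam + x1 @ z)           # scalar denominator, so plain division
y = h @ x1                       # y = h*x1'         filter output (scalar)
e = d - y                        # error
h = h + e * g                    # h = h + e*g'      adaptation
```

For complex data you would use np.conj() where MATLAB's ' takes the conjugate transpose, e.g. h = h + e * np.conj(g).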
More information about the NumPy-Discussion mailing list
Posts about Linear Algebra Survival Guide for Quantum Mechanics on Chemiotics II
An old year’s resolution
One of the things I thought I was going to do in 2012 was learn about relativity. For why see http://luysii.wordpress.com/2012/09/11/why-math-is-hard-for-me-and-organic-chemistry-is-easy/. Also my
cousin’s new husband wrote a paper on a new way of looking at it. I’ve been putting him off as I thought I should know the old way first.
I knew that general relativity involved lots of math such as manifolds and the curvature of space-time. So rather than read verbal explanations, I thought I’d learn the math first. I started
reading John M. Lee’s two books on manifolds. The first involves topological manifolds, the second involves manifolds with extra structure (smoothness) permitting calculus to be done on them.
Distance is not a topological concept, but is absolutely required for calculus — that’s what the smoothness is about.
I started with “Introduction to Topological Manifolds” (2nd. Edition) by John M. Lee. I’ve got about 34 pages of notes on the first 95 pages (25% of the text), and made a list of the definitions I
thought worth writing down — there are 170 of them. Eventually I got through a third of its 380 pages of text. I thought that might be enough to help me read his second book “Introduction to Smooth
Manifolds” but I only got through 100 of its 600 pages before I could see that I really needed to go back and completely go through the first book.
This seemed endless, and would probably take 2 more years. This shouldn’t be taken as a criticism of Lee — his writing is clear as a bell. One of the few criticisms of his books is that they are so
clear, you think you understand what you are reading when you don’t.
So what to do? A prof at one of the local colleges, James J. Callahan, wrote a book called “The Geometry of Spacetime” which concerns special and general relativity. I asked if I could audit the
course on it he’d been teaching there for decades. Unfortunately he said “been there, done that” and had no plans ever to teach the course again.
Well, for the last month or so, I’ve been going through his book. It’s excellent, with lots of diagrams and pictures, and wide margins for taking notes. A symbol table would have been helpful, as
would answers to the excellent (and fairly difficult) problems.
This also explains why there have been no posts in the past month.
The good news is that the only math you need for special relativity is calculus and linear algebra. Really nothing more. No manifolds. At the end of the first third of the book (about 145 pages)
you will have a clear understanding of
l. time dilation — why time slows down for moving objects
2. length contraction — why moving objects shrink
3. why two observers looking at the same event can see it happening at different times.
4. the Michelson Morley experiment — but the explanation of it in the Feynman lectures on physics 15-3, 15-4 is much better
5. The Kludge Lorentz used to make Maxwell’s equations obey the Galilean principle of relativity (e.g. Newton’s first law)
6. How Einstein derived Lorentz’s kludge purely by assuming the velocity of light was constant for all observers, never mind how they were moving relative to each other. Reading how he did it, is
like watching a master sculptor at work.
Well, I’ll never get through the rest of Callahan by the end of 2012, but I can see doing it in a few more months. You could conceivably learn linear algebra by reading his book, but it would be
tough. I’ve written some fairly simplistic background linear algebra for quantum mechanics posts — you might find them useful. https://luysii.wordpress.com/category/
One of the nicest things was seeing clearly what it means for different matrices to represent the same transformation, and why you should care. I’d seen this many times in linear algebra, but it was a pleasure to see how simple reflection through an arbitrary line through the origin becomes when you (1) rotate the line onto the x axis (through arctan(y/x) radians), (2) change the y coordinate to −y by an incredibly simple matrix, and (3) rotate it back to the original angle.
That’s why any two n x n matrices X and Y represent the same linear transformation if they are related by the invertible matrix Z in the following way X = Z^-1 * Y * Z
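A quick numerical check of the rotate, flip, rotate-back recipe (theta is arbitrary; the closed form used for comparison is the standard reflection matrix for a line at angle theta):

```python
import numpy as np

theta = 0.7  # angle of the line through the origin (any value works)

c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])      # rotation by theta
F = np.array([[1, 0], [0, -1]])      # the "incredibly simple" matrix: y -> -y

# Reflection about the line: rotate the line to the x axis, flip y, rotate back.
# Note the same X = Z^-1 * Y * Z shape as the similarity relation above.
M = R @ F @ np.linalg.inv(R)

# Known closed form for reflection about a line at angle theta:
M_direct = np.array([[np.cos(2 * theta), np.sin(2 * theta)],
                     [np.sin(2 * theta), -np.cos(2 * theta)]])
print(np.allclose(M, M_direct))      # the two agree
```

So R F R^-1 and the direct reflection matrix are the same linear transformation written two ways, which is exactly the point of the similarity relation.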
Merry Christmas and Happy New Year (none of that Happy Holidays crap for me)
By luysii | Also posted in Math | Tagged general relativity, James J. Callahan, John M. Lee, linear algebra, manifolds, Special relativity, The geometry of spacetime | Comments (0)
Willock pp. 51 – 104
This is a continuation of my notes, as I read “Molecular Symmetry” by David J. Willock. As you’ll see, things aren’t going particularly well. Examples of concepts are great once they’ve been defined, but in this book it’s examples first, definitions later (if ever).
p. 51 — Note all the heavy lifting required to produce an object with only (italics) C4 symmetry (figure 3.6) First, you need 4 objects in a plane (so they rotate into each other), separated by 90
degrees. That’s far from enough objects as there are multiple planes of symmetry for 4 objects in a plane (I count 5 — how many do you get?) So you need another 4 objects in a plane parallel to the
first. These objects must be a different distance from the symmetry axis, otherwise the object will have a C2 axis of symmetry, midway between the two planes. Lastly no object in the second plane
can lie on a line parallel to the axis of symmetry which contains an object in the first plane — e.g. the two groups of 4 must be staggered relative to each other. It’s even more complicated for
S4 symmetry.
p. 51 — The term classes of operation really hasn’t been defined (except by example). Also this is the first example of (the heading of) a character table — which hasn’t been defined at this point.
p. 52 — Note H2O2 has C2 symmetry because it is not (italics) planar. Ditto for 1,2 (S, S) dimethyl cyclopropane (more importantly, this is true for disulfide bonds between cysteines forming cystines — a way of tying parts of proteins to each other).
p. 55 — Pay attention to the nomenclature: Cnh means that an axis of degree n is present along with a horizontal plane of symmetry. Cnv means that, instead, a vertical plane of symmetry is present
(along with the Cn axis)
p. 57 — Make sure you understand why C4h doesn’t have vertical planes of symmetry.
p. 59 — A bizarre pedagogical device — defining groups whose first letter is D by something they are not (italics) — which itself (cubic groups) is at present undefined.
Willock then regroups by defining what Dn actually is.
It’s a good exercise to try to construct the D4 point group yourself.
p. 61 — “It does form a subgroup” — If subgroup was ever defined, I missed it. Subgroup is not in the index (neither is group !). Point group is in the index, and point subgroup is as well
appearing on p. 47 — but point subgroup isn’t defined there.
p. 62 — Note the convention — the Z direction is perpendicular to the plane of a planar molecule.
p. 64 — Why are linear molecules called Cinfinity? — because any rotation around the axis of symmetry (the molecule itself) leaves the molecule unchanged, and there are an infinity of such rotations.
p. 67 — Ah, the tetrahedron embedded in a cube — exactly the way an organic chemist should think of the sp3 carbon bonds. Here’s a mathematical problem for you. Let the cube have sides of 1, the
bonds as shown in figure 3.27, the carbon in the very center of the cube — now derive the classic tetrahedral bond angle — answer at the end of this post.
p. 67 — 74 — The discussions of symmetries in various molecules is exactly why you should have the conventions for naming them down pat.
p. 75 — in the second paragraph affect should be effect (at least in American English)
p. 76 — “Based on the atom positions alone we cannot tell the difference between the C2 rotation and the sigma(v) reflection, because either operation swaps the positions of the hydrogen atoms.” Do
we ever want to actually do this (for water that is)? Hopefully this will turn out to be chemically relevant.
p. 77 — Note that the definition of character refers to the effect of a symmetry operation on one of an atom’s orbitals (not its position). Does this only affect atoms whose position is not
(italics) changed by the symmetry operation? Very important to note that the character is -1 only on reversal of the orbital — later on, non-integer characters will be seen. Note also that each
symmetry operation produces a character (number) for each orbital, so there are (number of symmetry operations) * (number of orbital) characters in a character table
p. 77 – 78 — Note that the naming of the orbitals is consistent with what has gone on before. p(z) is in the plane of the molecule because that’s where the axis of rotation is.
Labels are introduced for each of the possible standard sets of characters (but standard set really isn’t defined). A standard set (of sets of characters??) is an irreducible representation for the point group.
Is one set of characters an irreducible representation by itself, or is it a bunch of them? The index claims that this is the definition of irreducible representation, but given the ambiguity about what a standard set of characters actually is (italics), we don’t really know what an irreducible representation actually is. This is definition by example, a pedagogical device foreign to math, but possibly a good pedagogical device — we’ll see. But at this point, I’m not really clear what an irreducible representation actually is.
p. 77 — In a future edition, it would be a good idea to label the x, y and z axes (and even perhaps draw in the px, py and pz orbitals), and, if possible, put figure 4.2 on the same page as table 4.2. Eventually things get figured out, but it takes a lot of page flipping.
p. 79 — Further tightening of the definition of a representation — it’s one row of a character table.
p. 79 — Nice explanation of orbital phases, but do electrons in atoms know or care about them?
p. 80 — Note that the x-y axes are rotated 90 degrees in going from figure 4.4a to figure 4.4b (why?). Why talk about d orbitals? — they’re empty in H2O but possibly not in other molecules with C2v symmetry.
p. 80 — Affect should be effect (at least in American English)
p. 81 — B1 x B2 = A2 doesn’t look like a sum to me. If you actually summed them you’d get 2 for E, -2 for C2, and 0 for the other two. It does look like the product though.
pp. 81 – 82 — Far from sure what is going on in section 4.3
p.82 — Table 4.4b does look like multiplication of the elements of B1 by itself.
p. 82 — Not sure when basis vectors first made their appearance, possibly here. I slid over this on first reading since basis vectors were quite familiar to me from linear algebra (see the category http://luysii.wordpress.com/category/linear-algebra-survival-guide-for-quantum-mechanics/ ). But again, the term is used here without really being defined. Probably not to confuse, the first basis vectors shown are at 90 degrees to each other (x and y), but later on (p. 85) they don’t have to be — the basis vectors point along the 3 hydrogens of ammonia.
p. 83 — Very nice way to bring in matrices, but it’s worth noting that each matrix stands for just one symmetry operation. But each matrix lets you see what happens to all (italics) the basis vectors you’ve chosen.
p. 84 — Get very clear in your mind that when you see an expression of the form
symmetry_operation1 symmetry_operation2
juxtaposed to each other — that you do symmetry_operation2 FIRST.
p. 87 – Notice that the term character is acquiring a second meaning here — it no longer is the effect of a symmetry operation on one of an atom’s orbitals (not the atom’s position), it’s the effect
of a symmetry operation on a whole set of basis elements.
p. 88 — Notice that in BF3, the basis vectors no longer align with the bonds (as they did in NH3), meaning that you can choose the basis vectors any way you want.
p. 89 — Figure 4.9 could be markedly improved. One must distinguish between two types of lines (interrupted and continuous) and two types of arrowheads (solid and barbed), making for confusion in the diagrams where they all appear together (and often superimposed).
Given the orbitals as combinations of two basis vectors, the character of a symmetry operation acting on a basis vector acquires yet another meaning — how much of the original orbital is left after the symmetry operation.
p. 91 — A definition of irreducible representations as the ‘simplest’ symmetry behavior. Simplest is not defined. Also for the first time it is noted that symmetries can be of orbitals or
vibrations. We already know they can be of the locations of the atoms in a molecule.
Section 4.8 is extremely confusing.
p. 92 — We now find out that what was going on with a character sum of 2 on p. 81 — The sums were 2 and 0 because the representations were reducible.
p. 93 (added 29 Jan ’12) — We later find out (p. 115) that the number of irreducible representations of a point group is the number of classes. The index says that class is defined as an ‘equivalent set of operations’ — but how two distinct operations are equivalent is never defined, just illustrated.
p. 100 — Great to have the logic behind the naming of the labels used for irreducible representations (even if they are far from intuitive)
p. 101 — There is no explanation of the difference between basis vector and basis function.
All in all, a very difficult chapter to untangle. I’m far from sure I understand pp. 92 – 100. However, hope lies in future chapters and I’ll push on. I think it would be very difficult to learn from this book (so far) if you were totally unfamiliar with symmetry.
Answer to the problem on p. 67. Let the sides of the cube be of length 1. The bonds are all the same length, so the carbon must be in the center of the cube. Any two of the bonds point to the opposite corners of a square of side 1, so the ends of the bonds are sqrt(2) apart. Now drop a perpendicular from the center to the middle of this line; it has length 1/2. So we have a right triangle with legs of 1/2 and (sqrt(2))/2, and the answer is 2 * arctan(1.414). Arctan(1.414) is 54.731533 degrees, giving the angle as 109.46 degrees.
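The same answer falls out of a direct vector computation on the cube picture of p. 67:

```python
import numpy as np

# Carbon at the center of a unit cube, bonds to four alternating corners
# (every pair of these corners is sqrt(2) apart).
center = np.array([0.5, 0.5, 0.5])
corners = np.array([[0, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1]])

b1 = corners[0] - center                 # one bond vector
b2 = corners[1] - center                 # another bond vector
cos_angle = b1 @ b2 / (np.linalg.norm(b1) * np.linalg.norm(b2))
angle = np.degrees(np.arccos(cos_angle))
print(angle)   # the classic tetrahedral bond angle, about 109.47 degrees
```

The cosine works out to exactly -1/3, which is the same number as cos(2 * arctan(sqrt(2))).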
By luysii | Also posted in Math, Willock: "Molecular Symmetry" | Tagged point groups, representation theory | Comments (0)
Linear Algebra survival guide for Quantum Mechanics -IX
The heavy lifting is pretty much done. Now for some fairly spectacular results, and then back to reading Clayden et. al. To make things concrete, let Y be a 3 dimensional vector with complex
coefficients c1, c2 and c3. The coefficients multiply a set of basis vectors (which exist since all finite and infinite vector spaces have a basis). The glory of abstraction is that we don’t actually have to worry about what the basis vectors actually are, just that they exist. We are free to use their properties, one of which is orthogonality (I may not have proved this, you should if I
haven’t). So the column vector is

c1
c2
c3

and the corresponding row vector (the conjugate transpose) is

c1* c2* c3*
Next, I’m going to write a corresponding hermitian matrix M as follows where Aij is an arbitrary complex number.
A11 A12 A13
A21 A22 A23
A31 A32 A33
Now form the product of the row vector with M:

              A11 A12 A13
c1* c2* c3*   A21 A22 A23   =   X Y Z
              A31 A32 A33
The net effect is to form another row vector with 3 components. All we need for what I want to prove is an explicit formula for X
X = c1*(A11) + c2*(A21) + c3*(A31)
When we multiply the row vector obtained by the column vector on the right we get
c1 [ c1*(A11) + c2*(A21) + c3*(A31) ] + c2 [ Y ] + c3 [ Z ] – which by assumption must be a real number
Next, form the product of M with the column vector:

A11 A12 A13   c1       X’
A21 A22 A23   c2   =   Y’
A31 A32 A33   c3       Z’
This time all we need is X’ which is c1(A11) + c2(A12) + c3(A13)
When we multiply the column vector obtained by the row vector on the left we get
c1* [ c1(A11) + c2(A12) + c3(A13) ] + c2* Y’ + c3* Z’ — the same number as
c1 [ c1*(A11) + c2*(A21) + c3*(A31) ] + c2 [ Y ] + c3 [ Z ]
Notice that c1, c2, c3 can each be any of the infinite number of complex numbers, without disturbing the equality. The ONLY way this can happen is if
c1*[c1(A11)] = c1[c1*(A11)] – this is obviously true
and c1*[c2(A12)] = c1[c2*(A21)] – something fishy
and c1*[c3(A13)] = c1[c3*(A31)] – ditto
The last two equalities look a bit strange. If you go back to LASGFQM – II , you will see that c1*(c2) does NOT equal c1(c2*). However
c1*(c2) does equal [ c1 (c2* ) ]*. They aren’t the same, but at least they are the complex conjugates of the other. This means that to make
c1*[c2(A12)] = c1[c2*(A21)], A12 = A21* or A12* = A21, which is the same thing.
So just by following the postulate of quantum mechanics about the type of linear transformation (called Hermitian) which can result in a measurement, we find that the matrix representing the linear
transformation, the Hermitian matrix, has the property that Mij = Mji* (the first letter is the row index and the second is the column index). This also means that the diagonal elements of any
Hermitian matrix are real. Now when I first bumped up against Hermitian matrices they were DEFINED this way, making them seem rather magical. Hermitian matrices are in fact natural, and they do
just what quantum mechanics wants them to do.
Some more nomenclature: Mij = Mji* means that a Hermitian matrix equals its conjugate transpose (which is another even more obscure way to define them). The conjugate transpose of a matrix is
called the adjoint. This means that the row vector as we’ve defined it is the adjoint of the column vector. This also is why Hermitian matrices are called self-adjoint.
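A quick numerical check of all this (the matrix M here is a random Hermitian matrix, built by forcing Mij = Mji*, and the vectors are random complex vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
M = A + A.conj().T        # M equals its conjugate transpose: Mij = Mji*

for _ in range(5):
    v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    x = v.conj() @ M @ v  # adjoint (row) * matrix * column
    print(abs(x.imag))    # ~0 up to rounding: the number is real

# A generic non-Hermitian matrix fails the test:
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
print(abs((v.conj() @ A @ v).imag))   # generally not ~0
```

Note also that M's diagonal comes out real, exactly as derived above for the diagonal elements of any Hermitian matrix.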
That’s about it. Hopefully when you see this stuff in the future, you won’t be just mumbling incantations. But perhaps you are wondering, where are the eigenvectors, where are the eigenvalues in
all this? What happened to the Schrodinger equation beloved in song and story? That’s for the course you’re taking, but briefly and without explanation, the basis vectors I’ve been talking about
(without explicitly describing them) all result as follows:
Any Hermitian operator times wavefunction = some number times same wavefunction. [1]
Several points: many Hermitian operators change one wave function into another, so [ 1 ] doesn’t always hold.
IF [1] does hold the wavefunction is called an eigenfunction, and ‘some number’ is the eigenvalue.
There is usually a set of eigenfunctions for a given Hermitian operator — these are the basis functions (basis vectors of the infinite dimensional Hilbert space) of the vector space I was describing.
You find them by finding solutions of the Schrodinger equation H Psi = E Psi, but that’s for your course, but at least now you know the lingo. Hopefully, these last few words are less frustrating
than the way Tom Wolfe ended “The Bonfire of the Vanities” years ago — the book just stopped rather than ended.
I thought the course I audited was excellent, but we never even got into bonding. Nonetheless, I think the base it gave was quite solid and it’s time to find out. Michelle Francl recommended
“Modern Quantum Chemistry” by Atilla (yes Atilla ! ) Szabo and Neil Ostlund as the next step. You can’t beat the price as it’s a Dover paperback. I’ve taken a brief look at ‘”Molecular Quantum
Mechanics” by Atkins and Friedman — it starts with the postulates and moves on from there. Plenty of pictures and diagrams, but no idea how good it is. Finally, 40 years ago I lived across the
street from a Physics grad student (whose name I can’t recall), and the real hot stuff back then was a book by Prugovecki called “Quantum Mechanics in Hilbert Space”. Being a pack rat, I still have
it. We’ll see.
One further point. I sort of dumped on Giancoli’s book on Physics, which I bought when the course was starting up 9/09 — pretty pictures and all that. Having been through the first 300 pages or so
(all on mechanics), I must say it’s damn good. The pictures are appropriate, the diagrams well thought out, the exposition clear and user friendly without being sappy.
Time to delve.
Amen Selah
By luysii | Comments (5)
Linear Algebra survival guide for Quantum Mechanics – VIII
Quantum mechanics has never made an incorrect prediction. What does it predict? Numbers basically, and real numbers at that. When you read a dial, or measure an energy in a spectrum you get a
(real) number. Imaginary currents exist, but I don’t know if you can measure them (I’ll ask the EE who just married into the family this weekend). So couple the real number output of a measurement
with the postulate of quantum that tells you how to get them and out pop Hermitian matrices.
A variety of equivalent postulate systems for QM exist (Atkins uses 5, our instructor used 4). All of them say that the state of the system is described by a wavefunction (which we’re going to
think of as a vector, since we’re in linear algebra land). In LASGFQM – V the equivalence of the integral of a function and a vector in infinite dimensional space was explained. LASGFQM – VII
explained why every linear transformation could be represented by a matrix, and why every matrix represents a linear transformation.
An operator is just a linear transformation of a vector space to itself. This means that if we’re dealing with a finite dimensional vector space, the matrix representing the operator will be square.
Recalling the rules for matrix multiplication (LASGFQM – IV), this means that you can do things like this — a row vector times a matrix, giving another row vector:

          x x x
y y y  ×  x x x  =  xy xy xy
          x x x
and things like this — a matrix times a column vector, giving another column vector:

x x x     z       xz
x x x  ×  z   =   xz
x x x     z       xz
Of course way back at the beginning it was explained why the inner product of a vector V with itself had to make one copy the complex conjugate (V*) of the other (so that the inner product of a vector with itself was a real number), and in LASGFQM – VI it was explained why multiplying a row vector by a column vector gives a number. Here it is:

          z
y y y  ×  z   =   yz + yz + yz
          z
So given that < V | V > really means < V* | V > to physicists, the inner product can be regarded as just another form of matrix multiplication, with the row vector being the conjugate transpose of
the column vector.
If you reverse the order of multiplication (column vector first, row vector second), you get an n x n matrix, not a number. It should be pretty clear by now that you can multiply all 3 matrices
together (row vector, n x n matrix, column vector) as long as you keep the order correct. After all this huffing and puffing, you wind up with — drum roll — a number, which is complex because the
vectors of quantum mechanics have complex coefficients (another one of the postulates).
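The shape bookkeeping is easy to verify numerically (n = 3 here, and the random complex entries are arbitrary):

```python
import numpy as np

# (1 x n) row, (n x n) matrix, (n x 1) column multiply out to a single
# number, and the grouping of the triple product doesn't matter.
n = 3
rng = np.random.default_rng(7)
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
col = rng.standard_normal((n, 1)) + 1j * rng.standard_normal((n, 1))
row = col.conj().T               # the conjugate transpose, shape (1, n)

a = (row @ M) @ col              # multiply (1 x n)(n x n) first
b = row @ (M @ col)              # multiply (n x n)(n x 1) first
print(a.shape, np.allclose(a, b))
```

Either grouping collapses to a 1 x 1 result, i.e. a single (complex) number.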
We’re at a fairly high level of abstraction here. We haven’t chosen a basis, but all vector spaces have one (even infinite vector spaces). We’ll talk about them in the next (and probably final) post.
Call the column vector Y, the row vector X, and the matrix M. We have X M Y = some number. It should be clear that it doesn’t matter which two matrices we multiply together first, e.g. (X M) Y = X (M Y).
Recall that differentiation and integration are linear operators, so they can be represented by matrices. The wavefunction is represented by a column vector. Various things you want to know
(kinetic energy, position) are represented by linear operators in QM.
Here’s the postulate: For a given wavefunction Y, any measurement on it (given by a linear operator M ) is always a REAL number and is given by the
conjugate transpose of Y times M times Y (the column vector).
You have to accept the postulate (because it works ! ! !) as the QM instructor said many times. Don’t ask how it can be like that (Feynman).
This postulate is all that it takes to make the linear transformation M a very special one — e.g. a Hermitian matrix, with all sorts of interesting properties. Hermite described these matrices in
1855, long before QM. I’ve tried to find out what he was working on without success. More about the properties of Hermitian matrices next time, but to whet your appetite, if an element of M is
written Mij, where i is the row and j is the column, and Mij is a complex number, then Mji is the complex conjugate of Mij. Believe it or not, this all follows from the postulate.
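If you'd like to watch the postulate's arithmetic happen, here's a sketch in Python (the matrix and vector below are my own made-up numbers, not anything from a QM problem). The matrix is Hermitian, so the conjugate transpose of Y times M times Y comes out real no matter which complex Y you pick:

```python
# A 2 x 2 Hermitian matrix: real diagonal, M[1][0] the complex conjugate of M[0][1]
M = [[2 + 0j, 1 - 3j],
     [1 + 3j, 5 + 0j]]

# An arbitrary column vector with complex coefficients
Y = [1 + 2j, 3 - 1j]

# (conjugate transpose of Y) times M times Y, i.e. < Y | M | Y >
expectation = sum(Y[i].conjugate() * M[i][j] * Y[j]
                  for i in range(2) for j in range(2))

print(expectation)   # (20+0j): the imaginary part vanishes
```

Swap in any complex Y you like; as long as M stays Hermitian the imaginary part cancels, because the i,j and j,i terms of the sum are complex conjugates of each other.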
By luysii | Comments (3)
Linear Algebra survival guide for Quantum Mechanics – VII
In linear algebra all the world's a matrix (even vectors). Everyone (except me in the last post) numbers matrix elements by the following subscript convention — the row always comes first, then the column (mnemonic Roman Catholic). Similarly matrix size is always written a x b where a is the number of rows and b the number of columns. Vectors in quantum mechanics are written both ways, as column vectors (n x 1), or as row vectors (1 x n).
Vectors aren’t usually called matrices, but matrices they are when it comes to multiplication. Vectors can be multiplied by a matrix (or multiply a matrix) using the usual matrix multiplication
rules. That’s one reason the example in LASGFQM – VI was so tedious — I wanted to show how matrices of different sizes could be multiplied together. The order of the matrices is crucial. The first
matrix A must have the same number of columns that the second matrix (B) has rows — otherwise it just doesn’t work. The product matrix has the number of rows of matrix A and the columns of matrix
So it is possible to form A B where A is 3 x 4 and B is 4 x 5 giving a 3 x 5 matrix, but B A makes no sense. If you get stuck use the Hubbard method of writing them out (see the last post). Here
is a 3 x 3 matrix (A) multiplying a 3 x 1 matrix (vector B)
A11 A12 A13     B11     A11*B11 + A12*B21 + A13*B31   — this is a single number
A21 A22 A23  x  B21  =  A21*B11 + A22*B21 + A23*B31   — ditto
A31 A32 A33     B31     A31*B11 + A32*B21 + A33*B31
AB is just another 3 x 1 vector. So the matrix just transforms one 3 dimensional vector into another
You should draw a similar diagram and see why B A is impossible. What about
C (1 x 3) times D (3 x 3)? You get CD, a 1 x 3 matrix (row vector), back.
            D11 D12 D13
            D21 D22 D23
            D31 D32 D33
C11 C12 C13                 What is CD12?
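These size rules are easy to check numerically. Here's a sketch in Python (the matmul helper and the particular entries are mine, chosen just for illustration):

```python
def matmul(A, B):
    """Multiply an m x n matrix by an n x p matrix (both lists of rows)."""
    assert len(A[0]) == len(B), "columns of the left factor must equal rows of the right"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

D = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]              # a 3 x 3 matrix

C = [[1, 0, 2]]              # a 1 x 3 row vector
print(matmul(C, D))          # [[15, 18, 21]] : a 1 x 3 row vector back
```

Trying matmul(D, C) trips the assertion, which is the numerical version of "B A makes no sense."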
Suppose we get concrete and make B into a column vector of the following type

A11 A12 A13     1     A11
A21 A22 A23  x  0  =  A21
A31 A32 A33     0     A31
The first time I saw this, I didn’t understand it. I thought the mathematicians were going back to the old Cartesian system of standard orthonormal vectors. They weren’t doing this at all.
Recall that we’re in a vector space and the column vector is really the 3 coefficients multiplying the 3 basis vectors (which are not specified). So you don’t have to mess around with choosing a
basis, the result is true for ALL bases of a 3 dimensional vector space. The power of abstraction. The first column of A shows what the first basis vector goes to (in general), the second column
shows what the second basis vector goes to. Back in LASGFQM – IV, it was explained why any linear transformation (call it T) of a basis vector (call it C1) to another vector space must look like this
T(C1) = t11 * D1 + t12 * D2 + . .. for however many basis vectors vector space D has.
Well, in the above example we're going from a 3 dimensional vector space to another, and the first column of matrix A tells us what basis vector #1 goes to. This is why every linear transformation
can be represented by a matrix and every matrix represents a linear transformation. Sometimes abstraction saves a lot of legwork.
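You can watch the first column fall out (a sketch; the entries of A are arbitrary numbers of my choosing):

```python
A = [[2, 7, 1],
     [0, 3, 5],
     [4, 0, 6]]                                  # an arbitrary 3 x 3 matrix

e1 = [1, 0, 0]                                   # coefficients of the first basis vector

# A times e1, computed entry by entry
image = [sum(A[i][j] * e1[j] for j in range(3)) for i in range(3)]
first_column = [A[i][0] for i in range(3)]

print(image)          # [2, 0, 4]
print(first_column)   # [2, 0, 4] : the same thing
```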
A more geometric way to look at all this is to regard an n x n matrix multiplying an n x 1 vector as moving it around in n dimensional space (keeping one end fixed at the origin — see below). So

1 0 0
0 1 0
0 0 2

just multiplies the third basis vector by 2 leaving the other two alone.
The notation is consistent. Recall that any linear transformation must leave the zero vector unchanged (see LASGFQM – I for a proof). Given the rules for multiplying a matrix times a vector, this
happens with a column vector which is all zeros.
The geometrically inclined can start thinking about what the possible linear transformations can do to three dimensional space (leaving the origin fixed). Rotations about the origin are one
possibility, expansion or contraction along a single basis vector are two more, projections down to a 2 dimensional plane or a 1 dimensional line are two more. There are others (particularly when
we’re in a vector space with complex numbers for coefficients — e.g. all of quantum mechanics).
Up next time, eigenvectors, adjoints, and (hopefully) Hermitian operators. That will be about it. The point of these posts (which are far more extensive than I thought they would be when I started
out) is to show you how natural the language of linear algebra is, once you see what’s going on under the hood. It is not to teach quantum mechanics, which I’m still learning to see how it is used
in chemistry. QM is far from natural (although it describes the submicroscopic world — whether it can ever describe the world we live in is another question), but, if these posts are any good at
all, you should be able to understand the language in which QM is expressed.
Linear Algebra survival guide for Quantum Mechanics – VI
Why is linear algebra like real estate? Well, in linear algebra the 3 most important things are notation, notation, notation. I’ve shown how two sequential linear transformations can be melded
into one, but you’ve seen nothing about the matrix representation of a linear transformation.
Here’s the playing field from LASGFQM – IV again. There are 3 vector spaces, A, B and C of dimensions 3, 4, and 5, with bases {A1, A2, A3}, {B1, B2, B3, B4} and {C1, C2, C3, C4, C5}. Then there is
linear transformation T which transforms A into B, and linear transformation S which transforms B into C.
We have T(A1) = AB11 * B1 + AB12 * B2 + AB13 * B3 + AB14 * B4
S(B1) = BC11 *C1 + BC12 *C2 + BC13 *C3 + BC14 * C4 + BC15 * C5
S(B2) = BC21 *C1 + BC22 *C2 + BC23 *C3 + BC24 * C4 + BC25 * C5
S(B3) = BC31 *C1 + BC32 *C2 + BC33 *C3 + BC34 * C4 + BC35 * C5
S(B4) = BC41 *C1 + BC42 *C2 + BC43 *C3 + BC44 * C4 + BC45 * C5
To see the symmetry of what is going on you may have to make the print size smaller so the equations don’t slop over the linebreak.
So after some heavy lifting we eventually arrived at:
T(A1) = AB11 * ( BC11 * C1 + BC12 * C2 + BC13 * C3 + BC14 * C4 + BC15 * C5 ) +
AB12 * ( BC21 * C1 + BC22 * C2 + BC23 * C3 + BC24 * C4 + BC25 * C5 ) +
AB13 * ( BC31 * C1 + BC32 * C2 + BC33 * C3 + BC34 * C4 + BC35 * C5 ) +
AB14 * ( BC41 * C1 + BC42 * C2 + BC43 * C3 + BC44 * C4 + BC45 * C5 )
So that
S(T(A1)) = (AB11 * BC11 + AB12 * BC21 + AB13 * BC31 + AB14 * BC41) C1 +
(AB11 * BC12 + AB12 *BC22 + AB13 * BC32 + AB14 * BC42) C2 +
etc. etc.
All very open and above board, and obtained just by plugging the B's in terms of the C's into the A's in terms of the B's to get the A's in terms of the C's.

Notice that what we could call AC11 is just AB11 * BC11 + AB12 * BC21 + AB13 * BC31 + AB14 * BC41 and AC12 is just AB11 * BC12 + AB12 * BC22 + AB13 * BC32 + AB14 * BC42. We need another 13 such sums to be able to express a vector in A (which is a unique linear combination of A1, A2, A3 because the three of them are a basis) in terms of the 5 C basis vectors. It's dreary but it can be done, and you just saw part of it.
You don’t want to figure this out all the time. So represent T as a rectangular array with 4 rows and 3 columns
AB11 AB21 AB31
AB12 AB22 AB32
AB13 AB23 AB33
AB14 AB24 AB34
Represent S as a rectangular array with 5 rows and 4 columns
BC11 BC21 BC31 BC41
BC12 BC22 BC32 BC42
BC13 BC23 BC33 BC43
BC14 BC24 BC34 BC44
BC15 BC25 BC34 BC45
Now plunk the array of AB's on top of (and to the right of) the array of BC's

                     AB11 AB21 AB31
                     AB12 AB22 AB32
                     AB13 AB23 AB33
                     AB14 AB24 AB34
BC11 BC21 BC31 BC41  AC11
BC12 BC22 BC32 BC42
BC13 BC23 BC33 BC43
BC14 BC24 BC34 BC44
BC15 BC25 BC35 BC45
Recall that (after much tedious algebra) we obtained that
AC11 was just AB11 * BC11 + AB12 * BC21 + AB13 * BC31 + AB14 * BC41
But AC11 is just as if the first row of the BC array were a vector and the first column of the AB array were also a vector and you formed the dot product. Well they are and you did just that to
find element AC11 of the array representing the linear transformation from A to C. Do this 14 more times to get all 15 possible combinations of 3 As and 5 Cs and you get an array of numbers with 5
rows and 3 columns. This is the AC matrix and this is why matrix multiplication is the way it is.
Note: we have multiplied a 5 row times 4 column array by a 4 row 3 column array. Recall that you can only form the inner product of vectors with the same numbers of components (e.g. they have to be
in vector spaces of the same dimension).
We have T: A to B (dimension 3 to dimension 4)
S: B to C (dimension 4 to dimension 5)
This is written as ST (convention has it that the transformation on the right is always done first — this takes some getting used to, but at least everyone follows it, so it's like medical school — the appendix is on the right, just remember it). Notice that TS makes absolutely no sense. S takes you to a vector space of dimension 5, then T tries to start with a different vector space. This is why when multiplying arrays (matrices) the number of columns of the matrix on the left must match the number of rows of the matrix on the right (or the top as I've drawn it — thanks to John and Barbara Hubbard and their great book on Vector Calculus). If the two matrices are rectangular (as we have here), only one way of multiplication is possible.
More notation, and an apology. Matrix T is a 4 row by 3 column matrix — this is always written as a 4 x 3 matrix. Similarly for the coefficients of each element which I have in some way screwed up
(but at least I did so consistently). Invariably the matrix element (just a number) in the 3rd column of the fourth row is written element43 — If you look at what I’ve written everything is
bassackwards. Sorry, but the principles are correct. The mnemonic for the order of the coefficients is Roman Catholic (row column), a nonscatological mnemonic for once.
That’s a lot of tedium, but it does explain why matrix multiplication is the way it is. Notice a few other things. The matrices you saw were 4 x 3 and 5 x 4, but 3 x 1 matrices are possible as
well. Such matrices are called column vectors. Similarly 1 x 3 matrices exist and are called row vectors. So what do you get if you multiply a 1 x 3 vector by a 3 x 1 vector?
You get a 1 x 1 matrix, i.e. a number. This is another way to look at the inner product of two vectors. Usually vectors are written as column vectors (n x 1) with n rows and 1 column. 1 x n row vectors are known as the transpose of the column vector.
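In other words the inner product is just matrix multiplication in disguise. A sketch (the numbers are arbitrary):

```python
row = [1, 2, 3]    # a 1 x 3 row vector
col = [4, 5, 6]    # a 3 x 1 column vector, written flat

# (1 x 3) times (3 x 1): a 1 x 1 matrix, i.e. a single number
inner = sum(r * c for r, c in zip(row, col))
print(inner)       # 32

# (3 x 1) times (1 x 3): an n x n matrix instead
outer = [[c * r for r in row] for c in col]
print(outer)       # [[4, 8, 12], [5, 10, 15], [6, 12, 18]]
```

The second product is the reversed order mentioned in the latest post of the series: column first, row second, giving a full n x n matrix rather than a number.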
That’s plenty for now. Hopefully the next post will be more interesting. However, physics needs to calculate things and see if the numbers they get match up with experiment. This means that they
must choose a basis for each vector space, and express each vector as an array of coefficients of that basis. Mathematicians avoid this where possible, just using the properties of vector space
bases to reason about linear transformations, and the properties of various linear transformations to reason about bases. You’ll see the power of this sort of thinking in the next post. If you ever
study differentiable manifolds you’ll see it in spades.
Linear Algebra survival guide for Quantum Mechanics – V
We’ve established a pretty good base camp for the final assault. It’s time to acclimate to the altitude, look around and wax a bit philosophic. What’s happened to the integrals and derivatives in
all of this? A vector is a vector and its components can be differentiated, but linear algebra never talks about integrating vectors. During the QM course, I was constantly bombarding the
instructor with questions about things I didn’t understand. Finally, he said that he wished the students were asking those sorts of questions. I told him they were just doing what most people do on
their first exposure to QM — trying to survive. That’s certainly the way I was the first time around QM. True for calculus as well. I quickly learned to ignore what a Riemann integral really is
— the limit of an infinite sum of products. Cut the baloney, to integrate something just find the antiderivative. We all know that. Well, that’s pretty much true for continuous functions and the
problems you meet in Calculus I.
Well you’re not in Kansas anymore, and to understand why an infinite dimensional vector is like an integral, you’ve got to go back to Riemann’s definition of the integral of a function. You start
with some finite interval (infinite intervals come later). Then you chop it up into many (say 100) smaller nonoverlapping but contiguous subintervals (each of which has a finite nonzero length).
Then you pick one value of the function in each of the intervals (which can’t be infinite or the process fails), multiply it by the length of each subinterval and form the sum of all 100 products
(which is just a number after all). Then you chop each of the subintervals into subsubintervals and repeat the process obtaining a new number. If the series of numbers approaches a limit as the
process proceeds then the integral exists and is a number. Purists will note that I've skipped all sorts of analysis, such as requiring that each interval be a compact (closed and bounded) set of real numbers,
that the function is continuous on the intervals, so that it reaches a maximum and a minimum on each interval, and that if the integral exists, the sums of the maxima times the interval length on
each interval and the sums of the minima times interval length approach each other etc. etc. Parenthetically, the best analysis book I’ve met is “Understanding Analysis” by Stephen Abbott.
As you subdivide, the sub-sub- . . . sub intervals get shorter and shorter (and of course more numerous). What if you call each of the subintervals a dimension rather than an interval
and the value of the function, the coefficient of the vector on that dimension? Then as the number of subintervals increases, the plot of the function values you've chosen for each interval gets closer
and closer, so that plotting a high dimension vector looks just like the continuous function you started with. This is why an infinite dimensional vector looks like the integral of a function (and
why quantum mechanics uses them).
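Here's the idea as a sketch in Python (the function f(x) = x^2 on [0, 1] and the subdivision counts are my choices). Each subinterval is a 'dimension', the sampled function value is the coefficient on that dimension, and the sum of coefficient times subinterval length creeps toward the integral, which is 1/3:

```python
def riemann(n):
    """Chop [0, 1] into n subintervals; treat the n sampled values of
    f(x) = x^2 as the coefficients of an n dimensional vector."""
    width = 1.0 / n
    coeffs = [(i * width) ** 2 for i in range(n)]   # value at each left endpoint
    return sum(c * width for c in coeffs)           # coefficient times length

for n in (10, 100, 1000):
    print(n, riemann(n))
# 0.285..., 0.32835..., 0.33283...: creeping up on 1/3
```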
Now imagine a linear transformation of this vector into another vector in the same infinite dimensional space, and you’re almost to what quantum mechanics means by an operator. Inner products of
infinite dimensional vectors can be defined (with just a minor bit of heavy lifting). Just multiply the coefficients of the vectors in each dimension together and form their sum. The sum needn't be infinite. Let the nth coefficient of vector #1 be 1/2^n, that of vector #2 1/3^n. The sum of even an infinite number of such products is finite. This implies that to be of use in QM the coefficients of any of its infinite vectors must form a convergent series.
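That example is worth watching converge (a sketch): the nth product is (1/2^n)(1/3^n) = 1/6^n, and the partial sums of the geometric series settle down to 1/5:

```python
def partial_inner(terms):
    """Partial sum of the 'infinite dot product' whose nth coefficients
    are 1/2^n and 1/3^n; the nth product is 1/6^n."""
    return sum((1 / 2 ** n) * (1 / 3 ** n) for n in range(1, terms + 1))

for k in (5, 10, 20):
    print(k, partial_inner(k))
# the partial sums settle down to 1/5 = 0.2
```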
Now, what if some (or all) of the coefficients are complex numbers? No problem, because of the way inner product of vectors with complex coefficients was defined in the second post of the series.
The inner product of (even an infinite dimensional ) complex vector with itself is guaranteed to be a real number. You’re almost in the playing field of QM, e.g. Hilbert space — an infinite
dimensional space with an inner product defined on it. The only other thing needed for Hilbert space is something called completeness, something I don’t understand well enough to explain, but it
means something like plugging up the holes in the space, in the same way that the real numbers plug the holes in the rational numbers.
Certainly not in Kansas anymore, and apparently barely in Physics either. It's time to respond to Wavefunction's comment on the last post. “It’s interesting that if you are learning “practical”
quantum mechanics such as quantum chemistry, you can get away with a lot without almost any linear algebra. One has to only take a look at popular QC texts like Levine, Atkins, or Pauling and Wilson;
almost no LA there.” So what’s the point of all these posts?
It’s back to Feynman and another of his famous quotes “I think I can safely say that nobody understands quantum mechanics.” This from 1965. A look a Louisa Gilder’s recent book “The Age of
Entanglement” should convince you that, on a deep level, no one still does. Feynman also warns us not to start thinking ‘how can it be like that’ (so did the instructor in the QM course). So why
all this verbiage?
Because all QM follows from a few simple postulates, and these postulates are written in linear algebra. Hopefully at the end of this, you’ll understand the language in which QM is written, so any
difficulty will be with the underlying structure of QM (which is plenty), not the way QM is expressed (or why it is expressed the way it is).
Next up, vector and matrix notation and what the adjoint is, and why it’s important. If you begin thinking hard about the inner product of two different complex vectors (even the finite ones) you’ll
see that usually a complex number will result. How does QM avoid this (since all measurable values must be real — one of the postulates)? Adjoints and Hermitian operators are the way out. There’s
still some pretty hard stuff ahead.
Linear Algebra survival guide for Quantum Mechanics – IV
The point of this post is to show from whence the weird definition of matrix multiplication comes, and why it simply MUST be the way it is. Actually matrices don’t appear in this post, just the
underlying equations they represent. We’re dealing with spaces of finite dimension at this point (infinite dimensional spaces come later). Such spaces have a basis — meaning a collection of
elements (basis vectors) which are enough to describe every element of the space UNIQUELY, as a linear combination.
To make things a bit more concrete, think of good old 3 dimensional space with basis vectors E1 = (1,0,0) aka i, E2 = (0,1,0) aka j, and E3 = (0,0,1) aka k. Every point in this space is uniquely
described as a1 * E1 + a2 * E2 + a3 * E3 — e. g. a linear combination of the 3 basis vectors. You can also think of each point as a vector from the origin (0,0,0) to the point (a1,a2,a3). Once you
establish what the basis is each vector is specified by its (unique) triple of numerical coordinates (a1, a2, a3). Choose a different basis and you get a different set of coordinates, but you always
get no more and no less than 3 coordinates — that’s what dimension is all about. Note that the combination of basis vectors is linear (no powers greater than 1).
So now we’re going to consider several spaces, namely A, B and C of dimensions 3, 4 and 5. Their basis vectors are the set {A1, A2, A3 } for A, {B1, B2, B3, B4 } for B — fill in the dots for C.
What does a linear transformation from A to B look like? Because of the way things have been set up, there is really no choice at all.
Consider any vector of A — it must be of the form a1 * A1 + a2 * A2 + a3 * A3 , e.g. a linear combination of the basis vectors {A1, A2, A3} – where the { } notation means set. For any given
vector in A, a1 a2 and a3 are uniquely determined. Sorry to stress this so much but uniqueness is crucial.
Similarly any vector of C must be of the form c1 * C1 + c2 * C2 + c3 * C3 + c4 * C4 + c5 * C5. Go back and fill in the dots for B.
Any linear function T from A to B must satisfy
T (X + Y) = T(X) + T(Y)
where X and Y are vectors in A and T(X), T(Y) are vectors in B. So what? A lot. We only have to worry about what T does to A1, A2 and A3. Why ? ? Because the {Ai} are basis vectors, and because
of the second thing a linear function must satisfy
T ( number * X) = number * (T ( X)) so combining both properties
T (a1 * A1 + a2 * A2 + a3 * A3) = a1 * T(A1) + a2 * T(A2) + a3 * T(A3)
All we have to worry about is what T does to the 3 basis vectors of A. Everything else follows easily enough.
So what is T(A1) ? Well, it’s a vector in B. Since B has a basis T(A1) is a unique linear combination of them. Now the nomenclature will shift a bit. I’m going to write T(A1) as follows.
T(A1) = AB11 * B1 + AB12 * B2 + AB13 * B3 + AB14 * B4
AB signifies that the function is from space A to space B, the numbers after AB are to be taken as subscripts. Terms of art: linear functions between vector spaces are usually called linear
transformations. When the vector spaces on either end of the transformation are the same, the linear transformation is called a linear operator (or operator for short). Sound familiar? An example
of a linear operator in 3 dimensional space would just be a rotation of the coordinate axes, leaving the origin fixed. For why the origin has to be fixed if the transformation is to be linear see
the first post in the series.
Fill in the dots for T(A2) = AB21 * B1 + . . .
T(A3) = AB31 * B1 + . . .
Now for a blizzard of (similar and pretty simple) algebra. Consider the linear transformation from B to C. Call the transformation S. I’m going to stop putting the Bi’s and Ci’s in bold. You know they are basis vectors. Also, in what follows, to get the equations to line up on top of each other you might have to make the characters smaller (say by holding down the Command and the minus key at the same time — in the Apple world).
S(B1) = BC11 * C1 + BC12 * C2 + BC13 * C3 + BC14 * C4 + BC15 * C5
S(B2) = BC21 * C1 + BC22 * C2 + BC23 * C3 + BC24 * C4 + BC25 * C5
S(B3) = BC31 * C1 + BC32 * C2 + BC33 * C3 + BC34 * C4 + BC35 * C5
S(B4) = BC41 * C1 + BC42 * C2 + BC43 * C3 + BC44 * C4 + BC45 * C5
It’s pretty simple to plug S(Bi) into T(A1).
Recall that T(A1) = AB11 * B1 + AB12 * B2 + AB13 * B3 + AB14 * B4
So we get
T(A1) = AB11 * ( BC11 * C1 + BC12 * C2 + BC13 * C3 + BC14 * C4 + BC15 * C5 ) +
AB12 * ( BC21 * C1 + BC22 * C2 + BC23 * C3 + BC24 * C4 + BC25 * C5 ) +
AB13 * ( BC31 * C1 + BC32 * C2 + BC33 * C3 + BC34 * C4 + BC35 * C5 ) +
AB14 * ( BC41 * C1 + BC42 * C2 + BC43 * C3 + BC44 * C4 + BC45 * C5 )
So now we have a linear transformation of space A to space C, just by simple substitution. Do you see the pattern yet? If not just collect terms of A1 in terms of {C1, C2, C3, C4, C5}. It’s easy
to do as they are all above each other. If we write
S(T(A1)) = AC11 * C1 + AC12 * C2 + AC13 * C3 + AC14 * C4 + AC15 * C5
you can see that AC13 = AB11 * BC13 + AB12 * BC23 + AB13 * BC33 + AB14 * BC43. This is a sum of 4 terms, each of the form AB1x * BCx3, where x runs from 1 to 4.
This should look very familiar if you know the formula for matrix multiplication. If not don’t sweat it, I’ll discuss matrices next time, but you’ve basically just seen them (they’re just a compact
way of representing the above equations). Linear transformations between (appropriately dimensioned) vector spaces can always be mushed together (combined) like this. Why? (1) all finite
dimensional vector spaces have a basis, with all that goes with them and (2) linear transformations are a very special type of function (according to an instructor in a graduate algebra course — the
only type of function mathematicians understand completely).
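The punch line can be checked numerically. Here's a sketch in Python using the standard row-by-column convention (the matrix entries below are made up): T takes 3 dimensions to 4, S takes 4 to 5, and applying T then S to a vector gives exactly what the single product matrix ST gives.

```python
def matmul(A, B):
    """(m x n) times (n x p); matrices as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def apply(M, v):
    """Matrix times column vector, entry by entry."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

T = [[1, 0, 2],
     [0, 1, 1],
     [3, 1, 0],
     [2, 2, 2]]          # 4 x 3 : takes A (dim 3) to B (dim 4)

S = [[1, 1, 0, 0],
     [0, 2, 1, 0],
     [1, 0, 0, 3],
     [0, 0, 2, 1],
     [1, 1, 1, 1]]       # 5 x 4 : takes B (dim 4) to C (dim 5)

v = [1, 2, 3]            # a vector in A

two_steps = apply(S, apply(T, v))    # do T first, then S
one_step = apply(matmul(S, T), v)    # the single 5 x 3 matrix ST
print(two_steps == one_step)         # True
```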
It is the very simple algebra of combining linear transformations between finite dimensional vector spaces that makes matrix multiplication exactly what it is. It simply can’t be anything else. Now
you know. Quantum mechanics is written in this language, the syntax of which is the linear transformation, the representation the matrix. Remarkably, when Heisenberg formulated quantum mechanics
this way, he knew nothing about matrices. A Hilbert trained mathematician and physicist (Max Born) had to tell him what he was really doing. So much for the notion that physicists shoehorn our view
of the world into a mathematical mold. Amazingly, the mathematics always seems to get there first (Newton excepted).
Linear Algebra survival guide for Quantum Mechanics – III
Before leaving the dot product, it should be noted that there are all sorts of nice geometric things you can do with it — such as defining the angle between two vectors (and in a space with any finite number of dimensions to boot). But these are things which are pretty intuitive (because they are geometric) so I’m not going to go into them. When the dot product of two vectors is zero they are
said to be orthogonal to each other (e.g. at right angles to each other). You saw this with the dot product of E1 = (1,0) and E2 = (0,1) in the other post. But it also works with any two vectors at
right angles, such as X = (1,1) and Y = (1,-1).
The notion of dimension seems pretty simple, until you start to think about it (consider fractals). We cut our vector teeth on vectors in 3 dimensional space (e.g. E1 = (1,0,0) aka i, E2 = (0,1,0)
aka j, and E3 = (0,0,1) aka k. Any point in 3 dimensional space can be expressed as a linear combination of them — e.g. (x, y, z) = x * E1 + y * E2 + z * E3. The crucial point about this way of
representing a given point is that the representation is unique. In math lingo, E1, E2, and E3 are said to be linearly independent, and if you study abstract algebra you will run up against the
following (rather obscure) definition – a collection of vectors is linearly independent if the only way to get them to add up to the zero vector (0, 0, . . .) is to multiply each of them by the real
number zero. X and Y by themselves are linearly independent, but X, Y and (1,0) = E1 are not, as 1 * X + 1 * Y + (-2) * E1 = (0, 0). This definition is used in lots of proofs in abstract algebra,
but it totally hides what is really going on. Given a linearly independent set of vectors, the representation of any other vector as a linear combination of them is UNIQUE. Given a set of vectors
V1, V2, . .. we can always represent the zero vector as 0 * V1 + 0 *V2 + . … If there is no other way to get the zero vector from them, then V1, V2, … are linearly independent. That’s where the
criterion comes from, but uniqueness is what is crucial.
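That example takes one line to check (a sketch):

```python
X, Y, E1 = (1, 1), (1, -1), (1, 0)

def combo(a, b, c):
    """a*X + b*Y + c*E1, componentwise."""
    return tuple(a * x + b * y + c * e for x, y, e in zip(X, Y, E1))

# nonzero coefficients that reach the zero vector: the three are dependent
print(combo(1, 1, -2))   # (0, 0)

# with X and Y alone, a + b = 0 and a - b = 0 force a = b = 0: independent
```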
It’s intuitively clear that you need two vectors to represent points in the plane, 3 to represent points in space, etc. etc. So the dimension of any vector space is the maximum number of linearly
independent vectors it contains. The number of pairs of linearly independent vectors in the plane is infinite (just consider rotating the x and y axes). But the plane has dimension 2 because 3 (non
co-linear) vectors are never linearly independent. Spaces can have any number of dimensions, and quantum mechanics deals with a type of infinite dimensional space called Hilbert space (I’ll show
how to get your mind around this in a later post). As an example of a space with a large number of dimensions, consider the stock market. Each stock in it occupies a separate dimension, with the
price (or the volume, or the total number of shares outstanding) as a number to multiply that dimension by. You don’t have a complete description of the stock market vector until you say what’s
going on with each stock (dimension).
Suppose you now have a space of dimension n, and a collection of n linearly independent vectors, so that any other n-dimensional vector can be uniquely expressed (can be uniquely represented) as a
linear combination of the n vectors. The collection of n vectors is then called a basis of the vector space. There is no reason the vectors of the basis have to be at right angles to each other
(in fact in “La Geometrie” of Descartes which gave rise to the term Cartesian coordinates, the axes were NOT at right angles to each other, and didn’t even go past the first quadrant). So (1,0) and
(1,1) is a perfectly acceptable basis for the plane. The pair are linearly independent — try getting them to add to (0, 0) with nonzero coefficients.
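A sketch of that basis at work: the coefficients of any point (p, q) in the basis {(1, 0), (1, 1)} are (p - q) and q, and they're unique even though the basis vectors aren't at right angles:

```python
def coords(p, q):
    """The unique coefficients of (p, q) in the basis {(1, 0), (1, 1)}:
    (p, q) = (p - q) * (1, 0) + q * (1, 1)."""
    return (p - q, q)

a, b = coords(3, 5)
point = (a * 1 + b * 1, a * 0 + b * 1)   # reassemble a*(1,0) + b*(1,1)
print(a, b, point)                        # -2 5 (3, 5)
```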
Quantum mechanics wants things nicer than this. First, all the basis vectors are normalized — given a vector V’ we want to form a vector V pointing in the same direction such that < V | V > = 1.
Not hard to do — < V’ | V’ > is just a real number after all (call it x), so V is just V’/SQRT[x]. There was an example of this technique in the previous post in the series.
Second (and this is the hard part), quantum mechanics wants all its normalized basis vectors to be orthogonal to each other — e.g. if I and J are vectors, < I | J > = 1 if I = J, and 0 if I doesn’t equal J. Such a function is called the Kronecker delta function (or delta(i,j)). How do you accomplish this? By a true algebraic horror known as Gram-Schmidt orthogonalization. It is a ‘simple’ algorithm in which you use dot products to form the projections of each vector on the ones already finished, then subtract those projections off. I never could get the damn thing to work on problems years ago in grad school, and developed
another name for it which I’ll leave to your imagination (where is Kyle Finchsigmate when you really need him?). But work it does, so the basis vectors (the pure wavefunctions) of quantum mechanical
space are both normalized and orthogonal to each other (e.g. they are orthonormal). Since they are a basis, any other wave function has a UNIQUE representation in terms of them (these are the famous
mixed states or the superposition states of quantum mechanics).
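For anyone who wants to see the horror tamed, here's a sketch of Gram-Schmidt in Python (my own implementation, on real vectors for simplicity): subtract from each new vector its projections on the vectors already finished, then normalize whatever is left.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Turn linearly independent real vectors into an orthonormal set."""
    basis = []
    for v in vectors:
        w = list(v)
        # subtract the projection of w on each finished basis vector
        for e in basis:
            c = dot(w, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        norm = math.sqrt(dot(w, w))       # then normalize what's left
        basis.append([wi / norm for wi in w])
    return basis

ortho = gram_schmidt([[1, 0], [1, 1]])    # the skew basis from the text
print(ortho)   # [[1.0, 0.0], [0.0, 1.0]]
```

Fed the skew basis (1,0), (1,1) from a few paragraphs back, it hands back the familiar orthonormal pair.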
If you’ve already studied a bit of QM, the basis vectors are the eigenvectors of the various quantum mechanical operators. If not, don’t sweat it, this will be explained in the next post. That’s a
fair amount of background and terminology. But it’s necessary for you to understand why matrix multiplication is the way it is, why matrices represent linear transformation, and why quantum
mechanical operators are basically linear transformations. That’s all coming up.
Linear Algebra survival guide for Quantum Mechanics – II
Before pushing on to the complexities of the dot product of two complex vectors, it’s worthwhile thinking about why the dot product isn’t a product as we’ve come to know products. Consider E1 = (1,
0 ) and E2 = (0, 1). Their dot product is 1 * 0 + 0 * 1 or zero. Not your father’s product. You’re not in Kansas any more. Abstract algebraists love such things and call them zero divisors, because neither of them is zero itself yet when ‘multiplied’ together they produce zero.
This is not just mathematical trivia, as any two vectors we can dot together and get zero are called orthogonal. Such vectors are particularly important for quantum mechanics, because (to get ahead
a bit) all the eigenvectors we can interrogate by experiment to get any sort of measurement (energy, angular momentum etc. etc.) are orthogonal to each other. The dot product of V = 3 * E1 + 4 *
E2 with itself is 25. We can make < V | V > = 1 by multiplying V by 1/SQRT(25) – check it out. Such a vector is said to be normalized. Any vector you meet in quantum mechanics can and
should be normalized, and usually is, except on your homework, where you forgot to do it and got the wrong answer. Vectors which are both orthogonal to each other and normalized are called
(unsurprisingly) orthonormal.
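Both computations fit in a few lines (a sketch):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

V = [3, 4]                                   # V = 3 * E1 + 4 * E2
print(dot(V, V))                             # 25

normalized = [x / math.sqrt(dot(V, V)) for x in V]
print(dot(normalized, normalized))           # 1.0 (up to rounding)

print(dot([1, 1], [1, -1]))                  # 0 : orthogonal
```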
I’d love to be able to put subscripts on the variables, but at this point I can’t, so here are the naming conventions once again.
x^2 means x times x (or x squared)
x1 means x with subscript 1 (when x is a small letter)
x57 (note two integers follow the x not one) means a matrix element with the first number for the Row and the second for the Column — mnemonic Roman Catholic
X, V, etc. etc. are to be taken as vectors (I’ve got no way to put an arrow on top of them)
E1, E2, are the standard basis vectors — E1 = (1, 0, 0, . . .), E2 = (0, 1, 0, . . .), En = (0, 0, . . ., 1), Ei stands for any of them
# stands for any number (which can be real or complex)
i (in italics) always stands for the SQRT[-1]
* has two meanings. When separated by spaces such as x * x it means multiply e.g. x^2
When next to a vector V* or a letter x* it means the complex conjugate of the vector or the number (see later)
The dot product of a vector V can be written 3 ways V.V < V,V> and < V| V >. Since physicists use the last one, that’s what I’ll stick to (mostly).
Recall that to get a real number from the dot product of a complex vector with itself, one must multiply the vector V by its complex conjugate V*. Here’s what the complex conjugate is again. Given a
complex number z = a + ib, its complex conjugate (written z*) is a – ib.
z * z* (note the different uses of *) = a^2 + b^2, which is a real nonnegative number because a and b are both real. Note that conjugating a complex number twice doesn’t change it – e.g. z** = z.
This modification of the definition of dot product for complex vectors, leads to significant complications. Why? When V, W are vectors with complex coefficients < V | W > is NOT the same as < W | V >
unlike the case where the vectors have all real coefficients. Here’s why. No matter how many components a complex vector has, the dot product is only a sum of the products of just two complex
numbers with each other (see the previous post). The product of two complex numbers is just another one, as is the sum of any (finite) number of complex numbers. This means that multiplying a mere
two complex numbers together will be enough to see the problem. To avoid confusion with V and W which are vectors, I’ll call the complex numbers p and q. Remember that p1, p2, q1 and q2 are all real
numbers and i is just, well i (the number which when multiplied by itself gives – 1).
p = p1 + p2i, q = q1 + q2i
p* = p1 – p2i, q* = q1 – q2i
p times q* = (p1 + p2i) * (q1 – q2i) = (p1 * q1 + p2 * q2) + i (p2 * q1 – p1 * q2)
p* times q = (p1 – p2i) * (q1 + q2i) = (p1 * q1 + p2 * q2) + i(p1 * q2 – p2 * q1)
Note that the terms which multiply i are NOT the same (but they are the negative of each other). So what does < V | W > mean? Recall that
V = v1 * E1 + v2 * E2 + . … vn * En
W = w1 * E1 + . . + wn * En
< V | W > = v1 * w1 + v2 * w2 + . . . + vn * wn ; here the * means multiplication not complex conjugation.
Remember that v1, w1, v2, etc. are now complex numbers, and you’ve just seen that v1* times w1 is NOT the same as v1 times w1*. Clearly a convention is called for. Malheureusement, physicists use
one convention and mathematicians use the other. Since this is about quantum mechanics, here’s what physicists mean by < V | W >. They mean the dot product of V* (whose coefficients are the
complex conjugates of v1, . . . vn) with W. More explicitly they mean V* . W, but when written in physics notation < V | W >, the * isn’t mentioned (but never forget that it’s there).
Now v1* * w1 + v2* * w2 + . . . + vn* * wn is just another complex number — say z = x + iy. To form its complex conjugate we just negate the iy term to get z* = x – iy
Look at
p times q* = (p1 + p2i) * (q1 – q2i) = (p1 * q1 + p2 * q2) + i (p2 * q1 – p1 * q2)
p* times q = (p1 – p2i) * (q1 + q2i) = (p1 * q1 + p2 * q2) + i(p1 * q2 – p2 * q1)
once again. Notice that p times q* is just the complex conjugate of p* times q
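This little identity (p times q* is the complex conjugate of p* times q) is easy to verify with concrete numbers. Here is a quick check in Python, whose built-in complex type writes i as j:

```python
p = 1 + 2j            # p1 = 1, p2 = 2
q = 3 + 4j            # q1 = 3, q2 = 4

pq_star = p * q.conjugate()    # (p1*q1 + p2*q2) + i(p2*q1 - p1*q2) = 11 + 2i
p_star_q = p.conjugate() * q   # (p1*q1 + p2*q2) + i(p1*q2 - p2*q1) = 11 - 2i

print(pq_star == p_star_q.conjugate())   # True
```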
So if < V | W > = v1* * w1 + v2* * w2 + . . . + vn* * wn = x + iy ; here * means 2 different things, complex conjugation when next to vi and multiplication when between vi and wi (sorry for the
horrible notation, hopefully someone knows how to get subscripts into all this).
By the physics convention < W | V > is w1* * v1 + w2* * v2 + . . . + wn* * vn. Since p times q* is just the complex conjugate of p* times q, w1* * v1 is the complex conjugate of w1 * v1*. This
means w1* * v1 + w2* * v2 + . . . + wn* * vn = x – iy.
In shorthand < V | W > = < W | V >*, something you may have seen and puzzled over. It’s all a result of wanting the dot product of a complex vector to be a real number. Not handed down on tablets
of stone, but the response to a problem.
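The same check works for whole vectors. In the sketch below (illustrative only; braket is an invented helper implementing the physics convention of conjugating the first vector), < V | W > comes out as the complex conjugate of < W | V >:

```python
def braket(v, w):
    # Physics convention: conjugate the components of the first vector.
    return sum(vi.conjugate() * wi for vi, wi in zip(v, w))

V = [1 + 2j, 3 - 1j]
W = [2 - 1j, 1j]

vw = braket(V, W)    # <V|W> = -1 - 2j
wv = braket(W, V)    # <W|V> = -1 + 2j

print(vw == wv.conjugate())   # True: <V|W> = <W|V>*
```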
Next up, vector spaces, linear transformations on them (operators) and their matrix representation. I hope to pump subsequent posts out one after the other, but I’m having some minor surgery on the
6th, so there may be a lag.
By luysii | Comments (2) | {"url":"http://luysii.wordpress.com/category/linear-algebra-survival-guide-for-quantum-mechanics/","timestamp":"2014-04-19T06:51:45Z","content_type":null,"content_length":"112857","record_id":"<urn:uuid:3d096c4b-43e7-4b6c-afe3-3068419f17bb>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00397-ip-10-147-4-33.ec2.internal.warc.gz"} |
Equilibrium Revenue
October 21st 2007, 07:34 PM #1
Equilibrium Revenue
Please Help. I have a midterm, and this is an example question of what might be on it.
The manager of a CD store has found that if the price of CD is p(x)=80-x/6, then x CDs will be sold. Find an expression for the total revenue from the sale of x CDs (hint: revenue= demand x
price). Use your expression to determine the maximum revenue.
Revenue is price*demand, so the price(x) here is 80-x/6 and the demand is x, so the expression would be x*(80-x/6) = 80x - x^2/6. Max revenue, in terms of microeconomics, is usually at equilibrium
or the midpoint on the demand curve. If that's given, then you just plug it into the expression
Please Help. I have a midterm, and this is an example question of what might be on it.
The manager of a CD store has found that if the price of CD is p(x)=80-x/6, then x CDs will be sold. Find an expression for the total revenue from the sale of x CDs (hint: revenue= demand x
price). Use your expression to determine the maximum revenue.
is the formula: $p(x) = \frac {80 - x}6$? If so, you should use parentheses to indicate that.
Revenue = demand * Price, that is, it is the number of items sold times the price each item is sold for.
thus the revenue function is given by:
$R(x) = x \cdot \frac {80 - x}6 = \frac {40}3x - \frac 16 x^2$
this is a parabola; its maximum occurs at its vertex
the vertex of a parabola, $f(x) = ax^2 + bx + c$ is the point $\left( \frac {-b}{2a}, f \left( \frac {-b}{2a}\right)\right)$
the answer to your last question is the y-coordinate of that formula
can you find it?
(you could also complete the square to get the vertex, which method do you prefer?)
It is not (80-x)/6, it is actually 80-(x/6). Sorry for not adding the parentheses in my problem before. If the problem is actually 80-(x/6), now what do I do to solve it?
do exactly what we said before: multiply through by x, and solve for the vertex of the resulting parabola by whatever method you feel comfortable with
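For what it's worth, you can sanity-check the vertex numbers with a few lines of Python (using exact fractions so nothing is lost to rounding; the numbers below assume the clarified price function p(x) = 80 - x/6):

```python
from fractions import Fraction

# R(x) = x*(80 - x/6) = 80x - (1/6)x^2, i.e. a = -1/6, b = 80, c = 0
a = Fraction(-1, 6)
b = 80

x_star = -b / (2 * a)                 # vertex x-coordinate -> 240
R_max = a * x_star**2 + b * x_star    # maximum revenue -> 9600

print(x_star, R_max)
```

So the store maximizes revenue by selling 240 CDs, for a revenue of 9600 (at a price of 80 - 240/6 = 40 each).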
October 21st 2007, 08:15 PM #5 | {"url":"http://mathhelpforum.com/business-math/21028-equilibrium-revenue.html","timestamp":"2014-04-19T05:24:07Z","content_type":null,"content_length":"43994","record_id":"<urn:uuid:df29eaeb-e4da-4b9d-ae18-f24c84bef936>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00387-ip-10-147-4-33.ec2.internal.warc.gz"} |
Loo-Keng Hua
Born: 12 November 1910 in Jintan, Jiangsu Province, China
Died: 12 June 1985 in Tokyo, Japan
Loo-Keng Hua was one of the leading mathematicians of his time and one of the two most eminent Chinese mathematicians of his generation, S S Chern being the other. He spent most of his working life
in China during some of that country's most turbulent political upheavals. If many Chinese mathematicians nowadays are making distinguished contributions at the frontiers of science and if
mathematics in China enjoys high popularity in public esteem, that is due in large measure to the leadership Hua gave his country, as scholar and teacher, for 50 years.
Hua was born in 1910 in Jintan in the southern Jiangsu Province of China. Jintan is now a flourishing town, with a high school named after Hua and a memorial building celebrating his achievements;
but in 1910 it was little more than a village where Hua's father managed a general store with mixed success. The family was poor throughout Hua's formative years; in addition, he was a frail child
afflicted by a succession of illnesses, culminating in typhoid fever that caused paralysis of his left leg; this impeded his movement quite severely for the rest of his life. Fortunately Hua was
blessed from the start with a cheerful and optimistic disposition, which stood him in good stead then and during the many trials ahead.
Hua's formal education was brief and, on the face of it, hardly a preparation for an academic career - the first degree he would receive was an honorary doctorate from the University of Nancy in
France in 1980; nevertheless, it was of a quality that did help his intellectual development. The Jintan Middle School that opened in 1922 just when he had completed elementary school had a
well-qualified and demanding mathematics teacher who recognized Hua's talent and nurtured it. In addition, Hua learned early on to make up for the lack of books, and later of scientific literature,
by tackling problems directly from first principles, an attitude that he maintained enthusiastically throughout his life and encouraged his students in later years to adopt.
Next, Hua gained admission to the Chinese Vocational College in Shanghai, and there he distinguished himself by winning a national abacus competition; although tuition fees at the college were low,
living costs proved too high for his means and Hua was forced to leave a term before graduating. After failing to find a job in Shanghai, Hua returned home in 1927 to help in his father's store. In
that same year also, Hua married Xiaoguan Wu; the following year a daughter, Shun, was born and their first son, Jundong, arrived in 1931.
By the time Hua returned to Jintan he was already engaged in mathematics and his first publication Some Researches on the Theorem of Sturm, appeared in the December 1929 issue of the Shanghai
periodical Science. In the following year Hua showed in a short note in the same journal that a certain 1926 paper claiming to have solved the quintic was fundamentally flawed. Hua's lucid analysis
caught the eye of a discerning professor at Quing Hua University in Beijing, and in 1931 Hua was invited, despite his lack of formal qualification and not without some reservations on the part of
several faculty members, to join the mathematics department there. He began as a clerk in the library, and then moved to become an assistant in mathematics; by September 1932 he was an instructor and
two years later came promotion to the rank of lecturer. By that time he had published another dozen papers and in some of these one could begin to find intimations of his future interests; thanks to
his natural talent and dedication, Hua was now, at the age of 24, a professional mathematician.
At this time Quing Hua University was the leading Chinese institution of higher education, and its faculty was in the forefront of the endeavour to bring the country's mathematics and science abreast
of knowledge in the West, a formidable task after several hundred years of stagnation. During 1935-36 Hadamard and Norbert Wiener visited the university; Hua eagerly attended the lectures of both and
created a good impression. Wiener visited England soon afterward and spoke of Hua to G H Hardy. In this way Hua received an invitation to come to Cambridge, England, and he arrived in 1936 to spend
two fruitful years there. By now he had published widely on questions within the orbit of Waring's problem (also on other topics in diophantine analysis and function theory) and he was well prepared
to take advantage of the stimulating environment of the Hardy-Littlewood school, then at the zenith of its fame. Hua lived on a $1,250 per annum scholarship awarded by the Culture and Education
Foundation of China; it is interesting to recall that this foundation derived its funds from reparations paid by China to the United States following wars waged in China by the United States and
several other nations in the previous century. The amount of the grant imposed on him a Spartan regime. Hardy assured Hua that he could gain a PhD in two years with ease, but Hua could not afford the
registration fee and declined; of course, he gave quite different reasons for his decision.
During the Cambridge period Hua became friendly with Harold Davenport and Hans Heilbronn, then two young research fellows of Trinity College - one a former student of Littlewood and the other Landau
's last assistant in Göttingen - with whom he shared a deep interest in the Hardy-Littlewood approach to additive problems akin to Waring's. They helped to polish the English in several of Hua's
papers, which now flowed from his pen at a remarkable rate; more than 10 of his papers date from this time, and many of these appeared in due course in the publications of the London Mathematical
About the only easy thing about Waring's problem is its statement: In 1770 Waring asserted without proof (and not in these words) that for each integer k ≥ 2 there exists an integer s = s(k)
depending only on k such that every positive integer N can be expressed in the form
N = x[1]^k + x[2]^k + ... + x[s]^k
where the x[j] (j = 1, 2, ..., s) are non-negative integers. In that same year Lagrange had settled the case k = 2 by showing that s(2) = 4, a best possible result; after that, progress was painfully
slow, and it was not until 1909 that Hilbert solved Waring's problem in its full generality. His argument rested on the deployment of intricate algebraic identities and yielded rather poor admissible
values of s(k). In 1918 Hardy and Ramanujan returned to the case k = 2 in order to determine the number of representations of an integer as the sum of s squares by means of Fourier analysis, an
approach inspired by their famous work on partitions, and they succeeded. This encouraged Hardy and Littlewood in 1920 to apply a similar method for general k, and they devised the so-called circle
method to tackle the general Hilbert-Waring theorem and a host of other additive problems, Goldbach's problem among them. During the next 20 years the machinery of the circle method came to be
regarded about as difficult as anything in the whole of mathematics; even today, after numerous refinements and much progress, the intricacies of the method remain formidable.
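The statement of Waring's problem is easy to play with numerically, even though the proofs are anything but. The sketch below (a brute-force illustration for k = 2 only, entirely unrelated to the circle-method techniques discussed here) confirms Lagrange's s(2) = 4 for small N:

```python
from itertools import combinations_with_replacement

def min_squares(n):
    # Smallest s with n = x1^2 + ... + xs^2 (positive squares), n >= 1.
    squares = [i * i for i in range(1, int(n**0.5) + 1)]
    for s in range(1, 5):
        for combo in combinations_with_replacement(squares, s):
            if sum(combo) == n:
                return s
    return None  # never reached, by Lagrange's theorem

# Every N up to 200 needs at most four squares, and 7 really needs four.
assert all(min_squares(n) <= 4 for n in range(1, 201))
print(min_squares(7))   # 4, since 7 = 4 + 1 + 1 + 1
```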
This is the background against which Hua set to work as a young man, and it is probably fair to say that it is for his contributions in this area that Hua's name will remain best remembered: notably
for his seminal work on the estimation of trigonometric sums, singly or on average.
Hua might well have remained in England longer, but home was never far from his thoughts and the Japanese invasion of China in 1937 caused him much anxiety. He left Cambridge in 1938 to return to his
old university, now as a full professor. However, Quing Hua University was no longer in Beijing; with vast portions of China under Japanese occupation, it had migrated to Kunming, the capital of the
southern province of Yunan, where it combined with several other institutions to form the temporary Associated University of the South West. There Hua and his family remained through the World War II
years, until 1945, in circumstances of poverty, physical privation, and intellectual isolation. Despite these hardships Hua maintained the level of intensity of his Cambridge period and even exceeded
it; by the end of 1945 he had more than 70 publications to his name. During this time he studied Vinogradov's seminal method of estimating trigonometric sums and reformulated it, even in sharper
form, in what is now known universally as Vinogradov's mean value theorem. This famous result is central to improved versions of the Hilbert-Waring theorem, and has important applications to the
study of the Riemann zeta function. Hua wrote up this work in a booklet that was accepted for publication in Russia as early as 1940, but owing to the war, did not appear (in expanded form) until
1947 as a monograph of the Steklov Institute.
Hua spent three months in Russia in the spring of 1946 at Vinogradov's invitation. Mathematical interaction apart, he was impressed by the organization of scientific activity there, and this
experience influenced him when later he reached a position of authority in the new China. In the years ahead, even though Hua's scientific activities branched out in other directions, Hua was always
ready to return to Waring's problem, to number theory in general and especially to questions involving exponential sums; thus as late as 1959 he published an important monograph on Exponential Sums
and Their Applications in Number Theory for the Enzyklopädie der Matematischen Wissenschaften. His instinct for what was important and his marvellous command of technique make his papers on number
theory even now virtually an index to the major activities in that subject during the first half of the twentieth century.
In the closing years of the Kunming period Hua turned his interests to algebra and to analysis, as much as anything for the benefit of his students in the first instance, and soon began to make
original contributions in these subjects too. Thus Hua became interested in matrix algebra and wrote several substantial papers on the geometry of matrices. He had been invited to visit the Institute
for Advanced Study in Princeton, but because C L Siegel was working there along somewhat similar lines, Hua declined, at first in order to develop his ideas independently. In September 1946, shortly
after returning from Russia, Hua did depart for Princeton, bringing with him projects not only in matrix theory but also in functions of several complex variables and in group theory. At this time
civil war was raging in China and it was not easy to travel; therefore, the Chinese authorities assigned Hua the rank of general in his passport for the "convenience of travel."
According to his biographer, Hua's "most significant and rewarding research work" during his stay in the United States was on the topic of skew fields, that is, on (non-commutative) division rings,
of which the quaternions are a classic example.
There was much else, of course, to distinguish this last major creative period of his life. Hua wrote several papers with H S Vandiver on the solution of equations in finite fields and with I Reiner
on automorphisms of classical groups. Much of his algebraic work later provided the basis for the monograph Classical Groups by Wan Zhe Xian and Hua (published by the Shanghai Scientific Press in
Chinese in 1963).
On the personal side, in the spring of 1947 Hua underwent an operation at the Johns Hopkins University on his lame leg that much improved his gait thereafter, to his and his family's delight. Also in
1947 their daughter Su was born; two more sons had arrived earlier, Ling and Guang, the latter in 1945 and one more daughter, Mi, was born a little later. In the spring of 1948 Hua accepted
appointment as a full professor at the University of Illinois in Urbana-Champaign. There he directed the thesis of R Ayoub, later a professor at Pennsylvania State University; continued his work with
I Reiner; and influenced the thinking of several young research workers, L Schoenfeld and J Mitchell among them. His stay in Illinois was all too brief, exciting developments were taking place in
China, and Hua watched them eagerly, wanting to be part of the new epoch. Although he had brought his wife and three younger children to Urbana and they had settled in quite well, the urge to return
was too great; on March 16, 1950, he was back in Beijing at his alma mater, Quing Hua University, ready to add his contribution to the brave new world. He was then at the peak of his mathematical
powers and, as he wrote to me many years later, the 1940s had been to him in retrospect the golden years of his life. Despite the trials that he would face, he did not at any subsequent time regret
his decision to return.
Back in China, Hua threw himself into educational reform and the organization of mathematical activity at the graduate level, in the schools, and among workers in the burgeoning industry. In July
1952 the Mathematical Institute of the Academia Sinica came into being, with Hua as its first director. The following year he was one of a 26-member delegation from the Academia Sinica to visit the
Soviet Union in order to establish links with Russian science. At this time Hua entertained doubts whether the Communist Party at home trusted him, and it came as an agreeable surprise to him to
learn in Moscow that the Chinese government had agreed to a proposal by the Soviet government to award Hua a Stalin Prize. Following Stalin's death the prize was discontinued, and Hua missed out; in
view of later developments, he told me, he had a double reason to be satisfied!
Despite his many teaching and administrative duties, Hua remained active in research and continued to write, not only on topics that had engaged him before but also in areas that were new to him or
had been only lightly touched on before. In 1956 his voluminous text, Introduction to Number Theory, appeared. (The preface to the 1975 Chinese edition was excised by government order because Hua was
out of favour during much of the Cultural Revolution); later this was published by Springer in English translation and is still in print. Harmonic Analysis of Functions of Several Complex Variables
in the Classical Domains came out in 1958 and was translated into Russian in the same year, followed by an English translation by the American Mathematical Society in 1963.
In 1958 he suffered a rude awakening from utopian dreams with the so-called Great Leap Forward, when a Mao-inspired, savage assault on intellectuals swept the country, implemented with enthusiasm by
a compliant bureaucracy inspired by Orwellian slogans like:-
... the lowliest are the smartest, the highest the most stupid.
Despite his eminence and some protection in high places, Hua had to suffer harassment, public abuse, and constant surveillance. Nevertheless, during this troubled period Hua developed, with Wang
Yuan, a broad interest in linear programming, operations research, and multidimensional numerical integration. In connection with the last of these, the study of the Monte Carlo method and the role
of uniform distribution led them to invent an alternative deterministic method based on ideas from algebraic number theory. Their theory was set out in Applications of Number Theory to Numerical
Analysis, which was published much later, in 1978, and by Springer in English translation in 1981. The newfound interest in applicable mathematics took him in the 1960s, accompanied by a team of
assistants, all over China to show workers of all kinds how to apply their reasoning faculty to the solution of shop-floor and everyday problems. Whether in ad hoc problem-solving sessions in
factories or open-air teachings, he touched his audiences with the spirit of mathematics to such an extent that he became a national hero and even earned an unsolicited letter of commendation from
Mao, this last a valuable protection in uncertain times. Hua had a commanding presence, a genial personality, and a wonderful way of putting things simply, and the impact of his travels spread his
fame and the popularity of mathematics across the land. When much later he travelled abroad, wherever he stayed Chinese communities of all political persuasions flocked to meet him and do him honour;
in 1984 when he organized a conference on functions of several complex variables in Hangzhou, colleagues from the West were astonished by the scale of the publicity accorded it by the Chinese media.
But all that was in the future. In 1966 Mao set in motion the next national calamity, which came to be known as the Cultural Revolution and would last 10 years. A pronouncement of Mao dated as early
as June 26, 1965, sent a dire signal of things to come to the intellectuals:-
The more you read, the more stupid you become.
Hua spent many of these years under virtual house arrest. He attributed his survival to the personal protection of Chou En-lai. Even so, he was exposed to harassing interrogations, some of his
manuscripts (on mathematical economics) were confiscated and are now irretrievably lost, and attempts were made to extract from his associates and former students damaging allegations against him.
(In 1978 the Chinese ambassador to the United Kingdom described one such occasion to me; Chen Jing-run, then probably the best known Chinese mathematician of the next generation, was made to stand in
a public place for several hours, surrounded by a mob, and exhorted to bear witness against Hua. Chen, present at this conversation, chimed in to say that, actually, he had quite enjoyed the
occasion, since no student could trouble him with silly questions and he had had time, uninterrupted, to think about mathematics!) It is surely no accident that the flow of Hua's publications came to
an untimely end in 1965. He continued to work, of course. There are several joint papers on numerical analysis (with Wang Yuan) and on optimisation (with Ke Xue Tong Bao) in the 1970s, but these are
probably based on work done earlier; there are also expository articles and texts derived from the vast teaching and consulting experience he accumulated over the years. As he would reminisce sadly
in a 1991 article:-
Upon entering [my] sixtieth year ... almost all energy and spirit were taken from me.
With the end of the Cultural Revolution in 1976 Hua entered upon the last period of his life. Honour was restored to him at home, and he became a vice-president of Academia Sinica, a member of the
People's Congress and science advisor to his government. In addition, Chinese Television (CCTV) produced a mini-series telling the story of Hua's life, which has been shown at least twice since then.
In 1980 he became a cultural ambassador of his country charged with re-establishing links with Western academics, and during the next five years he travelled extensively in Europe, the United States,
and Japan. In 1979 he was a visiting research fellow of the then Science Research Council of the United Kingdom at the University of Birmingham and during 1983-84 he was Sherman Fairchild
Distinguished Scholar at the California Institute of Technology. For much of this time he was tired and in poor health, but a characteristic zest for life and a quenchless curiosity never deserted
him; to a packed audience in a seminar in Urbana in the spring of 1984 he spoke about mathematical economics. One felt that he was driven to make up for all those lost years. In his last letter to
me, dated 21 May 1985, he reported that unfortunately most of his time now was devoted to:-
... non-mathematical activities, which are necessary for my country and my people.
He died of a heart attack at the end of a lecture he gave in Tokyo on 12 June 1985.
Hua received honorary doctorates from the University of Nancy (1980), the Chinese University of Hong-Kong (1983), and the University of Illinois (1984). He was elected a foreign associate of the
National Academy of Sciences (1982) and a member of the Deutsche Akademie der Naturforscher Leopoldina (1983), Academy of the Third World (1983), and the Bavarian Academy of Sciences (1985).
Professor Wang Yuan has written a fine biography [1] of Hua, and I am indebted to it for some of the information I have used. I have also drawn on the obituary notice I wrote for Acta Arithmetica [6].
Article by: Heini Halberstam
JOC/EFR © June 2004 School of Mathematics and Statistics
Copyright information University of St Andrews, Scotland
The URL of this page is: | {"url":"http://www-gap.dcs.st-and.ac.uk/~history/Biographies/Hua.html","timestamp":"2014-04-19T07:00:40Z","content_type":null,"content_length":"34119","record_id":"<urn:uuid:8fe3b79a-f94c-4981-bec5-6464c23aaaae>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00037-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can You Raed Tihs?
from the you-have-for-years dept.
An aoynmnuos raeedr sumbtis:
"An interesting tidbit from Bisso's blog site: Scrambled words are legible as long as first and last letters are in place. Word of mouth has spread to other blogs, and articles as well. From the
languagehat site: 'Aoccdrnig to a rscheearch at an Elingsh uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht frist and lsat ltteer is at the
rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae we do not raed ervey lteter by it slef but the wrod as a wlohe. ceehiro.' Jamie Zawinski has also
written a perl script to convert normal text into text where letters excluding the first and last are scrambled."
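Zawinski's script was written in Perl; a rough Python equivalent of the idea (an assumed reimplementation for illustration, not his actual code) only has to keep the first and last letters fixed and shuffle the rest:

```python
import random
import re

def scramble_word(word):
    # Words of three letters or fewer have no interior to shuffle.
    if len(word) <= 3:
        return word
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def scramble_text(text):
    # Scramble only alphabetic runs; punctuation and spacing pass through.
    return re.sub(r"[A-Za-z]+", lambda m: scramble_word(m.group(0)), text)

print(scramble_text("According to a researcher at an English university"))
```

Each output word is an anagram of the original with its first and last letters pinned, which is exactly the condition the quoted research claims is enough for legibility.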
• Here you go (Score:5, Informative)
by JM Apocalypse (630055) * on Monday September 15, 2003 @07:13PM (#6969205)
No need to open the terminal ... Jeff comes to the rescue!
http://jeff.zoplionah.com/scramble.php [zoplionah.com]
• Re:Yes, a cat's got my tongue, OK? (Score:4, Informative)
by Anonymous Coward on Monday September 15, 2003 @07:18PM (#6969280)
Actually, does this work well with letter pairs like, "th ch wh sh qu?" I forget what those are called.
Digraphs? [reference.com]
• Re:Hmmm (Score:2, Informative)
by insanecarbonbasedlif (623558) <insanecarbonbase ... nOSPAM.gmail.com> on Monday September 15, 2003 @07:32PM (#6969470) Homepage Journal
The problem comes from words that have the same ending letters, but different middle letters: Like "car" and "cur", or (more confusingly) "from", "form", "firm", "film", "farm", etc. Context
would give us some cues, but it would definitely require more thought to process.
• by adamsan (606899) on Monday September 15, 2003 @07:46PM (#6969594)
"They're called dipthongs (sic)"
No they ain't, diphthongs are pairs of vowels that merge together. Pairs of consonants are called err..consonant pairs.
• Re:Does this work for non native speakers? (Score:3, Informative)
by gdchinacat (186298) on Monday September 15, 2003 @07:53PM (#6969656)
I think it is actually cheerio.
WordNet (r) 1.7 [wn]
n : a farewell remark; "they said their good-byes" [syn: adieu,
adios, arrivederci, auf wiedersehen, au revoir,
bye, bye-bye, good-by, goodby, good-bye, goodbye,
good day, sayonara, so long]
• Re:Yes, a cat's got my tongue, OK? (Score:5, Informative)
by edwdig (47888) on Monday September 15, 2003 @08:01PM (#6969724)
By randomly scrambling the letters, you're eliminating a lot of the redundancy.
Huffman compression would be unaffected though, as it works on a per character basis.
• Compression worse... (Score:5, Informative)
by douglips (513461) on Monday September 15, 2003 @08:07PM (#6969776) Homepage Journal
That's easy. Let's say you have a text file that consists of 14,000 instances of the word "begat". This compresses to a file that simply indicates "repeat 14,000 'begat '".
Now, after you scrmable it, it's got equal quantities of begat, beagt, baget, baegt, bgeat, and bgaet. It's not so easy to compress any more.
Essentially, you're increasing the entropy of the file by a fair amount. Truly random data is not so easy to compress as english, because english has lots of order. Added disorder or entropy
means compression is just not as easy.
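The claim is easy to check empirically. A sketch using java.util.zip.Deflater with the numbers from the example above (the six variants are the permutations of "begat" that keep the first and last letter fixed; the seed is arbitrary):

```java
import java.util.Random;
import java.util.zip.Deflater;

public class ScrambleCompression {
    // Deflate a string and report the compressed size in bytes.
    static int compressedSize(String s) {
        Deflater d = new Deflater();
        d.setInput(s.getBytes());
        d.finish();
        byte[] buf = new byte[4096];
        int total = 0;
        while (!d.finished()) total += d.deflate(buf);
        d.end();
        return total;
    }

    public static void main(String[] args) {
        // 14,000 copies of "begat " versus a random mix of its
        // first/last-letter-preserving permutations.
        String[] variants = {"begat", "beagt", "baget", "baegt", "bgeat", "bgaet"};
        Random rng = new Random(42);
        StringBuilder ordered = new StringBuilder(), scrambled = new StringBuilder();
        for (int i = 0; i < 14000; i++) {
            ordered.append("begat ");
            scrambled.append(variants[rng.nextInt(6)]).append(' ');
        }
        System.out.println("ordered:   " + compressedSize(ordered.toString()) + " bytes");
        System.out.println("scrambled: " + compressedSize(scrambled.toString()) + " bytes");
    }
}
```

On a typical run the ordered string deflates to a few hundred bytes, while the scrambled one needs several kilobytes — the entropy added by the random choice of variant (about log2(6) bits per word) cannot be compressed away.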
• Re:Yes, a cat's got my tongue, OK? (Score:1, Informative)
by T-Ranger (10520) <jeffw@chebuRASPcto.ns.ca minus berry> on Monday September 15, 2003 @08:08PM (#6969790) Homepage
Because english words are made up of some common components. 'i' always comes before 'e' in 'ie' pairs, for example. Compression is about rewriting common strings (of bits, not just strings of
characters) into shorter strings - uncommon strings may end up being longer post compression. If you're effectively randomizing most of the text then there won't be any common strings. Or at least
less than what occurs in natural, ordered, prose. And there won't ever be whole words you can compress down.
• by stienman (51024) <adavis.ubasics@com> on Monday September 15, 2003 @08:10PM (#6969805) Homepage Journal
Yes, and it's quite simple. The script you used scrambles words randomly - again agian aagin aaign aigan aiagn - become separate words to the compressor. Instead of changing every occurrence of
the word again into a short binary string, it has to treat each iteration separately with its own binary string (simplified - compression is more complex, but the basic idea is the same)
In other words, the scramble.pl adds machine randomness to a rather organized and non-random set of data. Humans can still parse it (meaning that the data is very redundant) but the machine
cannot compress this 'more random' data.
• by Demodian (658895) on Monday September 15, 2003 @08:43PM (#6970057)
diphthongs [m-w.com] and triphthongs [m-w.com] are the vowel-only subsets of digraphs [m-w.com] and trigraphs [m-w.com].
• Re:Compression worse... (Score:2, Informative)
by InadequateCamel (515839) on Monday September 15, 2003 @09:10PM (#6970259)
>Essentially, you're increasing the entropy of the file by a fair amount.
Pardon me for being picky and off-topic, but this is a little peeve of mine...
Definition: Entropy
n 1: (thermodynamics) a measure of the amount of energy in a system that is available for doing work; entropy increases as matter and energy in the universe degrade to an ultimate state of inert
uniformity [ant: ectropy]
"Disorder" is a terrible way of describing entropy, and to use the word entropy to describe disorder is even worse. Having said that, in computing the word has long since been hijacked to mean
disorder (Shannon's formula?), so I must admit that your use is a little more valid than "My bedroom has a high degree of entropy".
Just my 2 cents! (sorry)
• Re:Hmmm (Score:2, Informative)
by ikkyikkyikkypikang (214791) on Monday September 15, 2003 @09:20PM (#6970346)
Hree's a cool ltitle scprit taht I use to sned emial to my mboil phnoe: email2sms [adamspiers.org]
• This might help (Score:3, Informative)
by pbox (146337) on Monday September 15, 2003 @09:23PM (#6970365) Journal
Because english words are made up of some common components. 'i' always comes before 'e' in 'ie' pairs, for example.
My neighbor weighed your argument. He used a beige scale, and decided it was probably the heinous act of a foreigner to make such a statement. And you're weird. So rein in yourself, and remove
the veil of ignorance, ye feisty cad!
Thou should forfeit karma, but that is neither here nor there.
• Re:Compression worse... (Score:2, Informative)
by HalB (127906) on Monday September 15, 2003 @10:28PM (#6971090)
Actually, entropy is the energy NOT available to do work...
Even though the original poster did misuse entropy, even in the information theory context... From www.dictionary.com:
2. A measure of the disorder or randomness in a closed system.
and Webster.com:
1 : a measure of the unavailable energy in a closed thermodynamic system that is also usually considered to be a measure of the system's disorder
Get over it. 8')
• Re:So in other words... (Score:3, Informative)
by Raffaello (230287) on Monday September 15, 2003 @10:49PM (#6971273)
No, that would be lisp.
• Re:Compression worse... (Score:3, Informative)
by CarlDenny (415322) on Tuesday September 16, 2003 @02:35AM (#6972699)
The first half dozen occurrences of the definition you quoted also included:
2: (communication theory) a numerical measure of the uncertainty of an outcome; "the signal contained thousands of bits of information"
If it's a pet peeve of yours, perhaps you should make a study of statistical mechanics and information theory, where the concept and term are more clearly and quantitatively defined. With a
slightly deeper understanding of statistical mechanics, you will find that the term is more fundamental than you thought, and that they are mathematically identical, applied to two separate
fields. With this understanding, your objection is similar to saying that length is defined by the distance between two ends of an object, and that talking about the length of a file, or a length
of time, is completely wrong.
While the term originated in thermodynamics, it was given a formal definition (even within the realm of physics) by Boltzmann with the development of statistical mechanics. Statistical mechanics
allowed Boltzmann to formulate and discuss entropy well in advance of energy or temperature. When they do enter the picture, thermodynamic (dQ/T) entropy is identical to the statistical
definition, with temperature defined by 1/T = d(entropy)/d(energy) where those ds are partial derivatives. It's actually a fascinating topic, and a beautiful mathematical insight.
The description and definition used by Boltzmann for statistical mechanics are exactly the same as those used in information theory:
Entropy = Sum (-p(state)*ln(p(state)))
(over all possible states)
Or, with all states equally likely (the equipartition principle):
Entropy = ln( # of possible states)
Which is, of course, why Shannon used the term and the definition.
Sorry to contradict you, but misunderstandings and misuse of the term entropy are also pet peeves of mine, and this is not one of them. ;)
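Since both fields use the same formula, a few lines of code can serve as the definition. A sketch (natural log, following the statistical-mechanics convention above):

```java
public class EntropyDemo {
    // Shannon/Boltzmann entropy: sum over states of -p * ln(p).
    static double entropy(double[] p) {
        double h = 0.0;
        for (double pi : p) {
            if (pi > 0) h -= pi * Math.log(pi);   // 0 * ln(0) treated as 0
        }
        return h;
    }

    public static void main(String[] args) {
        // Four equally likely states: entropy should equal ln(4).
        double[] uniform = {0.25, 0.25, 0.25, 0.25};
        System.out.println(entropy(uniform));
        // A biased distribution over the same states has lower entropy.
        double[] biased = {0.97, 0.01, 0.01, 0.01};
        System.out.println(entropy(biased));
    }
}
```

The uniform case recovers the equipartition form ln(4); any biased distribution scores lower, since the uniform distribution maximizes entropy.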
• Written by Mark Twain? (Score:2, Informative)
by Anonymous Coward on Tuesday September 16, 2003 @06:34AM (#6973526)
Some people have mentioned that they saw this years ago. Actually, it is usually said that Mark Twain originally wrote this!
http://www.i18nguy.com/twain.html
(Too lazy for HTML)
• Re:Yes, a cat's got my tongue, OK? (Score:1, Informative)
by Anonymous Coward on Tuesday September 16, 2003 @05:42PM (#6979823)
goaste.cx, micorsoft.com, ssdlhoat.org
Actually, it looks like there's more to it than ONLY getting the first and last letter. The first two are easily decipherable, but the last is insanity. It's easily the hardest to make out, which
is bizarre considering where we're reading it...
"slahsodt" is much easier, while ssdhalot is next to impossible for non anagram-lovers.
• A neuroscientist writes... (Score:1, Informative)
by Anonymous Coward on Wednesday September 24, 2003 @07:49AM (#7042290)
There have been various forms of this email doing the rounds - including one that mentioned Cmabrigde Uinervtisy (which is where I work doing research on how the brain processes written and
spoken language).
Since I thought I ought to know about this, I've written a page of notes on the science behind this meme, including a list of the factors that my colleagues and I think might be relevant for
reading this kind of transposed text. You can read more here:
http://www.mrc-cbu.cam.ac.uk/~matt.davis/Cmabrigde/
How to Estimate the Final Maturity Value on Savings Bonds
Original post by Jonathan Langsdorf of Demand Media
A Series EE U.S. savings bond will earn interest for up to 30 years, at which point the bond reaches final maturity. Since May 2005, EE bonds have been issued earning a fixed rate of interest, so a
final value can be fairly accurately determined with a couple of easy calculations. Older series EE bonds earn variable rates, which can change twice a year. The projected value of one of these bonds
will be more of a ballpark figure.
Newer Series EE Bonds - Issued After April 2005
Step 1
Look up the interest rate for your savings bond if you do not already know the rate. Rates for new bonds are set every May 1st and November 1st by the U.S. Treasury. The rate for any savings bond can
be found in the Redemption Tables available on the TreasuryDirect.gov website. Find the link for the tables under the Tools tab in the blue menu bar across every page of the website.
Step 2
Determine the month when your bond will double in value. Current issue series EE bonds are guaranteed to double in no longer than 20 years. At the time of publication, no savings bond has been issued
earning a high enough interest rate to double sooner than 20 years. For example, a $1,000 savings bond with an initial cost of $500 issued in July 2006 will be worth the $1,000 in July 2026.
Step 3
Multiply the interest rate times 11 and add one to obtain a multiplication factor to calculate the bond growth from year 20 to year 30. For example, if the bond is earning 3.2 percent, the
multiplication is 0.032 times 11 equals 0.352 plus 1 equals 1.352.
Step 4
Multiply the factor times the bond's 20-year value to get an estimated 30-year value. The example $1,000 bond times the 1.352 gives an estimated maturity value of $1,352.
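Steps 3 and 4 reduce to one line of arithmetic. A sketch (this follows the article's rough approximation, not an official Treasury formula):

```java
public class BondEstimate {
    // Estimated 30-year value of a post-April-2005 EE bond: the bond is worth
    // its face value at year 20, then grows at the fixed rate for 10 more years.
    // The article approximates that growth with the factor (rate * 11 + 1).
    static double estimatedMaturityValue(double faceValue, double rate) {
        double factor = rate * 11 + 1;
        return faceValue * factor;
    }

    public static void main(String[] args) {
        // $1,000 bond earning 3.2 percent: factor 1.352, estimate $1,352.
        System.out.println(estimatedMaturityValue(1000, 0.032));
    }
}
```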
Estimating Older Savings Bonds
Step 1
Calculate the date the bond will at least double in value. Guarantee value dates are based on the month of issue. Here are the guarantee times for bonds issued back to March 1993: March 1993 through
April 1995: 18 years. May 1995 through May 2003: 17 years. June 2003 to the present: 20 years. For example, a bond issued in June 2000 will double its investment value -- reach the denomination value
-- in June 2017.
Step 2
Look up the current rate of interest being earned by your savings bond. At the time of publication, savings bonds issued from 1993 through April 2005 were earning between 3 and 4 percent.
Step 3
Multiply your bond's guarantee / face value times the appropriate factor to obtain an estimated 30-year value. If the interest rate is close to 3 percent use a factor of 1.5. If the rate is closer to
3.5 percent, use 1.6, and if the rate is near 4 percent, use 1.7. Using the June 2000 bond with a face value of $1,000, the current rate is 3.4 percent. Multiply $1,000 times the 1.6 factor and the
bond will be worth approximately $1,600 at the 30 year point.
Tips & Warnings
• At the time of publication, interest rates on savings bonds were at record low levels. If rates increase in the future, the values of savings bonds at maturity may be slightly higher than the
calculated estimates.
• Paper series EE savings bonds are purchased for one-half of the face value. For example, a $1,000 bond initially costs $500.
About the Author
Jonathan Langsdorf has been writing financial, investment and trading articles and blogs since 2007. His work has appeared online at Seeking Alpha, Marketwatch.com and various other websites.
Langsdorf has a bachelor's degree in mathematics from the U.S. Air Force Academy.
Natural statements independent from true $\Pi^0_2$ sentences
I am looking for sentences in the language of first order arithmetic ($0,1,+,\cdot,\leq$) which are independent from $\Pi^0_2$ consequences of true arithmetic $\Pi^0_2\text{-}\mathsf{Th}(\mathbb{N})
$. I want natural statements, e.g. statements that have been studied in number theory or combinatorics for their own sake. The motivation comes from looking for true statements that are not provable
in $\mathsf{I}\Delta_0(L)$ where $L$ contains arbitrary fast growing (computable) functions.
lo.logic computability-theory independence-results forcing
I assume the $\Pi^0_2$ in the body of your question is what you intended and the $\Sigma^0_2$ in the title isn't. But just in case you're actually interested in the title question, I think the
Paris-Harrington theorem answers that. The point is that true $\Sigma^0_2$ sentences are consequences of true $\Pi^0_2$ ones. – Andreas Blass Nov 18 '11 at 0:29
@Andreas, yes, I fixed the title, thanks. – Kaveh Nov 18 '11 at 1:39
And if comments were editable, the $\Pi^0_2$ at the end of my previous comment would become $\Pi^0_1$. – Andreas Blass Nov 18 '11 at 2:51
Depending on how strict your definition of "natural" is, even Paris-Harrington might not be considered "natural." The condition of having as many elements as the least element was not "studied in
combinatorics for its own sake." – Timothy Chow Nov 21 '11 at 23:50
2 Answers
I passed this question on to Harvey Friedman, who provided the following information. Friedman has shown that the following statement is equivalent to the 2-consistency of PA:
For every recursive function $f:{\mathbb N}^k \to {\mathbb N}^k$, there exists $n_1 < \cdots < n_{k+1}$ such that $f(n_1,\ldots,n_k) \le f(n_2, \ldots, n_{k+1})$
Friedman also says that there are versions of Paris-Harrington and Kruskal's tree theorem that work. For example, "Every infinite recursive sequence of finite trees has a tree that is inf-preserving-embeddable into a later tree" is equivalent to the 2-consistency of $\Pi^1_2$ bar induction.
Friedman refers to the introduction of his forthcoming book Boolean Relation Theory and Incompleteness (downloadable from his website) for more information.
Thanks a lot for the example. I was guessing that Harvey may have an example. I guess that it is unlikely that I would get an answer which is more natural than this so I accept
it. – Kaveh Nov 22 '11 at 0:44
If computability counts, Turing famously showed that if M is a Turing machine equipped with an oracle for the regular halting problem, then M's own halting problem is undecidable by M. And
if M2 is a machine with an oracle for M, then M2 can't decide its own halting problem, and so on. If I'm not mistaken, that can be turned into independent statements at every level of the
arithmetic hierarchy. Having access to the true $\Pi_2^0$ sentences amounts to having M2. It doesn't help you with M3, etc.
Thanks, but your answer is essentially saying that the arithmetic hierarchy doesn't collapse, which I know, that is not what I want. Statements from logic/computability like Soundness,
Halting, ... are not what I am looking for. – Kaveh Nov 20 '11 at 21:28
Koch curve
A Koch curve is a fractal generated by a replacement rule. This rule is, at each step, to replace the middle $1/3$ of each line segment with two sides of an equilateral triangle having sides of length equal to the replaced segment. Two applications of this rule on a single line segment give the first two iterations of the curve.
To generate the Koch curve, the rule is applied indefinitely, with a starting line segment. Note that, if the length of the initial line segment is $l$, the length $L_{K}$ of the Koch curve at the
$n$th step will be

$\displaystyle L_{K}=l\left(\frac{4}{3}\right)^{n},$

since each step replaces every segment with four segments one-third as long.
This quantity increases without bound; hence the Koch curve has infinite length. However, the curve still bounds a finite area. We can prove this by noting that in each step, we add an amount of area
equal to the area of all the equilateral triangles we have just created. We can bound the area of each triangle of side length $s$ by $s^{2}$ (the square containing the triangle.) Hence, at step $n$,
the area $A_{K}$ “under” the Koch curve (assuming $l=1$) is
$\displaystyle A_{K}<\left(\frac{1}{3}\right)^{2}+4\left(\frac{1}{9}\right)^{2}+16\left(\frac{1}{27}\right)^{2}+\cdots=\sum_{i=1}^{\infty}4^{i-1}\left(\frac{1}{3^{i}}\right)^{2}=\frac{1}{9}\cdot\frac{1}{1-4/9}=\frac{1}{5},$

since step $i$ creates one triangle for each of the $4^{i-1}$ segments then present, each of side length $1/3^{i}$. So the area is finite.
A Koch snowflake is the figure generated by applying the Koch replacement rule to an equilateral triangle indefinitely.
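The replacement rule translates directly into code. A sketch (points as coordinate pairs; depth and endpoints are arbitrary) that builds the polyline and checks the length — each step turns every segment into four segments one-third as long, so a unit segment has total length $(4/3)^{n}$ after $n$ steps:

```java
import java.util.ArrayList;
import java.util.List;

public class KochCurve {
    // Recurse the replacement rule on segment (ax,ay)-(bx,by) `depth` times;
    // returns the polyline's points in order.
    static List<double[]> koch(double ax, double ay, double bx, double by, int depth) {
        List<double[]> pts = new ArrayList<>();
        build(ax, ay, bx, by, depth, pts);
        pts.add(new double[]{bx, by});
        return pts;
    }

    private static void build(double ax, double ay, double bx, double by,
                              int depth, List<double[]> pts) {
        if (depth == 0) {
            pts.add(new double[]{ax, ay});
            return;
        }
        double dx = (bx - ax) / 3, dy = (by - ay) / 3;
        double p1x = ax + dx, p1y = ay + dy;          // end of first third
        double p3x = ax + 2 * dx, p3y = ay + 2 * dy;  // start of last third
        // Apex of the equilateral triangle: rotate the middle third 60 degrees about p1.
        double c = Math.cos(Math.PI / 3), s = Math.sin(Math.PI / 3);
        double p2x = p1x + dx * c - dy * s, p2y = p1y + dx * s + dy * c;
        build(ax, ay, p1x, p1y, depth - 1, pts);
        build(p1x, p1y, p2x, p2y, depth - 1, pts);
        build(p2x, p2y, p3x, p3y, depth - 1, pts);
        build(p3x, p3y, bx, by, depth - 1, pts);
    }

    // Total length of the polyline.
    static double length(List<double[]> pts) {
        double total = 0;
        for (int i = 1; i < pts.size(); i++) {
            total += Math.hypot(pts.get(i)[0] - pts.get(i - 1)[0],
                                pts.get(i)[1] - pts.get(i - 1)[1]);
        }
        return total;
    }

    public static void main(String[] args) {
        // Length after 4 steps on a unit segment should be (4/3)^4.
        System.out.println(length(koch(0, 0, 1, 0, 4)) + " vs " + Math.pow(4.0 / 3.0, 4));
    }
}
```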
Multisource Bayesian sequential binary hypothesis testing problem
Dayanık, Savaş and Sezer, Semih Onur (2012) Multisource Bayesian sequential binary hypothesis testing problem. Annals of Operations Research, 201 (1). pp. 99-130. ISSN 0254-5330 (Print) 1572-9338
This is the latest version of this item.
Full text not available from this repository.
Official URL: http://dx.doi.org/10.1007/s10479-012-1217-z
We consider the problem of testing two simple hypotheses about unknown local characteristics of several independent Brownian motions and compound Poisson processes. All of the processes may be
observed simultaneously as long as desired before a fi nal choice between hypotheses is made. The objective is to find a decision rule that identifi es the correct hypothesis and strikes the optimal
balance between the expected costs of sampling and choosing the wrong hypothesis. Previous work on Bayesian sequential hypothesis testing in continuous time provides a solution when the
characteristics of these processes are tested separately. However, the decision of an observer can improve greatly if multiple information sources are available both in the form of continuously
changing signals (Brownian motions) and marked count data (compound Poisson processes). In this paper, we combine and extend those previous efforts by considering the problem in its multisource
setting. We identify a Bayes optimal rule by solving an optimal stopping problem for the likelihood ratio process. Here, the likelihood ratio process is a jump-diffusion, and the solution of the
optimal stopping problem admits a two-sided stopping region. Therefore, instead of using the variational arguments (and smooth-fit principles) directly, we solve the problem by patching the solutions
of a sequence of optimal stopping problems for the pure diffusion part of the likelihood ratio process. We also provide a numerical algorithm and illustrate it on several examples.
Inverse of a complex function
October 29th 2012, 05:07 AM #1
Hi, I have been set the following problem, where z is a complex number:
What is the domain of the function f(z) = (3z+1)/(z+i) ?
Prove that f, defined in the domain of f, has an inverse function (f-1),
i.e. check all necessary properties for the existence of an inverse function.
Determine f-1 and its domain.
I have managed to find the domain and image of the function, and therefore the domain of f-1,
but the problem I'm having is showing that the function is 1-1 (injective), in order to show that it has an inverse.
Any help would be greatly appreciated!
Re: Inverse of a complex function
Straight algebra and the definition works:
$\text{If }z_1, z_2 \in \mathbb{C} \backslash \{-i\}, \text{ and } f(z_1) = f(z_2), \text{ then }$
$\frac{3z_1 + 1}{z_1 + i} = \frac{3z_2 + 1}{z_2 + i}, \text{ so }$
$(3z_1 + 1)(z_2 + i) = (3z_2 + 1)(z_1 + i), \text{ so }$
$3z_1z_2 + z_2 +3iz_1 + i = 3z_1z_2 + z_1 +3iz_2 + i, \text{ so }$
$z_2 +3iz_1 = z_1 +3iz_2, \text{ so }$
$z_2-z_1 = 3i(z_2-z_1), \text{ so }$
$0 = (-1+3i)(z_2-z_1), \text{ so }$
$0 = z_2-z_1, \text{ so }$
$z_1 = z_2. \text{ Thus f is injective on its domain.}$
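To finish the exercise: solving $w=(3z+1)/(z+i)$ for $z$ gives $z=(1-iw)/(w-3)$, so $f^{-1}(w)=(1-iw)/(w-3)$ with domain $\mathbb{C}\setminus\{3\}$ (the value $w=3$ is never attained, since $3z+1=3(z+i)$ has no solution). A numeric spot-check — Java has no built-in complex type, so a minimal one is sketched as {re, im} pairs:

```java
public class InverseCheck {
    // Minimal complex arithmetic on {re, im} pairs.
    static double[] add(double[] u, double[] v) {
        return new double[]{u[0] + v[0], u[1] + v[1]};
    }
    static double[] mul(double[] u, double[] v) {
        return new double[]{u[0] * v[0] - u[1] * v[1], u[0] * v[1] + u[1] * v[0]};
    }
    static double[] div(double[] u, double[] v) {      // u * conj(v) / |v|^2
        double d = v[0] * v[0] + v[1] * v[1];
        return new double[]{(u[0] * v[0] + u[1] * v[1]) / d,
                            (u[1] * v[0] - u[0] * v[1]) / d};
    }

    // f(z) = (3z + 1) / (z + i)
    static double[] f(double[] z) {
        return div(add(mul(new double[]{3, 0}, z), new double[]{1, 0}),
                   add(z, new double[]{0, 1}));
    }

    // Candidate inverse: z = (1 - i*w) / (w - 3)
    static double[] fInv(double[] w) {
        return div(add(new double[]{1, 0}, mul(new double[]{0, -1}, w)),
                   add(w, new double[]{-3, 0}));
    }

    public static void main(String[] args) {
        double[] w = {1, 2};                       // arbitrary point with w != 3
        double[] back = f(fInv(w));
        System.out.println(back[0] + " + " + back[1] + "i");   // should be ~1 + 2i
    }
}
```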
Re: Inverse of a complex function
This message looks very similar to the coursework I set my Complex Analysis class. Paul please come see me in my office when I return. This is a clear violation of the academic code of conduct.
Help need on math java program
June 29th, 2010, 05:26 PM
Help need on math java program
Hi, I am trying to write a Java program that lets me evaluate the value of y over the range x = 0 to 100, for the function y = a*(x*8) + b*((x*8)^-0.5) + c, and then print the values of y into a file.
However, the attempt I made below does not work; can someone help me out and show me where I am going wrong and what to do to make the code work.
Code Java:
public class javatest {
public static void main(String[] args) {
// Stream to write file
FileOutputStream fout;
// Open an output stream
fout = new FileOutputStream ("output.txt");
double a = 5;
double b = 10;
double c = 15;
double y;
double x;
// for x returns y for 100 channels.
for (x=0; x < 100; x+) {
System.out.println("The value of y is " y;
// Print a line of text
new PrintStream(fout).println (" The value of y is" y;
// Close our output stream
June 29th, 2010, 05:48 PM
Re: Help need on math java program
See post on other thread.
June 30th, 2010, 02:58 AM
Re: Help need on math java program
If you're serious about this, try to compile and at least try to fix the errors yourself, otherwise get back here and post the errors too.
Hint : I can see one error there which should yell a LOT when you try to compile that.
June 30th, 2010, 07:10 AM
Re: Help need on math java program
You've left a lot of mistakes as far as syntax is concerned; fix those or at least compile and see :S.
For the logic of your program, i guess you'll want to put the equation in that for loop.
June 30th, 2010, 02:53 PM
Re: Help need on math java program
To give an indication of what the others mean, I saw at least 7 mistakes in syntax that should give huge errors.
June 30th, 2010, 09:56 PM
Re: Help need on math java program
very obvious syntax errors
July 6th, 2010, 07:41 PM
Re: Help need on math java program
My guess is this is your first program, so I'll help out as much as I can.
Here are some problems you may be facing:
1) Have you imported FileOutputStream and PrintStream?
import java.io.FileOutputStream;
import java.io.PrintStream;
2) You attempted to create a TRY block, but you failed to take account of an error occuring. After the block, you need to include either a CATCH or FINALLY block. I will include an example of a
CATCH block since that is the simplest and most useful for your situation.
CATCH Block: "catch(Exception e){System.out.println(e);}"
3) You did not initialize the variable "x". Give it a value before you attempt to use it in the formula.
4) You incorrectly attempted to use Math.POW(double d1,double d2).
You wrote: "((b)*Math.pow((x*8)),-0.5)"
While it should be: "((b)*Math.pow((x*8),-0.5))"
Always make sure your brackets are in the correctly place. I also believe you have forgotten a plus ( + ) if the formula in your description is correct.
5) You incorrectly attempted to print out your results.
You wrote: "System.out.println("The value of y is " y;"
While it should be: "System.out.println("The value of y is "+y);"
You wrote: "new PrintStream(fout).println (" The value of y is" y;"
While it should be: "new PrintStream(fout).println (" The value of y is "+y);
Remember to include a bracket to close the println() method, and keep in mind that to add a number to a string you have to close the string, include a plus ( + ), and then include the number
you want to add. It also doesn't hurt to include a space after "is" so there will be a space between "is" and the number in the output.
6) Lastly, you have forgotten an ending bracket ( } ). Put one at the end or where you are wanting to end a code block.
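Folding the fixes listed above into one compilable version (a sketch; note that at x = 0 the term (x*8)^(-0.5) is infinite, so the loop below starts at 1 — the original problem statement doesn't say what to do there):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintStream;

public class JavaTest {
    // y = a*(x*8) + b*(x*8)^(-0.5) + c; infinite at x = 0 (negative power of zero).
    static double y(double x, double a, double b, double c) {
        return a * (x * 8) + b * Math.pow(x * 8, -0.5) + c;
    }

    public static void main(String[] args) {
        double a = 5, b = 10, c = 15;
        // try-with-resources closes the stream even if a write fails.
        try (PrintStream out = new PrintStream(new FileOutputStream("output.txt"))) {
            for (int x = 1; x <= 100; x++) {     // x = 0 skipped: y is infinite there
                out.println("The value of y is " + y(x, a, b, c));
            }
        } catch (IOException e) {                // the catch block the original was missing
            System.out.println(e);
        }
    }
}
```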
Some inconsistencies and misnomers in probabilistic information retrieval
- ACM Computing Surveys , 2002
"... The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last ten years, due to the increased availability of documents in
digital form and the ensuing need to organize them. In the research community the dominant approach to this p ..."
Cited by 1090 (20 self)
The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last ten years, due to the increased availability of documents in digital
form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a
classifier by learning, from a set of preclassified documents, the characteristics of the categories. The advantages of this approach over the knowledge engineering approach (consisting in the manual
definition of a classifier by domain experts) are a very good effectiveness, considerable savings in terms of expert labor power, and straightforward portability to different domains. This survey
discusses the main approaches to text categorization that fall within the machine learning paradigm. We will discuss in detail issues pertaining to three different problems, namely document
representation, classifier construction, and classifier evaluation.
- In Proceedings of SIGIR’94 , 1994
"... The 2–Poisson model for term frequencies is used to suggest ways of incorporating certain variables in probabilistic models for information retrieval. The variables concerned are within-document
term frequency, document length, and within-query term frequency. Simple weighting functions are develope ..."
Cited by 352 (12 self)
The 2–Poisson model for term frequencies is used to suggest ways of incorporating certain variables in probabilistic models for information retrieval. The variables concerned are within-document term
frequency, document length, and within-query term frequency. Simple weighting functions are developed, and tested on the TREC test collection. Considerable performance improvements (over simple
inverse collection frequency weighting) are demonstrated.
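As context for the "simple weighting functions": this line of work evolved into what is now called BM25. A sketch of the standard modern form (k1 and b are the usual tuning constants; the functions in the 1994 paper differ in details):

```java
public class BM25Sketch {
    // Classic BM25 term weight: a saturating tf component, a document-length
    // normalization, and an idf factor.
    static double weight(double tf, double docLen, double avgDocLen,
                         long docCount, long docsWithTerm,
                         double k1, double b) {
        double idf = Math.log((docCount - docsWithTerm + 0.5) / (docsWithTerm + 0.5));
        double norm = k1 * ((1 - b) + b * docLen / avgDocLen) + tf;
        return idf * tf * (k1 + 1) / norm;
    }

    public static void main(String[] args) {
        // tf saturates: doubling tf less than doubles the weight.
        System.out.println(weight(1, 100, 100, 1_000_000, 1000, 1.2, 0.75));
        System.out.println(weight(2, 100, 100, 1_000_000, 1000, 1.2, 0.75));
    }
}
```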
- The Computer Journal , 1992
"... In this paper, an introduction and survey over probabilistic information retrieval (IR) is given. First, the basic concepts of this approach are described: the probability ranking principle
shows that optimum retrieval quality can be achieved under certain assumptions; a conceptual model for IR alon ..."
Cited by 104 (4 self)
In this paper, an introduction and survey over probabilistic information retrieval (IR) is given. First, the basic concepts of this approach are described: the probability ranking principle shows
that optimum retrieval quality can be achieved under certain assumptions; a conceptual model for IR along with the corresponding event space clarify the interpretation of the probabilistic parameters
involved. For the estimation of these parameters, three different learning strategies are distinguished, namely query-related, document-related and description-related learning. As a representative
for each of these strategies, a specific model is described. A new approach regards IR as uncertain inference; here, imaging is used as a new technique for estimating the probabilistic parameters,
and probabilistic inference networks support more complex forms of inference. Finally, the more general problems of parameter estimation, query expansion and the development of models for advanced
document representations are discussed.
- In Proceedings of SIGIR-98, 21st ACM International Conference on Research and Development in Information Retrieval , 1998
"... The commas are the most useful and usable of all the stops. It is highly important to put them in place as you go along. If you try to come back after doing a paragraph and stick them in the
various spots that tempt you you will discover that they tend to swarm like minnows into all sorts of crevice ..."
Cited by 74 (3 self)
The commas are the most useful and usable of all the stops. It is highly important to put them in place as you go along. If you try to come back after doing a paragraph and stick them in the various
spots that tempt you you will discover that they tend to swarm like minnows into all sorts of crevices whose existence you hadn t realized and before you know it the whole long sentence becomes
immobilized and lashed up squirming in commas. Better to use them sparingly, and with affection precisely when the need for one arises, nicely, by itself.
, 2001
"... This article surveys probabilistic approaches to modeling information retrieval. The basic concepts of probabilistic approaches to information retrieval are outlined and the principles and
assumptions upon which the approaches are based are presented. The various models proposed in the developmen ..."
Cited by 63 (14 self)
This article surveys probabilistic approaches to modeling information retrieval. The basic concepts of probabilistic approaches to information retrieval are outlined and the principles and
assumptions upon which the approaches are based are presented. The various models proposed in the development of IR are described, classified, and compared using a common formalism. New approaches
that constitute the basis of future research are described
- IN PROCEEDINGS OF THE ACM SIGIR 2003 WORKSHOP ON MATHEMATICAL/FORMAL METHODS IN IR. ACM , 2003
"... This paper presents a novel probabilistic information retrieval framework in which the retrieval problem is formally treated as a statistical decision problem. In this framework, queries and
documents are modeled using statistical language models (i.e., probabilistic models of text), user preference ..."
Cited by 47 (1 self)
This paper presents a novel probabilistic information retrieval framework in which the retrieval problem is formally treated as a statistical decision problem. In this framework, queries and
documents are modeled using statistical language models (i.e., probabilistic models of text), user preferences are modeled through loss functions, and retrieval is cast as a risk minimization
problem. We discuss how this framework can unify existing retrieval models and accommodate the systematic development of new retrieval models. As an example of using the framework to model
non-traditional retrieval problems, we derive new retrieval models for subtopic retrieval, which is concerned with retrieving documents to cover many different subtopics of a general query topic.
These new models differ from traditional retrieval models in that they go beyond independent topical relevance.
, 1996
"... This paper introduces the multinomial model of text classification and retrieval. One important feature of the model is that the tf statistic, which usually appears in probabilistic IR models as
a heuristic, is an integral part of the model. Another is that the variable length of documents is accoun ..."
Cited by 32 (0 self)
Add to MetaCart
This paper introduces the multinomial model of text classification and retrieval. One important feature of the model is that the tf statistic, which usually appears in probabilistic IR models as a
heuristic, is an integral part of the model. Another is that the variable length of documents is accounted for, without either making a uniform length assumption or using length normalization. The
multinomial model employs independence assumptions which are similar to assumptions made in previous probabilistic models, particularly the binary independence model and the 2-Poisson model. The use
of simulation to study the model is described. Performance of the model is evaluated on the TREC-3 routing task. Results are compared with the binary independence model and with the simulation
, 1994
"... We show that former approaches in probabilistic information retrieval are based on one or two of the three concepts abstraction, inductive learning and probabilistic assumptions, and we propose
a new approach which combines all three concepts. This approach is illustrated for the case of indexing ..."
Cited by 24 (1 self)
Add to MetaCart
We show that former approaches in probabilistic information retrieval are based on one or two of the three concepts abstraction, inductive learning and probabilistic assumptions, and we propose a new
approach which combines all three concepts. This approach is illustrated for the case of indexing with a controlled ...
- In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval , 2001
"... This paper develops a theoretical learning model of text classification for Support Vector Machines (SVMs). It connects the statistical properties of text-classification tasks with the
generalization performance of a SVM in a quantitative way. Unlike conventional approaches to learning text classifi ..."
Cited by 23 (0 self)
Add to MetaCart
This paper develops a theoretical learning model of text classification for Support Vector Machines (SVMs). It connects the statistical properties of text-classification tasks with the generalization
performance of a SVM in a quantitative way. Unlike conventional approaches to learning text classifiers, which rely primarily on empirical evidence, this model explains why and when SVMs perform well
for text classification. In particular, it addresses the following questions: Why can support vector machines handle the large feature spaces in text classification effectively? How is this related
to the statistical properties of text? What are sufficient conditions for applying SVMs to text-classification problems successfully? | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=60464","timestamp":"2014-04-19T13:46:03Z","content_type":null,"content_length":"36982","record_id":"<urn:uuid:d208c47c-b354-4097-a240-8d5c9716fd11>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US4799178 - Method and apparatus for detecting rotational speed
This invention relates to a method and apparatus for detecting the speed of a rotating member such as the speed of the wheel of a vehicle.
Typically in order to provide for the sensing of the rotational speed of a rotating member such as the wheel of a vehicle, a speed sensor is provided that generates a signal having a frequency
directly proportional to the rotational speed. The speed sensor usually takes the form of a speed ring rotated by the rotating member having teeth spaced around its periphery which are sensed by an
electromagnetic sensor. The electromagnetic sensor provides a pulse each time the speed ring rotates 1/n of one revolution, where n is the number of teeth on the speed ring. Each pulse may directly
comprise a speed signal or alternatively may be shaped into a squarewave speed signal. The frequency that the speed signals are generated is directly proportional to the speed of the rotating member.
A number of methods for determining the frequency of the speed signal, and therefore the speed of the rotating member, have been proposed. One such method counts the number of speed signals that occur
during a constant period of time. However, at low rotational speeds, the number of speed signals counted is small, resulting in a low degree of resolution in the measurement of speed. In order to
increase the resolution of the speed measurement at low speeds, the period of time over which the speed signals are counted must be extended or the number of teeth spaced around the periphery of the
speed ring must be increased. Both of these solutions may be undesirable due to the increase in the time required to obtain a measurement of speed and because there are practical limitations in the
number of teeth that can be provided on the speed ring.
Another method proposed to determine the frequency of the speed signals and therefore the rotational speed of the rotating member counts high frequency clock pulses over the interval between two
consecutive speed signals. In the case where the speed signal is in the form of a squarewave signal, the clock pulses may be counted over a period defined by the leading or trailing edges of two
consecutive squarewave speed signals. The number of high frequency clock pulses counted is representative of the speed of rotation of the rotating member. However, at high rotational speeds, the
number of clock pulses becomes small between two consecutive speed signals resulting in poor resolution in the measurement of rotational speed.
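The resolution limits of the two prior-art methods described above can be illustrated with hypothetical numbers; the tooth count, clock rate, and window length below are assumptions for illustration, not values taken from the patent:

```python
# Numeric illustration of the resolution problems of the two prior-art
# methods. Tooth count, clock rate, and window length are hypothetical.

TEETH = 51              # roughly 7-degree tooth spacing (assumed)
CLOCK_HZ = 1_000_000    # high-frequency clock (assumed)
WINDOW_S = 0.010        # fixed counting window

def pulses_in_window(wheel_hz):
    """Method 1: count speed pulses over a fixed window."""
    return int(wheel_hz * TEETH * WINDOW_S)

def clocks_between_pulses(wheel_hz):
    """Method 2: count clock pulses between two consecutive speed pulses."""
    return int(CLOCK_HZ / (wheel_hz * TEETH))

# At 0.5 rev/s, method 1 counts 0 pulses (no resolution at low speed);
# at 50 rev/s, method 2 counts only ~392 clocks (coarse at high speed).
```

Each method is accurate at one end of the speed range and degrades at the other, which is the motivation for the interval-averaging approach of the invention.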
To avoid the inaccuracies associated with the above speed measurement methods, it has been proposed to determine the frequency of the speed signals and therefore the rotational speed of a rotating
member based on the precise time required for a number of speed signals to be generated over each of successive sampling intervals. In this method of speed measurement, since the beginning and end of
each sampling interval does not generally coincide with the generation of a speed signal, precise speed measurement is provided by measuring the elapsed time between beginning and ending speed
signals as defined by the sampling interval. The precise time over which the determination of rotational speed is determined is generally provided by counting the pulse output of a high frequency
clock beginning at the speed signal just prior to the sampling interval to the last speed signal detected during the sampling interval. This measured time in conjunction with the total number of
speed signals generated over the sampling interval is used to calculate the speed of the rotating member. In this system of speed measurement, the accuracy of the determination of the angular speed
of the rotating member is limited by the accuracy in the placement of the teeth located on the speed ring from which the speed signals are generated.
In the foregoing speed measurement system where the end of one sampling interval constitutes the beginning of the next sampling interval, the speed signal that defines the end point of the measured
elapsed time in one sampling interval defines the beginning point for the measurement of the elapsed time in the subsequent sampling interval. This creates the potential for erroneous speed
measurement in two consecutive sampling intervals due to an error in the angular location of a single tooth on the speed ring. In other words, a misplaced speed signal may influence two successive
speed measurements.
In certain systems, a high degree of accuracy in the measurement of speed of a rotating member is required. For example, in automotive vehicle anti-lock braking systems, errors in the measurement of
wheel speeds will affect the performance of the system. In accord with this invention, the inaccuracy in the speed measurement of a rotating member, such as the wheel of a vehicle, that is associated
with the misplacement of a speed signal is minimized by minimizing the influence of a given speed signal on consecutive speed measurements. This is accomplished by preventing a speed signal defining
the end point of one timed period associated with one speed sampling interval from constituting the beginning point of the timed period associated with the subsequent speed sampling interval. This is
accomplished in accord with this invention without decreasing the total measured time period to a time less than the speed sampling interval, so that the accuracy of the rotational speed measurement is maintained.
The invention may be best understood by reference to the following description of a preferred embodiment and the drawings, in which:
FIG. 1 is a series of speed signal timing diagrams illustrating the principles of this invention;
FIG. 2 is a general diagram of a digital computer in a vehicle anti-lock braking system responsive to the speed of the wheels of the vehicle for preventing wheel lockup during braking;
FIG. 3 is a schematic diagram illustrating buffer registers in the brake computer of FIG. 2 which are associated with the storing of time measurements in the determination of wheel speed; and
FIGS. 4 and 5 are diagrams illustrating the operation of the brake computer of FIG. 2 in carrying out the speed measurement principles of this invention.
The principles of this invention are first described with reference to FIG. 1. In the embodiment of the invention to be described, the speed of rotation of a vehicle wheel is repeatedly calculated at
predetermined intervals (such as 10 milliseconds) hereinafter referred to as sampling intervals, one such sampling interval being illustrated in each of the timing diagrams A thru D of FIG. 1. Each
timing diagram illustrates the repeated wheel speed signals in the form of squarewave signals that are generated as the vehicle wheel rotates. The frequency of the squarewave signals is directly
proportional to the wheel speed. Each period between consecutive leading or trailing edges of the squarewave signal is associated with the time between two consecutive teeth on a speed ring of a
wheel speed sensor as the wheel rotates.
Wheel speed is determined from the wheel speed signal based on the expression

ω = K/T_av (1)

where ω is the wheel speed, K is a constant that is a function of the radius of the wheel and the number of teeth on the speed ring of the speed sensor, and T_av is the average time between teeth as
the wheel rotates.
The average time between teeth is determined in accord with this invention using at least one sampling interval's worth of the most recent data. One of three methods is used in
determining the average time T_av: (1) single edge detection, where only one edge (leading or trailing) of each squarewave speed signal is used; (2) double edge detection, where both leading and
trailing edges of each squarewave speed signal are used; or (3) low speed estimation.
Single edge detection is associated with higher wheel speeds and is used whenever the last determined wheel speed is greater than a predetermined value. As indicated, only the leading or the trailing edges
of each squarewave speed signal provided by the wheel speed sensor are utilized in determining wheel speed. This single edge speed detection method is demonstrated in FIG. 1A. As illustrated in this
Figure, the leading edges of the squarewave signals are utilized in determining the average time between teeth to be utilized in calculating wheel speed.
The average time T_av between teeth of the speed sensor to be used in equation (1) to determine wheel speed at the end of a sampling interval with the single edge detection method illustrated in
FIG. 1A is defined by the expression

T_av = (T(N) - T(0))/N (2)

where T(0) is the time of occurrence of the next-to-last leading edge of the squarewave signal in the prior sampling interval, T(N) is the time of occurrence of the last leading edge of the
squarewave signals generated during the sampling interval, and N is one greater than the number of leading edges of the squarewave signals generated during the sampling interval. The time interval
between times T(0) and T(N) comprises a wheel speed calculation interval over which N teeth of the speed ring were sensed.
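The single-edge calculation can be sketched as follows; the capture times are hypothetical values standing in for the input capture register reads described later:

```python
# Sketch of the single-edge average-tooth-time calculation, equation (2).
# Times are hypothetical capture-counter values.

def t_av_single_edge(t0, edge_times):
    """Average time between teeth, (T(N) - T(0)) / N.

    t0         -- next-to-last selected edge of the PRIOR sampling interval
    edge_times -- selected edges captured during the present interval
    """
    n = len(edge_times) + 1      # one greater than the edges seen this interval
    return (edge_times[-1] - t0) / n

# Uniform 100-unit tooth period; only T(0), T(N), and the edge count matter:
# t_av_single_edge(0, [200, 300, 400]) -> 100.0
```

Note that the calculation interval begins one edge before the last edge of the prior interval, which is what makes consecutive calculation intervals overlap rather than share a single edge.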
From the foregoing, it can be seen that consecutive speed calculation intervals associated with two consecutive sampling intervals overlap so that they do not end and begin on the same edge of a
squarewave speed signal since the last leading edge of the squarewave speed signal that occurs during the prior sampling interval defines the end of the prior speed calculation interval and the
next-to-last leading edge of the squarewave speed signal that occurs during the prior sampling interval defines the beginning of the speed calculation interval associated with the next sampling
interval. This has the effect of minimizing the influence of a single edge of the squarewave speed signals on the measurement of wheel speed. Further, all of the speed information available during a
sampling interval is utilized in the calculation of the average time between the passing of teeth on the speed ring.
In the foregoing manner, if an error is associated with the time of occurrence of the leading edge of one of the squarewave signals, such as at time T(N), due to an error in the angular position
of a tooth on the speed ring, the error is not introduced into two consecutive calculations of wheel speed. As a result, the overall accuracy of the speed measurement system is improved by minimizing
the influence of each speed signal in the repeated calculations of wheel speed.
Double edge detection is associated with lower wheel speeds and is used whenever the last determined wheel speed is less than the predetermined value. The use of double edge detection when fewer
teeth on the speed ring are sensed over the sampling interval improves the accuracy of the wheel speed calculation. As illustrated in FIG. 1B, both leading and trailing edges of the squarewave
signals are used in the double edge speed detection method.
When the double edge detection method is used to determine wheel speed, the average time T_av between teeth of the wheel speed sensor to be used in equation (1) to determine wheel speed at the end
of a sampling interval is determined by the expression

T_av = (T(N) + T(N-1) - T(1) - T(0))/(N - 1) (3)

where T(0) is the time of occurrence of the next-to-last edge of the squarewave speed signal in the prior sampling interval, T(1) is the time of occurrence of the last edge of the squarewave
speed signal in the prior sampling interval, T(N-1) is the time of occurrence of the next-to-last edge of the squarewave speed signal in the present sampling interval, T(N) is the time of
occurrence of the last edge of the squarewave speed signals to occur in the present sampling interval, and N is one greater than the number of edges (leading and trailing) of the squarewave speed
signals occurring during the sampling interval. The time interval from time T(0) to time T(N) comprises the wheel speed calculation interval. Equation (3) eliminates the requirement for
symmetry in the squarewave speed signal.
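A sketch of equation (3) follows, with hypothetical capture times chosen to show the claimed symmetry independence: a 60/40 asymmetric squarewave still yields the true tooth period.

```python
# Sketch of the double-edge average-tooth-time calculation, equation (3).
# Capture times are hypothetical.

def t_av_double_edge(t0, t1, edge_times):
    """Average tooth time, (T(N) + T(N-1) - T(1) - T(0)) / (N - 1).

    t0, t1     -- next-to-last and last edges of the PRIOR sampling interval
    edge_times -- all edges (leading and trailing) of the present interval
    """
    n = len(edge_times) + 1
    return (edge_times[-1] + edge_times[-2] - t1 - t0) / (n - 1)

# Leading edges every 100 units, trailing edges 60 units later (60/40 duty):
# t_av_double_edge(0, 60, [100, 160, 200, 260, 300, 360]) -> 100.0
```

Because the formula sums two spans that each start and end on edges of the same polarity, the duty-cycle asymmetry cancels.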
As with the single edge detection method of FIG. 1A, the speed calculation intervals associated with two consecutive sampling intervals overlap so that they do not end and begin on the
same edge of a squarewave speed signal thereby minimizing the influence of a single edge of the speed signal on the measurement of wheel speed. Further, all of the wheel speed information available
during a sampling interval is utilized in the calculation of the average time between the passing of teeth on the speed ring.
To provide for a transition between the single and double edge detection methods as represented in the timing charts of FIG. 1A and FIG. 1B, so as to ensure that the same edge of a squarewave speed signal
is not used in the measurement of wheel speed in two consecutive calculation intervals, the method and system of this invention redefine the calculation interval for the transition calculation. The
edges and their times of occurrence relative to a sampling interval that are utilized for the transition from single to double edge detection, when the vehicle speed decreases from above to below the
threshold level, are illustrated in the timing diagram of FIG. 1C. In this case, the time interval within a sampling interval from the time T(0) to the time T(N) comprises the speed calculation
interval. The edges and their times of occurrence relative to a sampling interval that are utilized for the transition from double to single edge detection, when the vehicle speed increases from below
to above the threshold level, are illustrated in the timing diagram of FIG. 1D. Again, the time interval within a sampling interval from the time T(0) to the time T(N) comprises the speed
calculation interval.
A low speed detection method to be described with respect to FIGS. 4 and 5 is used at very low wheel speeds when an edge of a squarewave speed signal does not occur during the sampling interval.
The speed sensing method and apparatus of this invention are illustrated in conjunction with an anti-lock braking system generally illustrated in FIG. 2. A brake computer 10 is responsive to the
speed of the vehicle wheels and, when an incipient wheel lockup condition is sensed, controls the brake pressure to the wheel brakes to prevent wheel lockup. When an incipient wheel lockup condition
is sensed based on wheel speed or parameters derived therefrom, the brake computer 10 issues signals to brake pressure control solenoids via solenoid drivers 11 to control the wheel brake pressures
to prevent a wheel lockup condition. The front wheel brakes are controlled by the brake computer 10 by control of the pressure release and hold solenoid pairs generally illustrated as 12 and 14 and
the rear brakes are controlled together via the brake pressure release and hold solenoid pair generally illustrated as 16. The method of sensing an incipient wheel lockup condition and controlling
the wheel brake pressure so as to prevent wheel lockup may be any known method and will not be described in greater detail.
The front and rear wheel speeds of the vehicle are detected by respective wheel speed sensors including speed rings 18a through 18d each being associated with a respective one of the front and rear
wheels of the vehicle. Each speed ring has teeth angularly spaced around its circumference. In one embodiment, the teeth are spaced at seven degree intervals. The teeth of the speed rings are sensed
by respective electromagnetic sensors 20a through 20d as the speed rings are rotated by the respective wheels. The output of each electromagnetic sensor is a sinusoidal waveform having a frequency
directly proportional to wheel speed as represented by the passing of the teeth in proximity to the electromagnetic sensor.
The sinusoidal waveforms from the electromagnetic sensors 20a through 20d are supplied to respective interface and squaring circuits 22a through 22d each of which provides a squarewave output having
a frequency directly proportional to the speed of a respective wheel. As is apparent, each squarewave signal has leading and trailing edges corresponding to the leading and trailing edges of a
respective tooth of a speed ring.
The brake computer 10 takes the form of a standard digital computer and includes a central processing unit (CPU) which executes an operating program permanently stored in a read-only
memory (ROM) which also stores tables and constants utilized in controlling the wheel brake pressure in response to a detected incipient wheel lockup condition. The computer also includes a random
access memory (RAM) into which data may be temporarily stored and from which data may be read at various address locations determined in accord with the program stored in the ROM. The brake computer
10 further includes a clock generating high frequency clock signals for timing and control purposes.
The computer 10 provides a periodic interrupt at predetermined intervals such as at 10 millisecond intervals at which time a program stored in the ROM for calculating the four wheel speeds is
executed. This interrupt interval comprises the sampling interval previously referred to with respect to FIG. 1. In addition, the computer responds to each selected edge of the wheel speed squarewave
signals and executes a wheel speed interrupt routine stored in the ROM during which the information required to calculate wheel speed is stored.
A timer system is provided in the brake computer that includes a programmable timer comprised of a free running counter clocked either directly by the high frequency clock signals or alternatively
via the output of a divider clocked by the clock signals. The brake computer includes an input capture associated with each of the wheel speed signal inputs thereto. Each input capture functions to
record the count of the free running counter in a read-only input capture register in response to a program selectable edge of the corresponding squarewave speed signal input from a respective wheel.
This count represents the time of occurrence of the respective edge of the squarewave speed signal. The edge of the squarewave speed signal utilized to transfer the count of the counter into the
respective input capture register is program selectable to be one or both edges of the input squarewave signal. A computer including the foregoing functions may take the form of the Motorola
microcomputer part number MC68HC11A8.
At the higher wheel speeds, large amounts of data must be handled by the brake computer 10 in order to determine the four wheel speeds. To enable the gathering of this large amount of wheel speed
data, the brake computer 10 utilizes two identical buffer registers for each wheel. These buffer registers are generally depicted in FIG. 3 as buffer 0 and buffer 1. These buffers are utilized to
store the times of occurrence of the various edges of the respective squarewave speed signal as depicted in FIG. 1. These times are obtained from the respective input capture register.
As illustrated in FIG. 3, each of the buffers includes a memory location for storing the times T(0), T(1), T(N-1) and T(N), in addition to a memory location for storing the number
of the selected edges of the squarewave wheel speed signal that occur during the sampling interval. While one buffer is active and being used to store new wheel speed data during a sampling interval,
the other buffer is static and contains the data from the prior sampling interval which is used to calculate wheel speed.
For example, assuming buffer 0 is the static buffer, buffer 1 is being utilized to continually update the stored time values T(N-1) and T(N) as new edges of the squarewave speed signal are
detected, in addition to incrementing the count of the selected edges that occur. While this is taking place, the computer 10 utilizes the information in buffer 0 to calculate wheel speed in the
manner previously described with respect to FIG. 1. In addition, the times T(N-1) and T(N) in the static buffer are used to preset the times T(0) and T(1) in the active buffer.
During the next sampling interval, the buffer 0 becomes the active buffer for gathering wheel speed information and buffer 1 becomes the static buffer from which the wheel speed is calculated.
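The ping-pong buffer swap described above can be sketched as follows for one wheel; the class and field names are illustrative, not taken from the patent:

```python
# Sketch of the two-buffer (ping-pong) scheme for one wheel. While the
# "active" buffer collects new edge times, the "static" buffer holds the
# completed prior interval's data for the speed calculation.

class EdgeBuffer:
    def __init__(self):
        self.t0 = self.t1 = self.t_n1 = self.t_n = 0
        self.n = 0                     # selected edges seen this interval

buffers = [EdgeBuffer(), EdgeBuffer()]
active = 0                             # acts as the buffer flag

def on_sampling_interval():
    """Toggle the buffer flag and seed the new active buffer.

    Returns the now-static buffer, whose data is used to calculate speed.
    """
    global active
    active = 1 - active
    act, stat = buffers[active], buffers[1 - active]
    act.t0, act.t1 = stat.t_n1, stat.t_n   # overlap with the prior interval
    act.n = 0
    return stat
```

Seeding T(0) and T(1) of the active buffer from T(N-1) and T(N) of the static buffer is what produces the overlapping calculation intervals of FIG. 1.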
The 10 millisecond and wheel speed interrupt routines executed by the brake computer 10 for establishing the wheel speeds in accord with this invention are illustrated in FIGS. 4 and 5. FIG. 4
illustrates the wheel speed interrupt routine executed each time a selected edge of the squarewave speed signals occurs. In general, this routine provides for the recording of the various times in
the active buffer of FIG. 3 and the number of edges detected to enable calculation of wheel speed. FIG. 5 illustrates the interrupt routine executed at 10 millisecond intervals established such as by
the high frequency clock and a counter. This routine generally provides for the calculation of wheel speed. The 10 millisecond interval between consecutive interrupts comprises the sampling interval
previously referred to.
Referring first to FIG. 4, the wheel speed interrupt routine is entered at point 24 and proceeds to a step 26 where it determines which wheel speed signal caused the interrupt. This identifies which
pair of buffers to use to record wheel speed information. From step 26, the program proceeds to a step 28 where it determines which buffer of the identified pair is the active buffer by sampling the
state of a buffer flag that is controlled as will be described with reference to FIG. 5. If buffer 1 is determined to be the active buffer, the program proceeds to a step 30 where a pointer points to
buffer 1 as the active buffer. Conversely, if buffer 0 is determined to be the active buffer the program proceeds from step 28 to a step 32 where the pointer points to buffer 0 as the active buffer.
Hereinafter, the subscript A refers to information in the active buffer while the subscript S refers to information in the static buffer.
From step 30 or 32, the program proceeds to a step 34 where the program samples the edge count N(A) in the active buffer. As will be subsequently explained, this count will be zero or greater except
in those situations where a transition between single and double edge detection takes place. Assuming that the edge count N(A) is zero or greater, the program proceeds to a step 36 where the time
T(N-1)A stored in the active buffer register is set equal to the time T(N)A representing the time of occurrence of the previous detected edge of the squarewave speed signals. Then at step
38, the time T(N)A in the active buffer register is set equal to the time stored in the capture register, which is the time of occurrence of the most recent selected edge of the squarewave speed signal.
From step 38, the program proceeds to a step 40 where the count N(A) in the active register, representing the number of selected edges of the squarewave speed signals that have occurred during the
present sampling interval, is incremented. Following step 40, the program exits the routine at step 42.
As will be described, when the 10 millisecond interrupt routine determines that the conditions exist for a transition between double and single edge detection, the storage location in the active
register recording the edge count N(A) will be initially preset to a -2 for reasons to be described. This condition will be sensed at step 34, after which the program proceeds to a step 44 where the
value of the time T(0)A in the active buffer is set equal to the time stored in the capture register. Following this step, the time T(0)A stored in the active buffer is the time of
occurrence of the first selected edge of the squarewave speed signal during the current sampling interval. This time is illustrated in FIGS. 1C and 1D for both single and double edge detection.

During the next wheel speed interrupt, in response to the occurrence of the next selected edge of the squarewave speed signal, the program proceeds from step 34 to a step 46 where the value of the
time T(1)A in the active buffer is set equal to the time stored in the capture register. Following this step, the time T(1)A stored in the active buffer is the time of occurrence of the
second selected edge of the squarewave speed signal during the current sampling interval. This time is illustrated in FIGS. 1C and 1D depending upon whether the single or double edge detection method
has been selected.
In the foregoing manner, when a transition between single edge and double edge detection is required, the values of T(0)A and T(1)A in the active register are preset to the times of
occurrence of the proper edges of the squarewave speed signals.
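The wheel speed interrupt of FIG. 4 can be sketched roughly as follows; the buffer layout and the sentinel handling of the -2 preset are simplifications of the patent's description, not its literal implementation:

```python
# Rough sketch of the FIG. 4 wheel speed interrupt for one wheel.

from dataclasses import dataclass

@dataclass
class Buf:
    t0: int = 0
    t1: int = 0
    t_n1: int = 0
    t_n: int = 0
    n: int = 0        # edge count; preset to -2 after a mode transition

def wheel_speed_interrupt(buf, capture_time):
    """Record one selected edge of the squarewave speed signal."""
    if buf.n == -2:              # step 44: first edge after a transition
        buf.t0 = capture_time
    elif buf.n == -1:            # step 46: second edge after a transition
        buf.t1 = capture_time
    else:                        # steps 36 and 38: shift previous edge time
        buf.t_n1 = buf.t_n
        buf.t_n = capture_time
    buf.n += 1                   # step 40: count selected edges
```

After two interrupts following a transition, the count reaches zero and the routine resumes the normal shift-and-record path.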
Referring to FIG. 5, the 10 millisecond interrupt routine is illustrated. This routine is entered at point 48 and proceeds to a step 50 where the buffer flag is toggled to interchange the active/
static condition of the buffers 0 and 1 of FIG. 3. At the next step 52, the count in the free running counter representing current time is saved for low speed estimation as will be described. This
time represents the time of occurrence of the 10 millisecond interrupt.
The remaining steps of FIG. 5 are executed once for each wheel. However, the routine is only demonstrated for a single wheel, it being understood that it is repeated in similar fashion for the
remaining three vehicle wheels to determine their individual speeds.
At step 54, the program samples the number N(S) stored in the static buffer. It will be recalled that this buffer contains the most recent information concerning wheel speed gathered during the
just completed sampling interval. If N(S) is greater than 1, as will occur for all wheel speed conditions except the very lowest, the program proceeds to a step 55
where the storage location in the active buffer storing the value T(0)A is preset to the time T(N-1)S of the static buffer. With reference to FIGS. 1A and 1B, this establishes the time
T(0) of the calculation interval. Similarly, the time T(1)A of the active register is preset to the time T(N)S in the static buffer. Again with reference to FIGS. 1A and 1B, this
establishes the time T(1) of the calculation interval.
From step 55, the program proceeds to a step 56 where it determines if the single or double edge detection method is being used to determine wheel speed. If the single edge detection method is being
used, the program proceeds to a step 57 where the value of N(S) in the static buffer is incremented so that it properly reflects the number of speed ring teeth within the calculation
interval to be used in the calculation of wheel speed according to equation (2).
From step 56 or 57, the program proceeds to a step 58 where the average time between teeth on the speed ring is determined in accord with equation (2) if the single edge detection method is being
used or equation (3) if the double edge detection method is being used. Both equations use the wheel speed information in the static register representing the wheel speed information gathered during
the most recent sampling interval. As previously described, the calculation interval beginning with the time T(0) in the static buffer overlaps the previous calculation interval so that they do
not end and begin on the same edge of the squarewave speed signals. From step 58, the program proceeds to a step 60 where wheel speed is calculated based on equation (1).
From step 60, the program determines whether or not a transition between single and double edge detection methods is required. This is accomplished beginning at step 62 where the wheel speed
calculated at step 60 is compared with a threshold value above which single edge detection is required and below which double edge detection is required. If the wheel speed is greater than the
threshold value, the program proceeds to a step 64 where the program is conditioned for single edge detection wherein the input capture functions and the wheel speed interrupt are conditioned to
respond only to alternating edges of the squarewave speed signals. Conversely, if the wheel speed is equal to or less than the threshold value, the program proceeds from step 62 to a step 66 where
the program is conditioned for double edge detection wherein the input capture functions and the wheel speed interrupt are conditioned to respond to all edges of the squarewave speed signals.
From step 64 or step 66, the program proceeds to a step 68 where it determines whether or not a transition has been made between single and double edge detection. If not, the program proceeds to a step 70 where the value of N.sub.(S) in the static buffer is preset to zero. However, if the program determines that a transition between single and double edge detection has resulted from step 64 or step 66, the program proceeds to a step 72 where the value of N.sub.(S) in the static register is set to -2. In reference to steps 44 and 46 in the wheel speed interrupt routine of FIG. 4, this step conditions the wheel speed interrupt routine to execute the steps 44 and 46 previously described.
From step 70 or step 72, the program then proceeds to a step 74 where a register in the RAM storing an old value of wheel speed is preset to the last measured value of wheel speed. As will be
described, this value of wheel speed will be utilized during the low speed estimation routine to be described.
Returning to step 54, if the value of N.sub.(S) in the static register is equal to 1, indicating that only one edge of the squarewave speed signals has been detected during the prior interrupt
interval (a condition that will occur only at low wheel speeds where the double edge detection method has been selected by steps 62 and 66), the program proceeds to a step 76 where the time T.sub.(0)A in the active register is set equal to the value of T.sub.(1)S of the static register. Similarly, the time T.sub.(1)A of the active register is preset to the time T.sub.(N)S of the static register.
Step 76 is required when only a single edge of the squarewave wheel speed signal is detected during a sampling interval since the last two edges correspond to the times T.sub.(1)S and T.sub.(N)S of
the static register.
From step 76, the program proceeds to a step 78 where the average time T.sub.(av) between teeth on the speed ring is determined by subtracting the time T.sub.(0)S from the time T.sub.(N)S. From step 78, the program proceeds to a step 80 where the wheel speed is calculated based on equation (1) using the value of T.sub.(av) determined at step 78. From step 80, the program executes the steps 62 through 64 to determine whether or not a transition is required between single and double edge detection as previously described.
At very low wheel speeds, the possibility exists that no edge of the wheel speed signals will be detected during a sampling interval between 10 millisecond interrupts. Even though no wheel speed
signals are received, there is still information upon which an estimation of wheel speed may be determined. In general, when the condition exists that no edges are detected during an interrupt
interval, the controller assumes that an edge was detected just at the end of the sampling interval. Thereafter, the controller calculates a maximum possible wheel speed based on the supposed
detection of a wheel speed signal at the end of the sampling interval. This maximum value is compared with the speed calculated at the end of the prior sampling interval. The minimum of the two wheel
speed values is then used as an estimation of the current wheel speed. Thereafter, when an actual edge is detected in the next or subsequent sampling interval, a true period measurement is made and
wheel speed is calculated in accord with the steps 76 through 80 as previously described or steps 56 through 60 depending upon the number of edges detected.
Assuming no edges of the squarewave speed signal were detected during the sampling interval just completed, the program proceeds from step 54 to a step 82 where it determines whether or not the
initial times T.sub.(0)S and T.sub.(1)S representing the time of the last two detected edges are valid. This step is required to accommodate the condition wherein the vehicle stops and long periods
of time elapse without the detection of a new edge of the wheel speed signal. If the time lapse is too great indicating the stored times are no longer valid, the program proceeds to a step 84 where
the wheel speed is set to zero and thereafter to step 86 where the value of N.sub.(S) in the static register is preset to -2. At step 88, the program conditions the brake computer for double edge detection.
Returning to step 82, if the times T.sub.(0)S and T.sub.(1)S are determined to be valid, the program proceeds to a step 90 where the value of the time T.sub.(0)A in the active buffer is preset to the time T.sub.(0)S in the static register. Similarly, the time T.sub.(1)A is preset to the time T.sub.(1)S. This step provides for the initialization of the active register to the times of the last two detected edges of the squarewave signals. Thereafter, at step 92, the average time between teeth is assumed to be the difference between the current time stored at step 52 and the time T.sub.(0)S stored in the static register. Based on this time, the program calculates a temporary wheel speed signal at step 94 in accord with equation (1). At step 96, this temporary wheel speed is compared
with the last detected actual wheel speed saved at step 74. If the temporary wheel speed is less than the last actual measured wheel speed, the program proceeds to a step 98 where the actual wheel
speed is set to the temporary wheel speed. However, if the temporary wheel speed calculated at step 94 is greater than the last actual wheel speed calculated and saved at step 74, the program
proceeds to a step 100 where the actual wheel speed is set to the last actual wheel speed determined and saved at step 74. Steps 96 through 100 function to set the actual wheel speed, when no new wheel speed edges are detected, to the lower of (1) the wheel speed based on the assumption of a wheel speed pulse occurring at the end of the sampling interval or (2) the last calculated wheel speed value. From step 74, 88, 98 or 100, the program exits the routine at step 102.
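The low-speed estimation of steps 90 through 100 can be sketched compactly. The speed formula (k / period) stands in for equation (1), which is not reproduced in this excerpt:

```python
# Minimal sketch of steps 90 through 100: when no edge arrives during a
# sampling interval, assume one occurred exactly at the interval's end,
# compute the largest wheel speed consistent with that assumption, and
# keep the smaller of it and the last measured speed.

def estimate_low_speed(current_time, t0_last_edge, last_speed, k=1.0):
    t_av = current_time - t0_last_edge   # step 92: assumed tooth period
    temporary = k / t_av                 # step 94: maximum possible speed
    return min(temporary, last_speed)    # steps 96 through 100
```

As more interrupts pass with no edge, the assumed period grows and the estimate decays smoothly toward zero, which matches the behavior described in the text.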
In summary, the calculation periods based on consecutive sampling intervals overlap such that the end point of one calculation interval does not comprise the beginning point of the next calculation
interval. This minimizes the influence of a single wheel speed signal on the calculation of wheel speed so as to minimize the errors that may be associated with the angular position of a single wheel
speed signal. The foregoing is accomplished while yet utilizing wheel speed information over a complete sampling interval so as to maximize the accuracy of the wheel speed measurement.
The foregoing description of a preferred embodiment of the invention for the purpose of illustrating the invention is not to be considered as limiting or restricting the invention since many
modifications may be made by the exercise of skill in the art without departing from the scope of the invention.
Solving Ax=b, where A is an unknown Toeplitz matrix, x and b are known.
I am trying to solve an equation of the form $Ax=b$, where $A$ is an unknown Toeplitz matrix, while $x$ and $b$ are known.
If anyone knows a corresponding Matlab procedure, that would be great.
You have more unknowns ($2n-1$) than equations ($n$).
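Since the system is underdetermined, one can at best pick a particular solution, for example the minimum-norm one. A sketch in Python/NumPy (the question asked for Matlab, where `pinv(M)*b` plays the same role as `lstsq` below): parameterize the matrix by its $2n-1$ diagonal values, which makes $Ax = b$ linear in those values.

```python
import numpy as np

def fit_toeplitz(x, b):
    """Return one Toeplitz matrix A with A @ x == b (minimum-norm choice).

    A is parameterized by its 2n-1 diagonal values t, with t[k] filling
    the diagonal i - j = k - (n - 1).  Then A x = b becomes a linear
    system M t = b with n equations and 2n-1 unknowns; lstsq returns
    its minimum-norm solution.
    """
    n = len(x)
    M = np.zeros((n, 2 * n - 1))
    for i in range(n):
        for j in range(n):
            M[i, (i - j) + (n - 1)] += x[j]
    t, *_ = np.linalg.lstsq(M, b, rcond=None)
    return np.array([[t[(i - j) + (n - 1)] for j in range(n)]
                     for i in range(n)])

x = np.array([1.0, 2.0, 3.0])
b = np.array([6.0, 5.0, 4.0])
A = fit_toeplitz(x, b)   # A is Toeplitz by construction and A @ x == b
```

Because of the rank deficiency, this is only one of infinitely many Toeplitz matrices satisfying the equation; extra structure (symmetry, banding) would be needed to pin down a unique answer.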
Polynomial Curve Fitting, Matrices
February 4th 2010, 08:20 PM
Polynomial Curve Fitting, Matrices
I've done over 50 problems for a Linear Algebra class tonight and I'm sooo burnt out. I'm giving up on these ones.. if you can help me, that would be wonderful. Otherwise, I'm turning in what I
have. Strangely enough, it's the odd problems that I already have solutions to that I don't understand. Got the even ones already.
11) In the "Polynomial Curve Fitting" section:
The graph of a cubic polynomial function has horizontal tangents at (1, -2) and (-1,2). Find an equation for the cubic and sketch its graph.
Somehow the answer is p(x) = -3x + x^3. Just want to know the steps.
29) Use a system of equations to write the partial fraction decomposition of the rational expression. Then solve the system using matrices.
$\frac{4x^2}{(x+1)^2(x-1)} = \frac{A}{x-1}+\frac{B}{x+1}+\frac{C}{(x+1)^2}$
And the final answer should be:
$\frac{1}{x-1}+\frac{3}{1+x}-\frac{2}{(x+1)^2}$
47) Consider the matrix..
$A=\begin{bmatrix} 1 & k & 2 \\ -3 & 4 & 1 \end{bmatrix}$
If A is the augmented matrix of a system of linear equations, find the value(s) of k such that the system is consistent.
(The answer is all real k not equal to -4/3; I just want to know how they got this so I understand it.)
58) True or false: Every matrix has a unique reduced row-echelon form.
Thank you in advance. I appreciate it.
February 6th 2010, 05:13 AM
11) In the "Polynomial Curve Fitting" section:
The graph of a cubic polynomial function has horizontal tangents at (1, -2) and (-1,2). Find an equation for the cubic and sketch its graph.
Somehow the answer is p(x) = -3x + x^3. Just want to know the steps.
Any cubic polynomial can be written in the form $f(x)= ax^3+ bx^2+ cx+ d$ and then $f'(x)= 3ax^2+ 2bx+ c$.
Saying that it has a horizontal tangent at (1, -2) tells you two things: its value at x= 1 is $f(1)= a(1)^3+ b(1)^2+ c(1)+ d= a+ b+ c+ d= -2$ and its derivative there is $f'(1)= 3a(1)^2+ 2b(1)+ c
= 0$. Do the same at x= -1 to get four equations for a, b, c, and d.
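The four conditions can be assembled into a linear system and solved directly; a quick numerical check (added here, not part of the original reply), assuming NumPy:

```python
import numpy as np

# p(x) = a x^3 + b x^2 + c x + d with
# p(1) = -2, p'(1) = 0, p(-1) = 2, p'(-1) = 0
M = np.array([[ 1.0,  1.0,  1.0, 1.0],   # p(1)  = a + b + c + d
              [ 3.0,  2.0,  1.0, 0.0],   # p'(1) = 3a + 2b + c
              [-1.0,  1.0, -1.0, 1.0],   # p(-1) = -a + b - c + d
              [ 3.0, -2.0,  1.0, 0.0]])  # p'(-1)= 3a - 2b + c
rhs = np.array([-2.0, 0.0, 2.0, 0.0])
a, b, c, d = np.linalg.solve(M, rhs)
# a = 1, b = 0, c = -3, d = 0, i.e. p(x) = x^3 - 3x
```

This confirms the book's answer p(x) = x^3 - 3x.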
29) Use a system of equations to write the partial fraction decomposition of the rational expression. Then solve the system using matrices.
$\frac{4x^2}{(x+1)^2(x-1)} = \frac{A}{x-1}+\frac{B}{x+1}+\frac{C}{(x+1)^2}$
Multiply both sides of the equation by $(x+1)^2(x-1)$ to get
$4x^2= A(x+1)^2+ B(x-1)(x+1)+ C(x-1)= Ax^2+ 2Ax+ A+ Bx^2- B+ Cx- C$
$4x^2= (A+ B)x^2+ (2A+ C)x+ (A- B- C)$

Equating coefficients, A+ B= 4, 2A+ C= 0, and A- B- C= 0.
Those correspond to the matrix equation
$\begin{bmatrix}1 & 1 & 0 \\ 2 & 0 & 1 \\ 1 & -1 & -1\end{bmatrix}\begin{bmatrix}A \\ B \\ C\end{bmatrix}= \begin{bmatrix}4 \\ 0 \\ 0\end{bmatrix}$
And the final answer should be:
$\frac{1}{x-1}+\frac{3}{1+x}-\frac{2}{(x+1)^2}$
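The coefficients can also be checked with the Heaviside cover-up method (not used in the thread, added here as an independent verification):

```python
# Cover-up method: multiply by one factor, cancel, and evaluate at its root.
f = lambda x: 4 * x**2 / ((x + 1)**2 * (x - 1))

A = 4 * 1**2 / (1 + 1)**2     # cover (x - 1), evaluate at x = 1
C = 4 * (-1)**2 / (-1 - 1)    # cover (x + 1)^2, evaluate at x = -1
B = 4 - A                     # matching the x^2 coefficient: A + B = 4

# spot-check the decomposition at an arbitrary point
x = 2.0
assert abs(f(x) - (A / (x - 1) + B / (x + 1) + C / (x + 1)**2)) < 1e-12
```

This gives A = 1, B = 3, C = -2, agreeing with the matrix solution.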
47) Consider the matrix..
$A=\begin{bmatrix} 1 & k & 2 \\ -3 & 4 & 1 \end{bmatrix}$
If A is the augmented matrix of a system of linear equations, find the value(s) of k such that the system is consistent.
(The answer is all real k not equal to -4/3; I just want to know how they got this so I understand it.)
Row reduce the matrix just as you would to solve it. Since there are only two rows, that is simple: Add 3 times the first row to the second to get
$\begin{bmatrix} 1 & k & 2 \\ 0 & 4+3k & 7\end{bmatrix}$
That last row corresponds to (4+3k)y= 7. To solve that you must divide by 4+ 3k, which you cannot do if 4+ 3k= 0, that is, if k= -4/3.
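That condition can be confirmed numerically by comparing the rank of the coefficient matrix with the rank of the augmented matrix (a check added here for illustration; the system is consistent exactly when the two ranks agree):

```python
import numpy as np

def consistent(k):
    """Rouche-Capelli test for the augmented matrix [1 k 2; -3 4 1]."""
    coeff = np.array([[1.0, k], [-3.0, 4.0]])
    aug = np.array([[1.0, k, 2.0], [-3.0, 4.0, 1.0]])
    return np.linalg.matrix_rank(coeff) == np.linalg.matrix_rank(aug)

# At k = -4/3 the coefficient rows are proportional (-3 * [1, -4/3] = [-3, 4])
# but the right-hand sides are not, so that single value is excluded.
```

Every other value of k gives the coefficient matrix full rank, so the system has a unique solution there.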
58) True or false: Every matrix has a unique reduced row-echelon form.
True, of course. You can find the reduced row-echelon form by following a specific algorithm which, if done correctly, will always give the same result for the same matrix.
Thank you in advance. I appreciate it.
Theoretical Condensed Matter Physics
Leo P. Kadanoff
Ph.D., Harvard, 1960.
John D. MacArthur Distinguished Srvc. Prof. Emeritus, Depts. Physics and Math., James Franck Inst., Enrico Fermi Inst., and the College
History of Science, Theoretical physics, hydrodynamics, statistical physics.
Leo Kadanoff's homepage
I do research connected with the history and philosophy of science, particularly aimed at describing and interpreting condensed matter theory. I am also interested in the connection between condensed
matter and particle physics.
The physics research of my group is aimed at non-linear systems, with the aid of techniques coming from statistical physics. More specifically, we are studying how turbulent, chaotic, and stochastic
behavior arises in dynamical systems, particularly hydrodynamical and biological systems. For example, we have been extensively concerned with the development of simplified models for the development
of fractal patterns (Loewner evolution), turbulence, and biological systems. We have also studied the nature of mathematical infinities in the flow of fluids. We use both analytical and simulational
methods and try to use experimental data whenever possible. Our basic goal is to understand the nature of the complex motion that can arise in even very simple systems. This work has applications to
mathematics, astronomy, and chemical engineering. My most recent work has aimed at understanding the eigenvalue structure of singular Toeplitz matrices.
In the year 2007, I was President of the American Physical Society.
Selected Publications:
• More is the Same; Mean Field Theory and Phase Transitions, Journal of Statistical Physics. Volume 137, pp 777-797, (December 2009) arXiv:0906.0653.
• Theories of Matter: Infinities and Renormalization, to be published in The Oxford Handbook of the Philosophy of Physics editor Robert Batterman, Oxford University Press (2011). arXiv:1002.2985.
• Relating Theories via Renormalization, submitted to studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, (August 2010).
• Expansions for Eigenfunction and Eigenvalues of large-n Toeplitz Matrices, Papers in Physics, vol. 2 art 020003 (2010) arXiv:0906.0760.
• Hip Bone is Connected to... II, Leo P. Kadanoff, Physics Today (March 2009).
• Discrete Charges on a Two Dimensional Conductor, M.Kl. Berkenbusch, I. Claus, C. Dunn, L.P. Kadanoff, M. Nicewicz, and S.C. Venkataramani. Journal of Statistical Physics, 116, 5/6, (September
• Trace for the Loewner Equation with Singular Forcing, Leo P. Kadanoff and Marko Kleine Berkenbusch. Nonlinearity, 17 4, R41-R54 (2004).
• Stochastic Loewner evolution driven by Lévy processes, I. Rushkin, P. Oikonomou, L.P. Kadanoff and I.A. Gruzberg. J. Stat. Mech. P01001 (January 2006).
• The Loewner Equation: Maps and Shapes. Ilya A. Gruzberg and Leo P. Kadanoff. Journal of Statistical Physics, 114 5, 1183-1198 (March 2004).
• An educational moment, Leo P. Kadanoff, Physics Today, (September 2006).
• Pulled Fronts and the Reproductive Individual, Leo P. Kadanoff, Journal of Statistical Physics, p. 1-4 (April 2006).
Related Links:
Updated 2/2011
Kathryn Levin
Ph.D., Harvard, 1970.
Prof., Dept. Physics, James Franck Inst., and the College.
Theoretical physics, solid state physics.
Since 2003 (with the discovery of the fermionic atomic superfluids), our research has moved to the interface of condensed matter and AMO (atomic, molecular and optical) physics. We have been most
interested in what one can learn from these trapped atomic gases about high temperature superconductors. Others in the Physics community are interested in using these systems as prototypes for a very
strongly interacting fermionic system such as one finds in nuclear matter, in astrophysics or quark-gluon plasmas. This is an exciting time period where a range of different physics sub-disciplines
have come together to address some of their common interests. This seems in many ways all the more novel because the difference in energy scales between, say, the quark-gluon plasmas and the atomic
gases represents 21 orders of magnitude!
Superfluidity in fermions is ultimately driven by an attractive interaction, which effectively converts fermions into "bosons" (called Cooper pairs) which can then Bose condense. They will do so into
their lowest energy state which corresponds to a situation where the individual fermions are associated with time reversed states. The remarkable aspect of the ultracold gases is that one can with a
magnetic field tune the strength of the attractive interaction from weak (Bardeen Cooper Schrieffer or BCS) to strong (Bose Einstein condensation or BEC). That there is a connection between these
atomic Fermi superfluids and high temperature superconductors is due, we presume, to the fact that the cuprates are mid-way between BCS and BEC. This can be justified by the small size of the Cooper
pairs and by the relatively high transition temperatures, both of which suggest the attraction or "glue" is stronger than in conventional superconductors.
Our work is based on many body quantum field theory and is closely tied to experiments. We have, on several occasions collaborated with experimentalists in the cold gas community. Recently we have
been exploring the commonalities of these two different systems via spectroscopic, scattering and transport probes. Of particular interest lately has been the question of "perfect fluidity" in the
atomic gases, associated with very low shear viscosity. This, we argue, is related to "bad metal" behavior in the cuprates, with very low conductivity. This perfect fluidity is also of great interest
to physicists who work on Quantum Chromodynamics (QCD).
Selected Publications:
• "Heat Capacity of a strongly-Interacting Fermi Gas." J. Kinast, A. Turlapov, J.E. Thomas, Qijin Chen, Jelena Stajic, Science 307, 1296 (2005).
• "Theory of Radio Frequency Spectroscopy Experiments in Ultracold Fermi Gases and Their Relation to Photoemission Experiments in the Cuprates", Qijin Chen, Yan He, Chih-Chun Chien and K. Levin,
Rep. Prog. Phys. 72 (2009) 122501.
• "Comparison of Different Pairing Fluctuation Approaches to BCS-BEC Crossover", K. Levin, Qijin Chen, Chih-Chun Chien and Yan He. Annals of Physics, 325, 233-264 (2010).
• "Establishing the Presence of Coherence in Atomic Fermi Superfluids: Spin Flip and Spin-Preserving Bragg Scattering at Finite Temperatures", Hao Guo, Chih-Chun Chien and K. Levin, Phys. Rev. Lett
105, 120401 (2010)
• "Microscopic Approach to Viscosities in Superfluid Fermi Gases: From BCS to BEC" H. Guo, D. Wulin, Chih-Chun Chien, K. Levin, ArXiv 1008.0423
• "Perfect fluids and Bad Metals: Transport Analogies Between Ultracold Fermi gases and High T_c superconductors". Guo, D. Wulin, Chih-Chun Chien and K. Levin, ArXiv 1009.4678
• "Conductivity in Pseudogapped Superconductors: The Role of the Fermi Arcs" Dan Wulin, Benjamin M. Fregoso, Hao Guo, Chih-Chun Chien and K. Levin, ArXiv 1012.4498
• "Spin Transport in Cold Fermi Gases: A Pseudogap Interpretation of Spin Diffusion Experiments at Unitarity" Dan Wulin, Hao Guo, Chih-Chun Chien and K. Levin, ArXiv 1102.0997
• "Nucleation of Spontaneous Vortices in Trapped Fermi Gases Undergoing a BCS-BEC Crossover" A. Glatz, H. Roberts, I.S. Aronson, K. Levin, ArXiv 1102.1792
Related Links:
Updated 2/2011
Michael Levin
Ph.D., Massachusetts Institute of Technology, 2006.
Assistant Prof., Dept. Physics, James Franck Inst., and the College
Theoretical physics, condensed matter physics
Recently, my research has focused on two areas of quantum condensed matter physics. The first area is the study of "topological phases" of matter, such as quantum Hall liquids and topological
insulators. These phases have a rich internal structure, but unlike conventional phases like magnets or superconductors, this structure has nothing to do with symmetry breaking or order parameters.
Instead, the defining features of these phases have a topological character. As a result, entirely new concepts and tools need to be constructed to understand these systems. Much of my research is
devoted to developing these new methods and approaches.
My second area of focus is at the intersection of quantum information theory and condensed matter physics. Here the fundamental problems are (1) to determine which quantum many-body systems can be
efficiently simulated on a classical computer and (2) to develop methods to accomplish this task. In addition to its potential practical implications, this problem is closely related to many basic
conceptual questions such as the nature of entanglement in many-body ground states and the classification of gapped quantum phases of matter.
Selected publications:
• M. Levin. Protected edge modes without symmetry. Phys. Rev. X 3, 021009 (2013).
• M. Levin and Z.-C. Gu. Braiding statistics approach to symmetry-protected topological phases. Phys. Rev. B 86, 115109 (2012).
• M. Levin and A. Stern. Fractional topological insulators. Phys. Rev. Lett. 103, 196803 (2009).
• M. Levin and C. P. Nave. Tensor renormalization group approach to 2D classical lattice models. Phys. Rev. Lett. 99, 120601 (2007).
• M. Levin and X.-G. Wen. Detecting topological order in a ground state wave function. Phys. Rev. Lett. 96, 110405 (2006).
Related Links:
Updated 10/2013
Peter B. Littlewood
Ph.D., Cambridge, 1980.
Prof., Dept. Physics, James Franck Inst., and the College; Associate Director - Physical Sciences & Engineering, Argonne Natl. Lab.
Theoretical physics, condensed matter physics.
Professor Littlewood's research has focused on the dynamics of collective transport; phenomenology and microscopic theory of high-temperature superconductors, transition metal oxides and other
correlated electronic systems; and optical properties of highly excited semiconductors. He has applied his methods to engineering, including holographic storage, optical fibers and devices.
Selected Publications:
• Band Structure of SnTe studies by Photoemission Spectroscopy, P.B. Littlewood, B. Mihaila, R.K. Schulze, D.J. Safarik, J.E. Gubernatis, A. Bostwick, E. Rotenberg, C.P. Opeil, T. Durakiewicz, J.L.
Smith, and J.C. Lashley, Physical Review Letters, 105, 086404 (2010).
• Polariton Condensates, D. Snoke and P.B. Littlewood, Physics Today, 63, 42 (August 2010).
Related Links:
Updated 8/2012
Gene F. Mazenko
Ph.D., Massachusetts Institute of Technology, 1971.
Prof., Dept. Physics, James Franck Inst., and the College.
Theoretical physics, statistical physics.
Various materials, for example magnets, superconductors, liquid crystals, diblock copolymers and conventional solids, when temperature quenched from a high to a low temperature grow over time into
ordered structures. In quenching a material from a temperature where it is a liquid down to a temperature corresponding to a solid we go from a material which is a uniform fluid to a final state
where we have a crystalline solid. In the kinetic process taking us from the fluid to the crystal one finds intermediate states where the order is broken up by defects. Examples are dislocations in
solids and vortices in magnets. We are interested in the appearance, motion and annihilation of these defects.
In the case of magnets and superfluids, where the final ordered state is uniform, the theory has been developed to the point where we have been able to answer questions like: What is the velocity distribution for these evolving defects?
We are currently interested in the fundamental question of the nature of defect structures in pattern forming systems. Our interest is in those structures which form naturally under experimental
circumstances. Our guide is to try and understand recent experiments on microphase separating diblock copolymer systems. Such systems grow a layered or striped phase. These systems are fundamentally
important as prototypical two dimensional ordering systems but also as building blocks on the nano scale. Previously we have developed numerical techniques for looking at the nature of kinetic models
proposed to describe systems of this type.
We are also working on the theoretical description of the kinetics of the liquid-glass transition. We have developed a new field theoretical model, called the hindered diffusion model, which leads naturally to characteristic times that are activated, growing as e^{A/T} as the temperature T is lowered. Much remains to be worked out for this model.
Selected Publications:
• G.F. Mazenko, Vortex Velocities in the O(n) Symmetric TDGL Model. Phys. Rev. Lett 78, 401, 1997.
• H. Qian and G. F. Mazenko, Vortex Dynamics in a Coarsening Two Dimensional XY Model, Phys. Rev. E 68, 021109/4 (2003).
• H. Qian and G. F. Mazenko, Defect Structures in the Growth Kinetics of the Swift-Hohenberg Model, Phys. Rev. E 67, 036102/12 (2003).
Related Links:
Updated 8/2006
Dam T. Son
Ph.D., Institute for Nuclear Research - Moscow, 1995.
University Prof., Dept. Physics, Enrico Fermi Institute, James Franck Inst., and the College.
Theoretical physics
I have a broad research program encompassing several areas of theoretical physics.
String Theory: applications of gauge-gravity duality in the physics of the quark-gluon plasma and other strongly interacting systems.
Nuclear Physics: properties of the hot and dense states of matter, e.g., the quark gluon plasma and dense quark matter (color superconductors).
Condensed matter physics: physics of the quantum Hall system, graphene; applications of quantum field theory.
Atomic physics: many-body physics of cold trapped atoms, BCS-BEC crossover, applications of quantum field theoretical techniques.
Selected Publications:
• R. Baier, A.H. Mueller, D. Schiff, and D.T. Son, "Bottom-up" thermalization in heavy ion collisions, Phys. Lett. B 502, 51 (2001).
• P. Kovtun, D.T. Son, and A.O. Starinets, Viscosity in Strongly Interacting Quantum Field Theories from Black Hole Physics, Phys. Rev. Lett. 94, 111601 (2005).
• Y. Nishida and D.T. Son, Epsilon Expansion for a Fermi Gas at Infinite Scattering Length, Phys. Rev. Lett. 97, 050403 (2006).
• C. Hoyos and D.T. Son, Hall Viscosity and Electromagnetic Response, Phys. Rev. Lett. 108, 066805 (2012).
Related Links:
Updated 10/2012
Paul B. Wiegmann
Ph.D., Landau Inst., Moscow, 1978.
Robert W. Reneker Distinguished Service Professor, Dept. Physics, James Franck Inst., Enrico Fermi Inst., and the College.
Theoretical physics, condensed matter physics.
Condensed Matter Physics: Electronic Physics in Low Dimensions, Quantum Magnetism, Correlated Electronic Systems, Quantum Hall Effects, Topological aspects of Condensed Matter Theories, Electronic
systems far from equilibrium.
Statistical Mechanics: Non-equilibrium Statistical Mechanics, Critical phenomena governed by Conformal Symmetry, Conformal stochastic processes, Stochastic geometry, Random Matrix Theory.
Mathematical Physics: Integrable Models of Quantum Field Theory and Statistical Mechanics, Quantum Groups and Representation theory, Anomalies in Quantum Field Theory, Conformal Field Theory, Quantum
Nonlinear Physics: Driven non-equilibrium systems, Turbulence, Fractal aspects of Pattern Formation, Interface Dynamics, Incommensurate Systems, Integrable aspects of nonlinear physics, Quantum
Non-linear Phenomena.
Papers I've published since 1993 are available in x-arXiv/cond-mat and x-arXiv/hep-th.
Related Links:
Updated 2/2011
Thomas A. Witten
Ph.D., San Diego, 1971.
Prof. Emeritus, Dept. Physics, James Franck Inst., and the College
Theoretical condensed matter physics, weakly-connected matter.
Thomas Witten's homepage
My research concerns collective mechanisms for creating spontaneous structure in forms of conventional condensed matter such as polymer liquids, evaporating liquid drops, layer-forming surfactant
micelles and thin elastic sheets. All these materials when subjected to structureless external forces develop new forms of spontaneous structure at a fine length scale, such as the sharp folds of a
crumpled sheet or the thin ring stain left when a drop of dirty fluid has evaporated. These new forms of force-induced structure often arise from fundamental mechanical properties such as the
competition between bending and stretching energy in an elastic sheet or between evaporative flows and capillary forces in an evaporating drop. They may arise from fundamental statistical properties
such as the randomness of a chain polymer molecule or the random, tenuous structure of a colloidal aggregate. In either case the fundamental origins of the resulting structures mean that they can be
used and manipulated in a wide range of material realizations independent of the specific properties of the materials.
Selected Publications:
Related Links:
Updated 2/2011
Wendy Zhang
Ph.D., Harvard, 2001.
Associate Professor, Dept. of Physics, James Franck Institute, and the College.
Theoretical physics, soft condensed matter.
I am interested in the formation of singularities, e.g. divergences in physical quantities such as pressure, on a fluid surface due to flow and surface tension effects. Two examples are the breakup
of a liquid drop and viscous entrainment. In studying how nonlinear interactions give rise to singularities, we hope to understand the kinds of simplification in dynamics that can result when a
physical process involves disparate length- and time-scales. We also hope that surface tension effects can be used to create structures which span a few molecules in one dimension but are macroscopic
in other dimensions. More generally, thin tendril-like structures which extend over large distances arise in many contexts and can often strongly influence the large-scale dynamics. Examples include
thermal and compositional convection, Coulomb fission and the formation of tether structure on a fluid surface due to optical radiation pressure. We use analytical methods, often based on asymptotic
analysis, and numerical simulations. Much of this work is inspired by, or happens in parallel with, experimental work.
Selected Publications
• Balance of actively generated contractile and resistive forces controls cytokinesis dynamics. W. W. Zhang & D. N. Robinson, PNAS 102, 2005.
• Drop Splashing on a Dry Smooth Surface. L. Xu, W. W. Zhang & S. R. Nagel, Phys. Rev. Lett. 94 2005.
• Viscous Entrainment from a Nozzle: Singular Liquid Spouts. W. W. Zhang, Phys. Rev. Lett. 93 2004.
• Persistence of Memory in Drop Breakup: The Breakdown of Universality. P. Doshi, I. Cohen, W. W. Zhang, P. Howell, M. Siegel, O. A. Basaran, & S. R. Nagel, Science, 302 2003.
• Shake-Gels: Shear-induced gelation of laponite-PEO mixtures. J. Zebrowski, V. Prasad, W. W. Zhang, L. M. Walker & D. A. Weitz, Colloid & Surface Sci. A 213 2003.
Related Links:
Updated 6/2008 | {"url":"http://physics.uchicago.edu/research/areas/condensed_t.html","timestamp":"2014-04-16T10:24:23Z","content_type":null,"content_length":"42633","record_id":"<urn:uuid:9566885f-9be7-4307-8012-f7a586322de6>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fig. 1 The bottom line: Euclid's 'endless breadth' (and endless parallel lines) under the SEM.
Fig. 3 Absolutely breathless! Euclid's 'breadth-less length' under the SEM again.
(Speech-bubble caption:) I would show you what a line is, but my compass is not long enough. Do you understand the problem we
mathematicians run into, Bill? | {"url":"http://youstupidrelativist.com/01Math/02Line/02Euclid.html","timestamp":"2014-04-19T17:01:22Z","content_type":null,"content_length":"131982","record_id":"<urn:uuid:3d8603a3-bcea-426b-a01c-98f7b84b4967>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00498-ip-10-147-4-33.ec2.internal.warc.gz"} |
Strong or weak force??
1. The problem statement, all variables and given/known data
So all I want to know is how you tell if it is a strong or weak force interaction.
e.g. K^+ → pi^0 + pi^0 + pi^+ (all mesons)
So I determined that energy is conserved:
Baryon and lepton numbers are also conserved.
Now having determined that the interaction is possible, how do I tell if it is via strong or weak force. | {"url":"http://www.physicsforums.com/showthread.php?t=585776","timestamp":"2014-04-18T10:48:22Z","content_type":null,"content_length":"30286","record_id":"<urn:uuid:fb5543a2-ed61-4068-8afb-850d931d1f24>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
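One standard way to organise this kind of bookkeeping (not from the thread; the quantum numbers below are the usual quark-model assignments): tabulate each additive quantum number and see what changes. Here strangeness changes, which rules out the strong and electromagnetic interactions, so this decay must be weak:

```python
# Quantum numbers (baryon number B, lepton number L, strangeness S, charge Q)
# for the particles in the candidate decay K+ -> pi0 + pi0 + pi+.
PARTICLES = {
    "K+":  {"B": 0, "L": 0, "S": +1, "Q": +1},
    "pi0": {"B": 0, "L": 0, "S": 0,  "Q": 0},
    "pi+": {"B": 0, "L": 0, "S": 0,  "Q": +1},
}

def delta(quantity, initial, final):
    """Change of a quantum number from the initial to the final state."""
    total = lambda state: sum(PARTICLES[p][quantity] for p in state)
    return total(final) - total(initial)

initial, final = ["K+"], ["pi0", "pi0", "pi+"]
for q in ("B", "L", "Q", "S"):
    print(q, delta(q, initial, final))
# B, L and Q are all unchanged, but S changes by -1.  A strangeness-violating
# decay cannot proceed via the strong or electromagnetic interaction, so this
# K+ decay goes through the weak interaction (consistent with its long lifetime).
```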
Explanation needed on a sequence proof
June 2nd 2011, 11:44 AM
Explanation needed on a sequence proof
The Theorem is every convergent sequence is bounded:
Suppose $(a_n)\rightarrow\alpha$. Then, by Theorem 2.1 (in my book), $(|a_n|)\rightarrow |\alpha|$.
Quite arbitrarily, let us choose $\epsilon = 1$ (so I could pick any number, not just 1?), and then use the definition of a limit to conclude there exists $N$ such that $\left| |a_n|-|\alpha|\right| < 1, \ \forall n>N$.
That is, for $n > N$, $|a_n| < |\alpha| + 1$.
(I don't understand what is going on with the max part.)
Hence, $\forall n\geq 1$
$|a_n| \leq \text{max}\{|a_1|,|a_2|,\cdots , |a_N|, |\alpha|+1\}$
and so $(a_n)$ is bounded.
June 2nd 2011, 12:10 PM
Actually it is $n\ge N$ then $|a_n|<|\alpha|+1$.
That is, from $N$ on $|\alpha|+1$ bounds the sequence.
But we do not know what happens to $|a_j|$ if $1\le j<N$.
That is where the max comes in.
Here is the way I taught it: let $M = \sum\limits_{k = 1}^N {\left| {a_k } \right|}$.
Now $\left( {\forall n} \right)$ we have $|a_n|<|\alpha|+1+M$
June 2nd 2011, 12:22 PM
Why can you just add $M$ at the end? It is understandable that $|\alpha| + 1 + M$ is greater than $|a_n|$ when $|a_n| \leq M$, since anything added to $M$ would be greater than $|a_n|$.
June 2nd 2011, 12:30 PM
That is simple to answer.
If $1\le n\le N$ then $|a_n|\le M\le M+|\alpha|+1$
If $n\ge N$ then $|a_n|\le |\alpha|+1\le M+|\alpha|+1$ | {"url":"http://mathhelpforum.com/differential-geometry/182253-explanation-needed-sequence-proof-print.html","timestamp":"2014-04-17T15:36:37Z","content_type":null,"content_length":"14553","record_id":"<urn:uuid:01fb8bfa-13fd-4cc0-8d69-9eb3ee9bac7c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00021-ip-10-147-4-33.ec2.internal.warc.gz"} |
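A compressed restatement of the whole argument (my own summary, not part of the thread):

```latex
\textbf{Claim.} Every convergent sequence $(a_n)\to\alpha$ is bounded.

\textbf{Proof sketch.} Taking $\varepsilon = 1$ in the definition of the limit,
there is an $N$ with
\[
  n \ge N \;\Longrightarrow\; \bigl|\,|a_n| - |\alpha|\,\bigr| < 1
  \;\Longrightarrow\; |a_n| < |\alpha| + 1 .
\]
Only finitely many terms are left out, so setting $M = \max\{|a_1|,\dots,|a_N|\}$
gives, for every $n$,
\[
  |a_n| \le \max\{M,\; |\alpha| + 1\},
\]
i.e.\ the single constant $\max\{M,\,|\alpha|+1\}$ bounds the whole sequence. $\square$
```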
Is there an axiomatic approach of the notion of dimension ?
There are many notions of dimension: algebraic, topological, Hausdorff, Minkowski... (and others). While the topological one generalizes the algebraic one, the last three need not coincide for every
sets. Yet it is generally acknowledged that the Hausdorff dimension has "nice enough" properties to work with (the interest of the Minkowski dimension lies mainly in the fact that it's easier to compute).
So my main question is this: is there an axiomatic approach that would tidy up this mess? For example, is there a result of the form: if you impose these axioms then the only map from "reasonable sets" to the set of positive real integers is the Hausdorff dimension? (Or another one?) If so, what are they?
Is there also a clearly identified list of properties that you would ask of any notion of dimension? I give the following as an example:
• it should coincide with the algebraic dimension for finite dimensional vector spaces
• dim A $\leq$ dim B if $A \subset B$
• some sort of nice behaviour for Cartesian products (at least for reasonable sets)
• some sort of nice behaviour for infinite increasing unions and/or decreasing intersections
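Since the question contrasts the Hausdorff dimension with the easier-to-compute Minkowski dimension, here is a small numerical sketch of box-counting (Minkowski) dimension estimation. This is my own illustration, not from the question; the point set, construction depth and scales are all chosen for the demo:

```python
import numpy as np

def box_counting_dimension(points, epsilons):
    """Estimate the Minkowski (box-counting) dimension of a set of points
    on the line: count occupied boxes N(eps) at several scales and fit
    log N(eps) ~ -d * log(eps)."""
    counts = [np.unique(np.floor(points / eps)).size for eps in epsilons]
    slope, _intercept = np.polyfit(np.log(epsilons), np.log(counts), 1)
    return -slope

# Middle-thirds Cantor set, truncated at a finite construction depth:
# all sums of a_k / 3**k with ternary digits a_k in {0, 2}.
depth = 10
digits = np.array(np.meshgrid(*[[0, 2]] * depth)).reshape(depth, -1).T
cantor = digits @ (3.0 ** -np.arange(1, depth + 1))
cantor = cantor + 0.5 * 3.0 ** -depth   # nudge off box edges (float safety)

eps = 3.0 ** -np.arange(2, 8)   # scales matched to the ternary construction
d = box_counting_dimension(cantor, eps)
print(round(d, 4))              # close to log 2 / log 3 ≈ 0.6309
```

At scales matched to the construction the box counts are exactly 2^k, so the fitted slope recovers log 2 / log 3, the Hausdorff (and Minkowski) dimension of the Cantor set.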
(The Hausdorff dimension, in general, takes values in the non-negative reals, not only in the positive integers. Having fractional Hausdorff dimension can be taken as a decent definition of
"fractal"... ) – Qfwfq Nov 12 '11 at 1:18
my bad, I meant to write positive real numbers of course – glougloubarbaki Nov 12 '11 at 10:10
5 Answers
This might help http://www.springerlink.com/content/y8l2621113212403/: the author claims to have found the axioms for defining the Lebesgue covering dimension. In the paragraph
starting with "the axiomatic problem is an old problem in dimension theory" there is also a list of references that should help. In particular, the paper by Henderson, that would
My gut feeling is that no list of axioms could simultaneously cover Lebesgue dimension, vector space dimension, Krull dimension, fractal dimension,...
It is not clear to me, for example, how axioms would decide whether $\mathbb C$ has dimension $0$, as required by Krull, dimension $1$ as wished by complex geometers or dimension $2$, the
topologists' choice.
(And I haven't even begun to examine the logicians' claim that it has dimension $2^{\aleph_0}$ over $\mathbb Q$)
But this is subjective, so let me say something indisputable: your axiom $A\subset B\Rightarrow dim A \leq dim B$ does not hold for Krull dimension.
Indeed, if $A$ is any domain of Krull dimension $n\gt 0$ and if $K$ is its field of fractions, we have $dim K=0$ and the inequality $dim A=n \leq dim K=0$ is not true.
• So, if the axiomatics work out, we would conclude that Krull dimension is not a dimension. OK? – Gerald Edgar Nov 11 '11 at 22:33
• Dear @Gerald, of course what you say is logically correct. And as you may have guessed my position is that since I firmly believe that Krull dimension is a dimension, the proposed axiomatics can't work out. – Georges Elencwajg Nov 11 '11 at 23:03
• The Krull dimension works out fine as long as you replace the condition $A\subset B$ by a monomorphism ${\rm Spec}(A)\to{\rm Spec}(B)$ in the category of affine schemes. Or, to put it another way, if $B\to A$ is an epimorphism of commutative rings then ${\rm dim} A\le {\rm dim} B$. So, "subset" should be replaced by "monomorphism" with respect to the correct category. – George Lowther Nov 11 '11 at 23:33
• Maybe $\mathbb{C}$ has dimension 0 if you look at it in the category of affine schemes (well, ${\rm Spec}\mathbb{C}$ at least), dimension 1 if you think of it as the closed points of the variety ${\rm Spec}(\mathbb{C}[x])$ in the category of complex varieties, and dimension 2 in the category of CW complexes. – George Lowther Nov 11 '11 at 23:50
• I'd expect the concept of dimension to apply to a category of spaces in some sense. Looked at this way, Krull dimension applies to schemes, giving Lowther's point of view. If you apply it to rings instead, fine (since obviously any commutative ring gives an affine scheme), but this is no longer a category of spaces, so one wouldn't expect the axioms for spaces to apply. So in short, when considering whether a system of axioms validates a particular notion of dimension, find the appropriate category of spaces for that notion and apply the axioms there. – Toby Bartels Nov 12 '11 at 0:25
In metric geometry, dimension should satisfy the following axioms; otherwise it should not be called "dimension". (There are exceptions, for example Minkowski dimension.)
Normalization axiom. For any $m\in\mathbb Z_{\ge 0}$, $$\dim\mathbb E^m=m.$$
Cover axiom. If $\{A_n\}_{n=1}^\infty$ is a countable closed cover of $X$ then $$\dim X=\sup\nolimits_n\{\dim A_n\}$$
Product axiom. For any spaces $X$ and $Y$, $$\dim (X\times Y) \le \dim X+ \dim Y.$$
The "countable closed" bit bothers me for generalizations. Some objects are countable, but still have high dimension in important senses, like $\mathbb Q^n$. Love the product axiom. It
seems essentially fundamental. The $\mathbb E^m$ part will be the hardest to generalize. I think a general dimension shouldn't have any axiom of the type. – Will Sawin Nov 12 '11 at
For Minkowski dimension the cover axiom holds only for finite union. – Anton Petrunin Nov 12 '11 at 23:25
@Will Sawin, it seems like the normalization axiom might be needed in order to have the notion of dimension be unique. – MTS May 21 '12 at 15:54
Not a complete answer, but there is a surprisingly general generalization of the dimension of a finite-dimensional vector space available in any (braided?) monoidal category. In any such
category, there is a notion of dimension of a dualizable object $c$ given by the trace of the identity endomorphism $\text{tr}(\text{id}_c)$. It takes values in $\text{End}(1)$ where $1$ is
the monoidal unit and behaves as expected under tensor product. In the category of finite-dimensional vector spaces over a field $K$ it gives the image of the dimension in $K$.
Most notably, this notion of dimension includes as special cases several types of Euler characteristic. For example, the dimension of a dualizable chain complex (I think this is equivalent to: bounded complex of finitely-generated projective modules) is its Euler characteristic, as is the dimension of a dualizable object in the symmetric monoidal category of dualizable
spectra. A nice exposition is given in Ponto and Shulman's Traces in symmetric monoidal categories, which in particular describes how to use these ideas to understand the Lefschetz fixed
point theorem.
There is a notion of "Krull dimension" valid for arbitrary topological spaces, whose definition is totally analogous to that of algebraic varieties, but using lattices of closed
subsets instead of rings of functions.
As far as I know, it is the only dimension function $\dim$ defined for arbitrary topological spaces, with the following properties:
• If $Y$ is a subspace of $X$, then $\dim Y \leq \dim X$.
• $\dim (X \times Y) \leq \dim X + \dim Y $.
• It coincides with Grothendieck's combinatorial dimension on noetherian spaces, and with the standard dimensions (cover and ind) on separable metric spaces.
This beautiful idea goes back to the 60's. You can check the following papers and the references therein.
Section 2 of:
• Sancho de Salas, J.B. and M.T.: "Dimension of dense subalgebras of $C(X)$", in Proceedings of the American Mathematical Society, 105 (1989)
or the introduction of:
• Sancho de Salas, J.B. and M.T.: "Dimension of distributive lattices and universal spaces", in Topology and its Applications, 42 (1991), 25-36
Not the answer you're looking for? Browse other questions tagged dimension-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/80708/is-there-an-axiomatic-approach-of-the-notion-of-dimension?sort=newest","timestamp":"2014-04-24T14:00:40Z","content_type":null,"content_length":"80636","record_id":"<urn:uuid:45b4d74c-448d-4627-853b-9bd191850d8c>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
The applicability of logic program analysis and transformation to theorem proving
Results 1 - 10 of 27
, 1995
"... The control of polyvariance is a key issue in partial deduction of logic programs. Certainly, only finitely many specialised versions of any procedure should be generated, while, on the other
hand, overly severe limitations should not be imposed. In this paper, well-founded orderings serve as a star ..."
Cited by 60 (14 self)
Add to MetaCart
The control of polyvariance is a key issue in partial deduction of logic programs. Certainly, only finitely many specialised versions of any procedure should be generated, while, on the other hand,
overly severe limitations should not be imposed. In this paper, well-founded orderings serve as a starting point for tackling this so-called "global termination" problem. Polyvariance is determined
by the set of distinct "partially deduced" atoms generated during partial deduction. Avoiding ad-hoc techniques, we formulate a quite general framework where this set is represented as a tree
structure. Associating weights with nodes, we define a well-founded order among such structures, thus obtaining a foundation for certified global termination of partial deduction. We include an
algorithm template, concrete instances of which can be used in actual implementations, prove termination and correctness, and report on the results of some experiments. Finally, we conjecture that
the proposed framewor...
- THEORY AND PRACTICE OF LOGIC PROGRAMMING , 2002
"... Program specialisation aims at improving the overall performance of programs by performing source to source transformations. A common approach within functional and logic programming, known
respectively as partial evaluation and partial deduction, is to exploit partial knowledge about the input. It ..."
Cited by 54 (12 self)
Add to MetaCart
Program specialisation aims at improving the overall performance of programs by performing source to source transformations. A common approach within functional and logic programming, known
respectively as partial evaluation and partial deduction, is to exploit partial knowledge about the input. It is achieved through a well-automated application of parts of the Burstall-Darlington
unfold/fold transformation framework. The main challenge in developing systems is to design automatic control that ensures correctness, efficiency, and termination. This survey and tutorial presents
the main developments in controlling partial deduction over the past 10 years and analyses their respective merits and shortcomings. It ends with an assessment of current achievements and sketches
some remaining research challenges.
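As a toy illustration of the idea the abstract describes, specialising a program with respect to statically known input, here is a sketch in Python (the power example and all names are my own, not from the paper):

```python
def power(base, n):
    # Generic, interpreter-style definition: n is the "static" input.
    result = 1
    for _ in range(n):
        result *= base
    return result

def specialise_power(n):
    """Toy 'partial evaluator': given the static argument n, unfold the
    loop and emit a residual program specialised to that n."""
    body = " * ".join(["base"] * n) if n > 0 else "1"
    src = f"def power_{n}(base):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)        # compile the residual program
    return namespace[f"power_{n}"], src

power_5, residual_src = specialise_power(5)
print(residual_src)             # loop fully unfolded: base * base * ...
print(power_5(2))               # 32, the same result as power(2, 5)
```

The residual `power_5` does no looping or testing on `n` at run time; that work was done once, at specialisation time.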
- Handbook of Logic in Artificial Intelligence and Logic Programming , 1994
"... data types are facilitated in Godel by its type and module systems. Thus, in order to describe the meta-programming facilities of Godel, a brief account of these systems is given. Each constant,
function, predicate, and proposition in a Godel program must be specified by a language declaration. The ..."
Cited by 46 (3 self)
Add to MetaCart
data types are facilitated in Gödel by its type and module systems. Thus, in order to describe the meta-programming facilities of Gödel, a brief account of these systems is given. Each constant, function, predicate, and proposition in a Gödel program must be specified by a language declaration. The type of a variable is not declared but inferred from its context within a particular program statement. To illustrate the type system, we give the language declarations that would be required for the program in Figure 1. BASE Name. CONSTANT Tom, Jerry : Name. PREDICATE Chase : Name * Name; Cat, Mouse : Name. Note that the declaration beginning BASE indicates that Name is a base type. In the statement Chase(x,y) <- Cat(x) & Mouse(y). the variables x and y are inferred to be of type Name. Polymorphic types can also be defined in Gödel. They are constructed from the base types, type variables called parameters, and type constructors. Each constructor has an arity ≥ 1 attached to it. As an...
- Logic Program Synthesis and Transformation. Proceedings of LOPSTR’96, LNCS 1207 , 1996
"... This paper is concerned with the problem of removing, from a given logic program, redundant arguments. These are arguments which can be removed without affecting correctness. Most program
specialisation techniques, even though they perform argument filtering and redundant clause removal, fail to re ..."
Cited by 42 (17 self)
Add to MetaCart
This paper is concerned with the problem of removing, from a given logic program, redundant arguments. These are arguments which can be removed without affecting correctness. Most program
specialisation techniques, even though they perform argument filtering and redundant clause removal, fail to remove a substantial number of redundant arguments, yielding in some cases rather
inefficient residual programs. We formalise the notion of a redundant argument and show that one cannot decide effectively whether a given argument is redundant. We then give a safe, effective
approximation of the notion of a redundant argument and describe several simple and efficient algorithms calculating based on the approximative notion. We conduct extensive experiments with our
algorithms on mechanically generated programs illustrating the practical benefits of our approach.
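A very crude Python analogue of the redundant-argument idea (my own toy, much weaker than the paper's analysis for logic programs): flag parameters that are never read in a function body, which is a safe but very incomplete sufficient condition for removability:

```python
import ast

SRC = """
def append_length(xs, acc, junk):
    return acc + len(xs)
"""

def redundant_params(src):
    """Report parameters never read in the function body: unused implies
    removable, though the converse need not hold (a real analysis, like the
    paper's, must approximate redundancy much more carefully)."""
    fdef = ast.parse(src).body[0]
    params = {a.arg for a in fdef.args.args}
    used = {node.id for node in ast.walk(fdef) if isinstance(node, ast.Name)}
    return sorted(params - used)

print(redundant_params(SRC))    # ['junk'] -- carried along but never used
```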
- J. LOGIC PROGRAMMING , 1999
"... ..."
, 2004
"... The so called â cogen approachâ to program specialisation, writing a compiler generator instead of a specialiser, has been used with considerable success in partial evaluation of both
functional and imperative languages. This paper demonstrates that this approach is also applicable to partial eva ..."
Cited by 41 (21 self)
Add to MetaCart
The so-called "cogen approach" to program specialisation, writing a compiler generator instead of a specialiser, has been used with considerable success in partial evaluation of both functional
and imperative languages. This paper demonstrates that this approach is also applicable to partial evaluation of logic programming languages, also called partial deduction. Self-application has not
been as much in focus in logic programming as for functional and imperative languages, and the attempts to self-apply partial deduction systems have, of yet, not been altogether that successful. So,
especially for partial deduction, the cogen approach should prove to have a considerable importance when it comes to practical applications. This paper first develops a generic offline partial
deduction technique for pure logic programs, notably supporting partially instantiated datastructures via binding types. From this a very efficient cogen is derived, which generates very efficient
generating extensions (executing up to several orders of magnitude faster than current online systems) which in turn perform very good and non-trivial specialisation, even rivalling existing online
systems. All this is supported by extensive benchmarks. Finally, it is shown how the cogen can be extended to directly support a large part of Prolog's declarative and non-declarative features and
how semi-online specialisation can be efficiently integrated.
- In Fourth International Symposium on Practical Aspects of Declarative Languages, number 2257 in LNCS , 2002
"... Abstract. Set-based program analysis has many potential applications, including compiler optimisations, type-checking, debugging, verification and planning. One method of set-based analysis is
to solve a set of set constraints derived directly from the program text. Another approach is based on abst ..."
Cited by 29 (10 self)
Add to MetaCart
Abstract. Set-based program analysis has many potential applications, including compiler optimisations, type-checking, debugging, verification and planning. One method of set-based analysis is to
solve a set of set constraints derived directly from the program text. Another approach is based on abstract interpretation (with widening) over an infinite-height domain of regular types. Up till
now only deterministic types have been used in abstract interpretations, whereas solving set constraints yields non-deterministic types, which are more precise. It was pointed out by Cousot and
Cousot that set constraint analysis of a particular program P could be understood as an abstract interpretation over a finite domain of regular tree grammars, constructed from P. In this paper we
define such an abstract interpretation for logic programs, formulated over a domain of non-deterministic finite tree automata, and describe its implementation. Both goal-dependent and
goal-independent analysis are considered. Variations on the abstract domains operations are introduced, and we discuss the associated tradeoffs of precision and complexity. The experimental results
indicate that this approach is a practical way of achieving the precision of set-constraints in the abstract interpretation framework. 1
- In Joint International Conference and Symposium on Logic Programming , 1998
"... We clarify the relationship between abstract interpretation and program specialisation in the context of logic programming. We present a generic top-down abstract specialisation framework, along
with a generic correctness result, into which a lot of the existing specialisation techniques can be cast ..."
Cited by 27 (13 self)
Add to MetaCart
We clarify the relationship between abstract interpretation and program specialisation in the context of logic programming. We present a generic top-down abstract specialisation framework, along with
a generic correctness result, into which a lot of the existing specialisation techniques can be cast. The framework also shows how these techniques can be further improved by moving to more refined
abstract domains. It, however, also highlights inherent limitations shared by all these approaches. In order to overcome them, and to fully unify program specialisation with abstract interpretation,
we also develop a generic combined bottom-up/top-down framework, which allows specialisation and analysis outside the reach of existing techniques. 1
- Proceedings of the International Workshop on Logic Program Synthesis and Transformation (LOPSTR'96), LNCS 1207 , 1996
"... . Recently, partial deduction of logic programs has been extended to conceptually embed folding. To this end, partial deductions are no longer computed of single atoms, but rather of entire
conjunctions; Hence the term "conjunctive partial deduction". Conjunctive partial deduction aims at achieving ..."
Cited by 27 (19 self)
Add to MetaCart
. Recently, partial deduction of logic programs has been extended to conceptually embed folding. To this end, partial deductions are no longer computed of single atoms, but rather of entire
conjunctions; Hence the term "conjunctive partial deduction". Conjunctive partial deduction aims at achieving unfold/fold-like program transformations such as tupling and deforestation within fully
automated partial deduction. However, its merits greatly surpass that limited context: Also other major efficiency improvements are obtained through considerably improved side-ways information
propagation. In this extended abstract, we investigate conjunctive partial deduction in practice. We describe the concrete options used in the implementation(s), look at abstraction in a practical
Prolog context, include and discuss an extensive set of benchmark results. From these, we can conclude that conjunctive partial deduction indeed pays off in practice, thoroughly beating its
conventional precursor on a wide...
, 1996
"... The relation between partial deduction and the unfold/fold approach has been a matter of intense discussion. In this paper we consolidate the advantages of the two approaches and provide an
extended partial deduction framework in which most of the tupling and deforestation transformations of the fol ..."
Cited by 25 (13 self)
Add to MetaCart
The relation between partial deduction and the unfold/fold approach has been a matter of intense discussion. In this paper we consolidate the advantages of the two approaches and provide an extended
partial deduction framework in which most of the tupling and deforestation transformations of the fold/unfold approach, as well the current partial deduction transformations, can be achieved.
Moreover, most of the advantages of partial deduction, e.g. lower complexity and a more detailed understanding of control issues, are preserved. We build on well-defined concepts in partial deduction
and present a conceptual embedding of folding into partial deduction, called conjunctive partial deduction. Two minimal extensions to partial deduction are proposed: using conjunctions of atoms
instead of atoms as the principle specialisation entity and also renaming conjunctions of atoms instead of individual atoms. Correctness results for the extended framework (with respect to computed
answer semantics and finite failure semantics) are given. Experiments with a prototype implementation are presented, showing that, somewhat to our surprise, conjunctive partial deduction not only
handles the removal of unnecessary variables, but also leads to substantial improvements in specialisation for standard partial deduction examples. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1614942","timestamp":"2014-04-20T13:45:38Z","content_type":null,"content_length":"41078","record_id":"<urn:uuid:8d790c60-3b2d-496b-b360-476e6a01ba9d>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
In The Figure Vc(0) = 5V Find An Expression For ... | Chegg.com
In the figure Vc(0) = 5V
Find an expression for Vr. (The switch closes at t=0)
Express your answer in terms of t and appropriate constants, where t in milliseconds.
Answer: \(v_{R}(t) = 5e^{-t/1.5}\ \mathrm{V},\quad t \ge 0\)
Please be very descriptive
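For context (my own sketch, not part of the posted question): in a series RC loop the discharging capacitor appears directly across the resistor, so KVL gives v_R(t) = v_C(t) = V0·e^(-t/RC). The time constant τ = RC = 1.5 ms below is inferred from the posted answer rather than given in the problem statement:

```python
import math

V0 = 5.0        # initial capacitor voltage Vc(0), volts
TAU_MS = 1.5    # time constant RC in milliseconds (inferred from the answer)

def v_R(t_ms):
    """Resistor voltage for t >= 0 while the capacitor discharges:
    v_R(t) = v_C(t) = V0 * exp(-t / tau), with t in milliseconds."""
    return V0 * math.exp(-t_ms / TAU_MS)

print(v_R(0.0))             # 5.0 at the instant the switch closes
print(round(v_R(1.5), 3))   # 1.839 (= V0/e) after one time constant
print(round(v_R(7.5), 3))   # 0.034 -- essentially discharged after 5 tau
```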
Electrical Engineering | {"url":"http://www.chegg.com/homework-help/questions-and-answers/figure-vc-0-5v-find-expression-vr-switch-closes-t-0-express-answer-terms-t-appropriate-con-q3303072","timestamp":"2014-04-24T15:29:27Z","content_type":null,"content_length":"20858","record_id":"<urn:uuid:aad58fdf-96e0-4681-9a6a-cccd57627a1c>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00179-ip-10-147-4-33.ec2.internal.warc.gz"} |
Method for Brightness Level Calculation in the Area of Interest of the Digital X-Ray Image for Medical Applications
The invention relates to methods for evaluating the level of brightness in the area of interest of a digital x-ray image for medical applications by means of the image histogram, using a neural network. The calculation comprises: image acquisition, image histogram calculation, conversion of the histogram values into input arguments of the neural network, and acquisition of the output values of the neural network. The histogram values, calculated with a given class interval and normalized to unity, are used as the input arguments of the neural network. The level of brightness is calculated as a linear function of the output value of the neural network. Neural network learning is performed using a learning set calculated from a given image database; the levels of brightness calculated for each image over the area of interest, scaled to the range of the activation function of the neuron in the output layer of the neural network, are used as the set of target values.
A method for determining brightness in a region of interest of a medical digital X-ray image comprising: acquiring the image; calculating a histogram of the image; converting the histogram values
into input arguments for a neural network; and calculating the brightness using the neural network; wherein the histogram values are calculated using a class interval; further comprising normalizing
the histogram values to unity before using the histogram values as input arguments for the neural network; wherein the brightness is calculated as a linear function of an output value of the neural
network; further comprising neural network learning using a training set calculated using an image database; further comprising using, as a set of target values, a plurality of brightness values
calculated for a region of interest of each image and scaled to a range of a neuron activation function on an output layer of the neural network.
The method of claim 1, wherein the neural network is an artificial feedforward neural network having one hidden layer and an output layer consisting of one neuron and having sigmoid neurons'
activation functions.
The method of claim 1, wherein the class interval is a ratio of a quantile of pixel brightness distribution with near-unity brightness and a number of input arguments for the neural network.
The method of claim 1, wherein brightness for a region of interest is a mean value of pixel brightness within the region of interest.
The method of claim 1, wherein the histogram values are calculated for all pixels of the image.
The method of claim 1, wherein the histogram values are calculated for pixels within a circle centered at an image center, and wherein the circle's diameter is a smallest image dimension.
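A minimal sketch of the pipeline the claims describe: histogram features with a quantile-derived class interval (claim 3), a one-hidden-layer network with a single sigmoid output neuron (claim 2), and a linear rescale of the output (claim 1). The layer sizes, weights and the synthetic image here are invented; in the patent the weights come from supervised training on a labelled image database:

```python
import numpy as np

rng = np.random.default_rng(0)

def histogram_features(image, n_inputs=32, quantile=0.99):
    """Claim-3-style class interval: a near-unity brightness quantile
    divided by the number of network inputs (sizes here are invented)."""
    class_interval = np.quantile(image, quantile) / n_inputs
    edges = np.arange(n_inputs + 1) * class_interval
    hist, _ = np.histogram(image, bins=edges)
    return hist / hist.sum()            # normalized to unity (claim 1)

def forward(x, W1, b1, W2, b2):
    """One hidden layer plus a single sigmoid output neuron (claim 2)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2)         # scalar in (0, 1)

# Untrained demo weights; the patent obtains them by learning against
# target brightness values measured over each image's area of interest.
n_in, n_hidden = 32, 8
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(1, n_hidden)), np.zeros(1)

image = rng.integers(0, 4096, size=(64, 64))    # synthetic 12-bit image
x = histogram_features(image)
y = forward(x, W1, b1, W2, b2)[0]
brightness = 4095.0 * y                 # linear rescale of the net output
print(0.0 <= brightness <= 4095.0)      # True
```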
This application is a Continuation application of International Application PCT/RU2010/000611 filed on Oct. 21, 2010, which in turn claims priority to Russian application No. RU2010112306 filed on
Mar. 31, 2010, both of which are incorporated herein by reference in their entirety.
FIELD OF THE INVENTION [0002]
This invention relates to processing methods of digital x-ray images for medical applications, namely, to the brightness level calculation in the area of interest of the digital x-ray image for
medical applications.
BACKGROUND OF THE INVENTION [0003]
Besides the projections of the patient's organs, an x-ray image generally includes images of parts of the device (e.g. the collimator) and air projections. The area of interest usually means the part of the image containing only the projections of the patient's organs. The need to determine the brightness level correctly arises, for example, in the following cases:
1) digital image visualization on a monitor display; 2) exposure control during the acquisition of a series of x-ray images.
Visualizing the x-ray image with the correct brightness and contrast levels contributes to better understanding of the x-ray image and, accordingly, to correct diagnosis. While acquiring a series of sequential images, knowing the level of brightness in the area of interest of the previous image allows the digital detector exposure time to be set correctly for the next image. A correctly chosen exposure yields x-ray images of considerably higher quality, without dark and/or overexposed areas and with an optimal signal-to-noise ratio in the area of interest. The standard exposure frequency of an x-ray image series is 30 frames per second, so it is extremely important to determine the brightness level fast enough to adjust the exposure time and/or the x-ray tube characteristics. It is also necessary that the brightness level calculation method be stable over calculations performed on a series of sequential images.
The method for image brightness level determination described in [R. Gonzalez, R. Woods, S. Eddins. Digital Image Processing Using MATLAB (DIPUM). Technosphera, 2006, p. 32] is known. According to that method, the level of brightness is calculated as the mean value between the minimum and maximum brightness values of the image, where the extremes are taken as brightness quantiles: B_α denotes the quantile of level α of the pixel brightness over the image, and the parameter α is to be chosen sufficiently small, not more than 0.01. This method does not provide the necessary accuracy of brightness level calculation when air and/or collimator areas are present within the image.
The closest technical solution, chosen as the prototype, is the method for determining the brightness level described in [Patent EP 0 409 206 B1, p. 6, published Jan. 10, 1997]. In accordance with the prototype, the method comprises reading the digital image data into the main memory of the device and then performing the following calculations:
1) The image histogram with the class interval equal to 1 is calculated.
2) The brightness level A below which pixels are considered background is calculated.
3) The histogram is analyzed within the interval where the pixel brightness exceeds A. The brightness MVP corresponding to the maximum histogram value in that interval is calculated.
4) Initial values for image visualization are chosen: the window level WL = MVP and the window width WW.
5) The parameter ΔWW = WW/2 is calculated.
6) Using a neural network, the quality index Q is calculated for each pair of values (WL ± ΔWW, WW ± ΔWW).
7) Using the hill-climbing method, the pair of values (WL, WW) at which the quality index Q reaches its maximum is found. During the iterative procedure the parameter ΔWW is corrected.
The quality index is evaluated by means of an artificial feedforward neural network (hereinafter, neural network) having one hidden layer and one neuron in the output layer, with sigmoid activation functions of the neurons. The window level and window width (WL, WW) corresponding to the maximum value of the quality index Q are considered the optimal parameters for image visualization.
One or several images, for which a skilled operator sets the desirable values of the window level and window width (WLG, WWG), are used for training. A table of 25 value pairs (WLG ± ΔWWG/2 ± ΔWWG/4, WWG ± ΔWWG/2 ± ΔWWG/4) with predetermined values of the quality index Q is then built. Neural network input arguments (five or more) are calculated for each pair (WL, WW), and the quality index Q corresponding to that pair is used as the target value. Thus, by marking the desirable window level and window width on the given image set, an operator obtains the data for neural network training and then trains the network.
Disadvantages of the method according to the prototype are as follows:
1) When applied to the exposure control task, where only the brightness level needs to be determined, the method provides redundant information.
2) The method does not control the stability of the algorithm over calculations on a series of images, which is important for exposure control during the acquisition of image series.
SUMMARY OF THE INVENTION [0019]
The technical result aims at determining a brightness level corresponding to the mean brightness value in the area of interest of a medical x-ray image.
The technical result of the claimed invention consists in the determination of the brightness level in the area of interest of a medical x-ray image, with a method that is stable over calculations on series of images. A supplementary technical result is the simplicity of the hardware and the high performance of the algorithm.
The technical result is achieved by a method that comprises acquiring the image, calculating the image histogram, transforming the histogram values into input arguments of a neural network, and calculating the brightness level by means of the artificial neural network. The histogram values are calculated with a given class interval, normalized to unit, and used as the input arguments of the neural network; the brightness level is calculated as a linear function of the output value of the neural network; and neural network training is performed using a learning set built from a given image database, where the brightness levels calculated for each image over the area of interest and scaled to the range of the activation function of the neuron in the output layer of the neural network are used as the set of target values.
As a neural network an artificial feedforward neural network having one hidden layer and one neuron in the output layer with the sigmoid activation functions of neurons is used.
The class interval for calculating the histogram values is taken equal to the ratio between a quantile of the pixel brightness distribution over the image and the number of input arguments of the neural network.
The level of brightness in the area of interest is calculated as a mean value of pixel brightness within the area of interest.
The values of histogram are calculated over all image pixels.
The values of histogram are calculated on pixels within the circle, the center of which coincides with the image centre and its diameter is equal to the shortest side of the image.
The algorithm is based on the experimentally established fact that there is a statistical relationship between the image histogram and the level of brightness in the area of interest.
The essence of the claimed method is as follows:
1) The normalized-to-unit histogram values, calculated with a given class interval, are used as the input arguments of the neural network.
2) The brightness level is calculated as a linear function of the output value of the neural network.
3) Neural network training is performed using a learning set built from the given images; the brightness levels calculated for each image over the area of interest and scaled to the range of the activation function of the output-layer neuron are used as the set of target values.
In order to identify a statistical relationship between the histogram and the brightness level, an artificial feedforward neural network is used [Simon Haykin. Neural Networks: A Comprehensive Foundation, 2006, p. 55]. The general stages of the method realization are as follows:
1) Database generation and classification out of medical x-ray images.
2) Design of learning set examples--a set of input arguments of the neural network and a set of target values.
3) Selection of the error function and neural network training algorithm.
4) Training a set of neural networks of different architectures (different numbers of inputs, layers, and neurons).
5) Selection of the neural network with the least number of parameters that is best suited to solve the problem.
The method essence is explained by means of the figures given below.
BRIEF DESCRIPTION OF THE DRAWINGS [0039]
FIG. 1 Example of the digital x-ray image for medical applications acquired from one of the x-ray apparatuses.
FIG. 2 Area of interest correlating to the image of FIG. 1.
FIG. 3 Example of the histogram of a 16-bit image. The gray scale runs along the horizontal axis and the number of pixels with a given brightness along the vertical axis. Vertical lines show the subdivision of the interval [0, Bright] into 32 parts. The value Bright is defined as the quantile of image brightness of level α = 0.999.
FIG. 4 Common histogram of the relative error (Level′ - Level)/Level for the learning sample, where Level′--brightness level obtained by the claimed method; Level--brightness level calculated over the area of interest.
FIG. 5 Common histogram of the relative error for the test sample.
Stage 1. Image database generation involves classifying the images by organ type and generating, for every image, a binary image containing the area of interest. The binary image of the area of interest can be generated using specialized software or by manually marking the area of interest in the image with any standard graphics editor. At the first stage a database is generated which consists of a set of pairs {Image, Roi}, where Image is the initial image and Roi is the image of the corresponding area of interest. In our case about ten thousand images were collected and processed.
Stage 2 involves the design of the learning set examples. For each pair {Image, Roi} the histogram Hist of the image is calculated with a class interval equal to unity, and the brightness level Level in the area of interest is determined. The brightness level is taken as the average pixel brightness over all pixels within the area of interest:

Level = (1/M) Σ_{k∈Roi} p_k,

where Level--brightness level in the area of interest; p_k--brightness of pixel k; M--number of pixels in the area of interest. As a result, for each pair {Image, Roi} a pair {Hist, Level} consisting of the histogram and the brightness level is obtained.
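The ROI average above can be sketched as follows (pure-Python, with illustrative helper and argument names; the patent prescribes no particular implementation):

```python
def roi_mean_brightness(image, roi_mask):
    """Mean pixel brightness over the area of interest.

    image    : 2-D list of pixel brightness values
    roi_mask : 2-D list of 0/1 flags, 1 = pixel belongs to the ROI
    Implements Level = (1/M) * sum of p_k over k in Roi.
    """
    total, count = 0.0, 0
    for img_row, mask_row in zip(image, roi_mask):
        for p, inside in zip(img_row, mask_row):
            if inside:
                total += p
                count += 1
    if count == 0:
        raise ValueError("empty region of interest")
    return total / count
```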
The histogram can be evaluated over the whole image as well as over a preliminarily selected area. Usually, a patient under exposure is positioned in such a way that the projection of the organs under examination falls in the center of the digital matrix. Therefore, in the second variant of histogram evaluation, such an area can be a circle whose center coincides with the image center and whose diameter, for instance, equals the shortest side of the image.
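The circular region of the second variant can be sketched as below (an illustrative helper, not from the patent text; it assumes the circle inscribed in the image as described):

```python
def circular_mask(height, width):
    """0/1 mask of a circle centred on the image centre whose
    diameter equals the shortest image side (a sketch of the
    second histogram-evaluation variant described above)."""
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    radius = min(height, width) / 2.0
    return [[1 if (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2 else 0
             for x in range(width)]
            for y in range(height)]
```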
Now for each pair {Hist, Level} some input arguments Input and a target value Target shall be evaluated. Input arguments Input and target value Target denoting a set of learning parameters, shall
meet the following requirements:
1) Pairs {Input, Target} shall be invariant relatively to multiplication of the image by a constant value and not to be dependent on the image size (with allowance for pixel brightness discreteness).
2) Target values Target must belong to the range of the activation function of the output-layer neuron.
It is then necessary to ensure the invariance of the pairs {Input, Target} under multiplication of the image by a constant value. For the histogram Hist the brightness interval [0, Bright] is calculated in such a way that the upper limit Bright is the quantile of level α of the image pixel brightness. The interval [0, Bright] is then divided into S equal subintervals I_i, and the value Input_i is estimated as the sum of the histogram values Hist_k within the subinterval I_i:

Input_i = Σ_{k∈I_i} Hist_k,

where Input_i--input argument with index i; Hist_k--value of the histogram Hist with index k. The values Input_i are then normalized to unit:

Input′_i = Input_i / Σ_k Input_k.

S, the number of inputs of the neural network, is selected by numerical experiments together with the parameter α. The input arguments Input are thus the normalized-to-unit histogram values calculated with the class interval Bright/S. Further, Target′ = Level/Bright is calculated for each Level. The pairs {Input, Target′} obtained in this way are invariant under multiplication of the image by a constant value and do not depend on the image size.
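A minimal sketch of this input-argument construction, assuming the α-quantile and S-bin scheme just described (function and parameter names are illustrative, not from the patent):

```python
def histogram_inputs(pixels, S=32, alpha=0.999):
    """Normalized S-bin histogram over [0, Bright], where Bright is
    the alpha-quantile of pixel brightness.  Returns the S values
    Input'_i that sum to unity."""
    values = sorted(pixels)
    # alpha-quantile of the brightness distribution
    bright = values[min(len(values) - 1, int(alpha * len(values)))]
    if bright <= 0:
        raise ValueError("degenerate brightness range")
    counts = [0] * S
    for p in values:
        if p <= bright:
            # index of the subinterval I_i containing brightness p
            i = min(S - 1, int(S * p / bright))
            counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]
```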
The sigmoid function

f(x) = 1 / (1 + exp(-x)),

whose range is the interval [0, 1], is used as the activation function of the neurons; the set of target values Target is therefore normalized to this interval by the linear transformation

Target = (Target′ - min{Target′}) / (max{Target′} - min{Target′}).
The level Level is calculated from the output value Output of the neural network by the formula

Level = Bright × (C1 × Output + C2), where C1 = max{Target′} - min{Target′}; C2 = min{Target′}.

The brightness level is thus calculated as a linear function of the output value of the neural network.
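Since the constants are garbled in this extracted text, the following sketch assumes the natural inverse of the normalization above, Level = Bright * (C1 * Output + C2) with C1 = max{Target′} - min{Target′} and C2 = min{Target′}:

```python
def make_level_decoder(targets_prime, bright):
    """Return a function mapping the network output (in [0, 1], the
    range of the sigmoid) back to a brightness level.  Assumes the
    linear form Level = bright * (c1 * output + c2), which inverts
    Target = (Target' - c2) / c1."""
    c2 = min(targets_prime)
    c1 = max(targets_prime) - c2

    def decode(output):
        return bright * (c1 * output + c2)

    return decode
```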
Two variants serve as the error function of the neural network. The first variant is a mean-square error with regularization:

E = Ratio × (1/N) Σ_{i=1..N} (Output_i - Target_i)² + (1 - Ratio) × Σ_k x_k².

The second variant is a weighted mean-square error with regularization:

E = Ratio × Σ_{i=1..N} W_i (Output_i - Target_i)² + (1 - Ratio) × Σ_k x_k²,

where Ratio--regularization parameter; W_i--weight corresponding to the learning pair {Input, Target} with index i; N--number of learning pairs taking part in the error evaluation; Σ_k x_k²--sum of squares of all neural network parameters. The first summand in both formulas defines the accuracy of neural network learning, and the second one, the regularizing term, provides the stability of the neural network. The weights W_i are calculated by the formula

W_i = (1/Target′_i²) / Σ_{i=1..N} (1/Target′_i²),

i.e. pairs with a larger value of Target′_i correspond to a smaller weight W_i.
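A sketch of the second (weighted) error variant, assuming the weights are built from the unscaled targets Target′ as above (names are illustrative; the regularization sum runs over all network parameters x_k, and nonzero targets are assumed):

```python
def weighted_regularized_error(outputs, targets, params, ratio=0.9999):
    """E = ratio * sum_i W_i (outputs[i] - targets[i])^2
         + (1 - ratio) * sum_k params[k]^2,
    with W_i proportional to 1 / targets[i]^2 and normalized to
    unit sum (here the same values stand in for Target')."""
    inv_sq = [1.0 / t ** 2 for t in targets]
    norm = sum(inv_sq)
    weights = [w / norm for w in inv_sq]
    data_term = sum(w * (o - t) ** 2
                    for w, o, t in zip(weights, outputs, targets))
    reg_term = sum(x ** 2 for x in params)
    return ratio * data_term + (1.0 - ratio) * reg_term
```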
For neural network learning a standard algorithm--the conjugate gradient method with back-propagation--is used [Moller, Neural Networks, vol. 6, 1993, p. 525]. The regularizing multiplier Ratio is selected so as to eliminate outliers of more than 0.5 percent when the level Level is calculated repeatedly under image rotation. In our case this parameter turned out to be Ratio = 0.9999.
In order to avoid overfitting, a standard approach is used in which the set of learning examples {Input, Target} is divided into two parts: one is used for neural network learning and the other for testing. After the image database is generated, the medical x-ray images are classified by organ type. The set of learning examples is then divided into two samples in the proportion 80 to 20 percent, so that 80 percent of the images of each group fall into the learning sample and the remaining 20 percent into the test sample.
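The per-organ 80/20 split can be sketched as follows (a hypothetical data layout: a dict mapping each organ type to its list of examples):

```python
import random

def stratified_split(items_by_organ, train_fraction=0.8, seed=0):
    """Split examples into learning and test samples so that each
    organ class contributes train_fraction of its images to the
    learning sample (a sketch of the 80/20 split described above)."""
    rng = random.Random(seed)
    train, test = [], []
    for organ, items in items_by_organ.items():
        shuffled = list(items)
        rng.shuffle(shuffled)
        cut = int(round(train_fraction * len(shuffled)))
        train.extend(shuffled[:cut])
        test.extend(shuffled[cut:])
    return train, test
```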
Numerical experiments showed that for the solution of the given task it is possible to use a feedforward neural network having one hidden layer, 30 to 60 inputs, and 5 to 10 neurons in the hidden layer. The parameter α can be chosen from the interval 0.98 to 0.9999. In order to implement the claimed method in a specific device, a neural network with the minimum number of parameters was chosen, other conditions being equal.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS [0055]
The preferred embodiment of the invention is a method for evaluating the brightness level in the area of interest of a digital medical x-ray image that involves image acquisition, calculation of the image histogram, conversion of the histogram values into input arguments of a neural network, and calculation of the brightness level using the artificial neural network.
The histogram values are calculated with the given class interval then normalized to unit and used as input arguments of the neural network. The level of brightness is calculated as a linear function
of the output value of the neural network.
Neural network learning is performed using a learning set calculated on the base of the given image database; as a set of target values the levels of brightness calculated for each image over the
area of interest and scaled to the range of the activation function of a neuron in the output layer of the neural network are used.
As a neural network an artificial feedforward neural network having one hidden layer and one neuron in the output layer with the sigmoid activation functions of neurons is used.
The class interval for calculating histogram values is assumed to be equal to the relationship between quantile of pixel brightness distribution having the level close to unit and number of input
arguments of the neural network.
The level of brightness in the area of interest is calculated as a mean value of pixel brightness within the area of interest.
The values of histogram are calculated on pixels within the circle, the center of which coincides with the image centre and its diameter is equal to the shortest side of the image.
INDUSTRIAL APPLICABILITY [0062]
Known numerical methods of data processing and analysis are used in the claimed method, and known hardware and devices are used to acquire the data.
Patent applications by ZAKRYTOE AKCIONERNOE OBSHCHESTVO "IMPUL'S"
Quark Confinement and the discovery of quarks
I see. But what exactly are "scattering angles"? Sorry for my ignorance. :)
Hit a particle with a beam, observe the angles between scattering products and your incoming beam.
So it is from the values and numbers in the resulting data, where each value corresponds to a specific property of a particle (i.e. spin, momentum, charge, etc.), that we can identify and deduce specific and unique particles, in this context specific quarks?
Usually, you cannot do anything with a single collision - you need many collisions, and statistical methods to analyze them. The angular distribution can tell you something about the spin of
particles, for example.
So after the interaction with the target, when these outgoing neutrinos are detected with a certain charge, it then indicates that the neutrino has interacted with a certain specific particle? Did I understand this right?
AdrianTheRock described two different scattering processes here: Electron+nucleon to anything and neutrino + nucleon to anything. Neutrinos as scattering products are nearly impossible to detect, as
their interaction is so weak. And they are uncharged.
If a neutrino hits a proton and you get a neutron plus a positron afterwards (those can be detected), you know that the incoming particle was an anti-electronneutrino. In addition, it confirms that
you can transform a proton into a neutron if one up-type quark (here: an up-quark) is converted to a down-type-quark (here: a down-quark).
I see. By different proportions of protons to neutrons, you mean isotopes/nuclides? So really, it is by data/values of the results of the experiments that we are able to distinguish, differentiate,
and identify different particles and their properties so that we could discover and study them, not by physically "seeing" them separately?
Correct. There are events where you can be quite sure that particle X was there, but this is never 100% certainty.
[quote]What kind of new behavior is that? What does the interaction look like?[/quote]
That is scattering at the individual components of protons - quarks or gluons.
[quote]Is the math the framework of Quantum Chromodynamics?[/quote]
QCD allows to predict those functions, but to measure them you have to analyze the data (not with QCD).
Wow cool. I find that very interesting. How did they conclude that it was the gluons that were carrying those remaining momenta? Was that how, in a way, gluons were discovered/observed?
They were discovered via jets of hadrons they produced in collisions - see the references
for more details.
Most interesting LHC collisions are gluon-gluon interactions, so you can really see their effects in colliders.
I almost forgot they ultimately produce a quark-antiquark pair. Is the flavor/kind of quarks produced in the pair random?
Well, they don't have to, lepton+antilepton is possible, too.
It is random, but you can only produce pairs if the energy is high enough for them. Therefore, LEP could not produce top+antitop, for example.
And also, I don't get it in the first place: why and how would a photon "decay" and produce a quark-antiquark pair?
This is not a real "decay" - the photon itself has to be virtual in the Feynman diagram, you cannot view it as two separate processes.
And when the pair produce are tops, as you said would decay first before they could form into a hadron, would they decay still confined to each other?
They would decay independently.
(I would like to clear out that quark confinement does not necessarily mean they are hadronized?)
Confinement means anything low-energetic (->no quark-gluon plasma) and long-living enough (-> enough time to hadronize) does not have free color charges, so all quarks are bound in hadrons.
Oh that's why we get more new particles discovered as we use more energies in particle accelerators and experiments?
Haha are there other definitions of "matter phase", besides the usual definition that we know of as the "solid-liquid-gas-plasma" phases of matter?
Phase diagram of QCD
Ooooh I see. Is this why atoms in these environments/conditions (i.e. hot dense plasmas) become ions - because the electrons inside the atoms are "released", becoming "unbound" and "independent"?
How do they interact, in what way(s)? It's hard for me to picture/visualize it.
Both electrons and ions have an electric charge.
And if all those electrons and nuclei that make up the atoms and molecules become "unbound", wouldn't that "destroy" the chemical elements/materials that make up the gas/plasma altogether?
If enough electrons leave the molecules, they will break apart. The nuclei are stable (unless you get into regions where the QCD phase diagram becomes relevant), so the elemental composition stays the same.
Because all that gas/plasma, as matter, is composed of an element, like say Hydrogen in the core of the sun, right?
About 75% hydrogen, 25% helium and smaller contributions from other atoms.
And that Hydrogen molecules are composed of atoms, which then are composed of electrons and nuclei. Did I understand this correctly?
A neutral hydrogen molecule has 2 nuclei (usually, just 2 protons), with 2 electrons bound to them. The sun is so hot that they easily break up into individual atoms. | {"url":"http://www.physicsforums.com/showthread.php?p=4152907","timestamp":"2014-04-21T07:26:34Z","content_type":null,"content_length":"88758","record_id":"<urn:uuid:4bc73ee8-8020-4afc-8755-bc259e2d818a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00094-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Axioms of reducibility and infinity
kremer at uchicago.edu kremer at uchicago.edu
Mon Aug 8 21:25:34 EDT 2011
This is somewhat confused. "Order" in the context of ramified type theory doesn't mean what you think it means.
In ramified type theory, propositional functions are assigned a type and an order. The type is determined recursively by the type of arguments to the function (so individuals are of lowest type, functions of individuals are of higher type etc). But order is determined by the items quantified over in the function (in the function's expression? -- this is a matter of some unclarity).
This means we can have two functions of the same type that are materially equivalent (true of the same things) but are of different orders. So for example suppose that we have three people in a room, Joe, Bob and Bill, and Joe is 6 feet tall while Bob and Bill are 5 feet tall. Suppose also that Joe is a Republican and Bob and Bill are Democrats.
Now consider the following two functions:
(a) x is a Democrat in the room
(b) x is in the room and is shorter than someone in the room
These two functions are materially equivalent (they both are true of Bob and Bill)
But the second function quantifies over individuals whereas the first does not, so the second function is of higher order than the first.
Reducibility then says that for every function of whatever order, there is an equivalent function of the same type (taking the same type of arguments) of lowest order compatible with that type (so here for (a) there is (b) -- and we are guaranteed some such (b) even if we don't know a property like "Democrat" shared by only Bob and Bill but not Joe).
The whole logic remains "higher-order" in the sense that you have in mind, but the effect of reducibility is to claim that as far as extensions of functions is concerned, the ramification introduced by "orders" in the sense just explained makes no difference (the motivation for introducing this ramification is tied to solving paradoxes like the liar which were held by Russell to involve an illegitimate form of quantification).
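The schema Kremer describes corresponds, for the one-argument case, to Principia Mathematica's ∗12.1; rendered here as a hedged sketch in modern notation (with ψ! marking a predicative function, i.e. one of lowest order compatible with its type):

```latex
% Axiom of Reducibility (schema, one argument place):
% every propositional function \varphi, of whatever order, is coextensive
% with some predicative function \psi! of the same type.
\vdash \; (\exists \psi)\,(\forall x)\,\bigl( \varphi x \equiv \psi!\,x \bigr)
```

In the room example, φ is function (b) and the guaranteed ψ! plays the role of (a).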
On Wittgenstein: one thing he objected to in the Axiom of Reducibility was its seeming quantification across all types. This was supposed to be avoided by the idea of "typical ambiguity" which Wittgenstein saw as a dodge. (This criticism is made by him in a letter to Russell in something like 1914, as I recall.) In the Tractatus Wittgenstein says that even if Reducibility were true this would only be a kind of accident -- meaning that there is no logical guarantee that we can always find a function like (b) to correspond to any higher-order function like (a).
Hope this helps.
--Michael Kremer
---- Original message ----
>Date: Mon, 8 Aug 2011 14:06:32 -0700 (PDT)
>From: fom-bounces at cs.nyu.edu (on behalf of steve newberry <stevnewb at att.net>)
>Subject: Re: [FOM] Axioms of reducibility and infinity
>To: Foundations of Mathematics <fom at cs.nyu.edu>
>My understanding of the Axiom of Reducibility is that it was
>intended to state that:
>To every proposition of higher-order, there is an equivalent
>proposition of First-order,
>or more precisely, to every entity definable in Higher-order
>logic there is an equivalent
>such entity definable in First-order logic.
>If AXIOMATICALLY true, then there is no ontological
>difference between First- and
>Higher- order logic, which is now well known to be untrue,
>and Wittgenstein may
>well have intuited that fact.
>Nicht wahr?
>Steve Newberry
>--- On Sun, 8/7/11, Alasdair Urquhart
><urquhart at cs.toronto.edu> wrote:
> From: Alasdair Urquhart <urquhart at cs.toronto.edu>
> Subject: Re: [FOM] Axioms of reducibility and infinity
> To: "Foundations of Mathematics" <fom at cs.nyu.edu>
> Date: Sunday, August 7, 2011, 10:59 AM
> Wittgenstein's reasons for rejecting the axiom of infinity
> are quite clear. As stated by Whitehead and Russell
> in Principia Mathematica, it says that there are infinitely
> many individuals (i.e. objects of the lowest type).
> Clearly there is no reason to think this is true a priori
> of the world (the Tractatus is an attempt to describe
> the a priori logical structure of the world).
> In other words, in the construal of Whitehead, Russell
> and the early Wittgenstein, the Axiom of Infinity
> is an empirical postulate -- there is no reason to think
> it is a logical truth.
> I have never understood Wittgenstein's reasons for
> rejecting the Axiom of Reducibility, and always found
> his discussions of it quite obscure.
> On Sun, 7 Aug 2011, Francisco Gomes Martins wrote:
> > I'm working on the Tractatus; Wittgenstein rejects the axiom
> > of reducibility (see 6.1232-6.1233), the axiom of infinity
> > (5.535), and even set theory (6.031). First, I'd like to know
> > more about those axioms. Second, I'd like to know why/how
> > Wittgenstein rejects all of them?
> >
> > Francisco
> _______________________________________________
> FOM mailing list
> FOM at cs.nyu.edu
> http://www.cs.nyu.edu/mailman/listinfo/fom
Atomistic Modeling of Gas Adsorption in Nanocarbons
Journal of Nanomaterials
Volume 2012 (2012), Article ID 152489, 32 pages
Review Article
Department of Fundamental and Applied Science for Engineering-Physics Section, University of Rome “La Sapienza”, via A. Scarpa 14-16, 00161 Rome, Italy
Received 11 July 2012; Revised 27 September 2012; Accepted 28 September 2012
Academic Editor: Jinquan Wei
Copyright © 2012 G. Zollo and F. Gala. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Carbon nanostructures are currently under investigation as possible ideal media for gas storage and mesoporous materials for gas sensors. The recent scientific literature concerning gas adsorption in
nanocarbons, however, is affected by a significant variation in the experimental data, mainly due to the different characteristics of the investigated samples arising from the variety of the
synthesis techniques used and their reproducibility. Atomistic simulations have turned out to be sometimes crucial to study the properties of these systems in order to support the experiments, to
indicate the physical limits inherent in the investigated structures, and to suggest possible new routes for application purposes. In consideration of the extent of the theme, we have chosen to treat
in this paper the results obtained within some of the most popular atomistic theoretical frameworks without any purpose of completeness. A significant part of this paper is dedicated to the hydrogen
adsorption on C-based nanostructures for its obvious importance and the exceptional efforts devoted to it by the scientific community.
1. Introduction
The discovery of novel carbon nanostructures (CNSs) has raised many expectations for their potential impact in gas adsorption, storage, and sensing thanks to their large surface/volume ratio. Gas adsorption research is particularly focused on hydrogen for clean energy sources, and "small-scale" devices for fuel cells, involving either hydrocarbon reforming or hydrogen storage, are currently under study.
The amount of hydrogen storage in solid substrates for commercial use was targeted at about 9 wt% for the year 2015 by the US Department of Energy (DOE) [1], but none of the major storage media has reached this value so far.
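For orientation, the gravimetric capacity behind the 9 wt% figure is commonly computed as sketched below (note that conventions differ between papers; some authors divide by the host mass alone):

```python
def gravimetric_capacity_wt_percent(m_h2, m_host):
    """Hydrogen storage capacity in wt%, using the common convention
    wt% = m_H2 / (m_H2 + m_host) * 100, where m_h2 is the mass of
    stored hydrogen and m_host the mass of the storage medium."""
    return 100.0 * m_h2 / (m_h2 + m_host)
```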
Gas storage is important also for many other technological applications: helium and nitrogen, for example, have many applications in the metallurgical industry.
Nanostructured media are also required for high-sensitivity monitoring of chemical species in many fields, from medical to environmental applications. Monitoring of nitrogen dioxide and carbon mono- and dioxide, for instance, is important for the environment, while the detection of ammonia (NH3) [2, 3] and hydrogen sulphide (H2S) [4] is compulsory in industrial, medical, and living environments.
Many experiments on gas adsorption in CNSs have, however, yielded controversial results, with only a partial understanding of the processes involved [5].
The adsorption processes in CNSs can be, in fact, quite tricky because chemisorption and physisorption phenomena may coexist and, moreover, weak interactions are highly sensitive to temperature, pressure, humidity, and so forth, all of which may vary between different experiments [6].
Another source of uncertainty is that CNS samples are often impure, since uncontrolled phenomena and contamination may occur during synthesis [7], resulting in a variety of carbon structures.
All these aspects are strong motivations for reliable atomistic modelling, because the understanding of the adsorption/desorption processes is intimately related to the character and the strength of the atomistic interactions. The methods chosen to model such systems, however, may vary greatly depending on the level of accuracy required, on the number of particles treated, and on the specific system under study.
The paper is organised as follows: in the next section we give an overview of the recent literature on gas adsorption in CNSs approached by the main atomistic theoretical schemes, which are introduced in Section 3; in the following three sections, results on gas physisorption and chemisorption in CNSs by atomistic modelling are detailed and reviewed critically, with some emphasis on hydrogen and methane due to their importance in technology. Finally, some concluding remarks emerging from the general scenario are drawn for the different cases treated.
2. Overview of Gas Adsorption in Nanocarbons
Carbon materials exhibit quite different adsorption properties depending on their valence states. Moreover, stable carbon phases may coexist in amorphous carbon, where "graphite-like" or "diamond-like" short range order may occur. The other metastable carbon allotropes, such as graphene, fullerenes, carbon nanotubes, carbon nanohorns, and so forth, constitute the backbone of a novel carbon-based chemistry and nanotechnology and exhibit different adsorption properties; in the remainder of this section, the recent literature on atomistic simulations of gas adsorption is briefly introduced for the various allotropes.
2.1. Graphene and Activated Carbons
From graphene, which may be considered a CNS by itself [8, 9], several other CNSs can be derived, such as semiconducting armchair or metallic zig-zag graphene nanoribbons (GNRs), obtained by standard lithography [10], or graphite nanofibers (GNFs). Nanostructured graphite, either hydrogenated or not, can be synthesized by ball milling in a controlled atmosphere; activated carbons, consisting of a multitude of stacks of disordered graphene planes of various sizes, are also obtained from graphene by steam or chemical processing.
Systems for hydrogen storage in graphene-based NSs have been studied concerning both physisorption and chemisorption [11, 12] revealing that doping or defects affect the storage capacity as found,
for instance, in Li doped graphene [13].
Thanks to their metallic behavior, graphene layers have been widely studied also for gas sensing applications of various gas species (NO[2], H[2]O, NH[3], O[2], CO, N[2] and B[2]) exploiting the
charge carrier density change induced by the adsorption [14–16]; some examples of graphene-based “nanodevices” for pH sensors [17] and biosensors [18] can also be found in the literature.
2.2. Fullerenes
Fullerenes and related structures are usually considered ideal adsorbents. C[60] (the "bucky-ball") can stably host atoms of the appropriate size, either inside or outside its spherical structure. Hexagonal lattices of C[60] molecules can be deposited on a substrate in monolayered or multilayered films while, at low temperatures, cubic C[60] lattices (fullerite) are favored; since these fullerene lattices have large lattice constants, they are fairly appealing open structures for gas storage [19, 20]; thus different adsorption sites in, for instance, hexagonal C[60] monolayers have been studied.
Charged fullerenes have been explored for helium adsorption [21] or as H[2] storage media as well [22]; moreover it has been shown that “bucky-balls” can also easily bind other gas molecules thanks
to their polar properties.
Doping of fullerenes may improve the adsorption of molecular hydrogen and many examples can be found involving light elements such as fluorine, nitrogen, and boron [23], alkali metals [24–26], transition metals (TMs) [27–30], and silicon [31].
2.3. Carbon Nanotubes
Single walled carbon nanotubes (SWCNTs) [32, 33] are single graphene sheets folded to form a cylindrical shape in various ways (chirality) resulting in semiconducting or metallic behavior [7].
Gas adsorption occurs in carbon nanotube (CNT) arrays both inside and outside the tubes and hydrogen storage has been intensively studied using classical models [34–37], quantum mechanics [38–40],
and hybrid models [41] showing that H[2] molecules are weakly bound on carbon nanotubes even though H chemisorption may also occur [42]. Similarly to fullerenes, doping of CNTs with metal species
such as Ti [28], Li, Pt, and Pd [43–45] can improve the adsorption.
Functionalized CNTs are predicted to be suitable for the sensing of hydrocarbons [46–51], which can also be employed as hydrogen sources in reforming processes [52].
Defect insertion, structural deformation, or doping are also employed to improve the binding of weakly adsorbing gaseous species on bare CNTs [53]. B or N doped CNTs exhibit favourable adsorption features for H[2]O and CO molecules [54], while TM doped zigzag and armchair SWCNTs have been studied for the detection of N[2], O[2], H[2]O, CO, NO, NH[3], NO[2], CO[2], and H[2]S [44, 48, 55]. Concerning sensing, however, metal doped SWCNTs are still problematic because their transport properties are only weakly affected by the adsorbed molecules [56]. CNT bundles have been studied also for the storage of noble gases such as He, Ne, Ar, and Xe, as well as N[2] [36, 42, 47, 57, 58].
2.4. Other CNSs
Single walled carbon nanohorns (SWCNHs) are conical-shaped graphene sheets that tend to form spherical aggregates with accessible "internanohorn" and "intrananohorn" pores; hydrogen and nitrogen adsorption in such structures has been studied both experimentally and theoretically [59, 60].
During the synthesis it may happen that one or several fullerenes get stuck in the internal cavity of a nanotube [61]. Such "peapod" structures are ideal gas "nanocontainers" with enhanced binding properties [62].
3. Theoretical Methods
Various atomistic simulation schemes are currently employed to model gas adsorption in CNSs. Chemisorption and bonding events must, of course, be approached by quantum theory, and CNSs as gas sensors, which require an accurate description of the chemical interactions and the electronic properties, are treated by quantum chemistry techniques [63, 64] or by ab-initio calculations based on Density Functional Theory (DFT) [65, 66]. These schemes are also used to study the equilibrium configurations of physisorbed molecules in CNSs.
Collective gas adsorption is usually studied with the Metropolis scheme [67, 68] in various statistical ensembles.
Other models based on the continuum theory of fluids are also used to model gas adsorption experiments in carbon porous materials [69–72].
In the next subsections we briefly (and not exhaustively) introduce the above-listed theoretical schemes, evidencing their limits of validity and accuracy levels.
3.1. Density Functional Theory ab-Initio Calculations
DFT ab-initio calculations [73, 74] are efficient tools to study atomistic systems and processes [75, 76]. According to DFT, the total energy of a system of ions and valence electrons is a functional of the electron density $n(\mathbf{r})$:
$$E[n] = F[n] + \int v_{\mathrm{ext}}(\mathbf{r})\, n(\mathbf{r})\, d\mathbf{r},$$
where $v_{\mathrm{ext}}(\mathbf{r})$ is the ionic potential. The universal Hohenberg-Kohn functional is
$$F[n] = T[n] + E_{ee}[n],$$
where $T[n]$ and $E_{ee}[n]$ are, respectively, the electron kinetic energy and the electron-electron interaction energy; $E_{ee}[n]$ contains the Coulomb (Hartree) energy and the exchange-correlation energy $E_{xc}[n]$. The total energy is variational with respect to the electron density and the ground state is obtained self-consistently [75–77].
The key factors affecting the accuracy of DFT calculations are the pseudopotentials, used (for computational reasons) to replace the ionic potential [78–83], and the scheme adopted to approximate the exchange-correlation potential, which is unknown a priori; the most popular schemes are the local density (LDA) [73, 77] and the generalized gradient (GGA) approximations [84, 85], some of which, such as PBE, B3LYP, and so forth, are very accurate [65, 84–87]. Generally speaking, LDA and GGA are robust for chemisorption but inaccurate for long-range interactions, even though recent studies have shown that LDA results are surprisingly accurate in many cases [88].
3.2. Hartree-Fock Based Quantum Chemistry Techniques
Various strategies are used to include the electron correlation in Hartree-Fock (HF) based calculations [63, 64, 89–92]; in the Configuration Interaction (CI) scheme, the HF ground-state wavefunction is replaced by a linear combination of ground and excited states obtained by populating virtual molecular orbitals (MOs). CI is very accurate but limited to very small systems for computational reasons. Various CI schemes are used, namely CIS, CISD, and SF-CISD, including, respectively, single, single and double, and spin-flip excited states [63, 64, 90, 92].
In the Møller-Plesset (MP) method the correlation potential is treated as a perturbation of the HF Hamiltonian and, for a system of $N$ electrons (with fixed nuclei) and occupied states $j$, it is formally defined as
$$\hat V = \sum_{i<j}^{N} \frac{1}{r_{ij}} - \sum_{i=1}^{N}\sum_{j}^{\mathrm{occ}} \left[\hat J_j(i) - \hat K_j(i)\right],$$
with $\hat J_j$, $\hat K_j$ being the usual HF Coulomb and exchange operators (integrals).
The exact wavefunction is obtained by solving the secular equation $(\hat H_0 + \lambda \hat V)\Psi = E\Psi$, where both the wavefunctions and the eigenvalues are expanded in Taylor series of the perturbation parameter $\lambda$; the $q$-th order of the wavefunction expansion in terms of a complete set of the HF eigenfunctions is denoted as MP$q$. MP2 is efficient enough, but the correlation energy can be severely underestimated, while MP4 is quite accurate yet limited to small systems due to computational limits.
Beyond MP$q$ theory, multiconfiguration states are used instead of single determinants (MCSCF: multiconfiguration self-consistent field), with various "multireference" perturbation schemes such as CASPT2 [63, 64, 90, 92].
The Coupled Cluster (CC) theory [63, 93] is virtually equivalent to a full-CI approach because the wavefunction is represented as
$$\Psi = e^{\hat T}\,\Phi_{\mathrm{HF}},$$
where $\hat T = \hat T_1 + \hat T_2 + \cdots$ is the "cluster operator" that formally includes all the possible excited states, $\hat T_k \Phi_{\mathrm{HF}}$ being the state with $k$ excitations of the HF ground state [90]. Among the different CC schemes encountered, one of the most popular is CCSD(T), which also includes a perturbative singles/triples coupling term [90].
3.3. Monte-Carlo Sampling Techniques in the Grand-Canonical Ensemble
The Metropolis algorithm [67] allows the Monte Carlo sampling of a many-particle statistical ensemble, such as the Grand Canonical one, which is suitable to study gas adsorption. Many particles are required in this scheme and thus reliable classical atomistic interaction potentials must be used [68]. A physical quantity is measured statistically over the ensemble, which is generated by using acceptance rules that depend on the energy and the particle number. Hence, the pressure dependence of the equilibrium gas density in CNSs can be calculated. The above-described Grand Canonical Monte Carlo (GCMC) method is suitable for large scale gas adsorption studies, provided that chemical events, such as bonding, reactions, and so forth, are excluded; the key factor affecting the reliability of a GCMC simulation is the accuracy of the interaction potential and, still nowadays, the simple Lennard-Jones (LJ) potential (and the ones derived from it) is a popular choice [36, 42, 94].
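As an illustration of the acceptance rules just mentioned, the following sketch implements a single grand-canonical insertion/deletion attempt for an LJ gas in a cubic box. It is a minimal pedagogical example, not code from the cited works; the function names, the reduced units, and the structureless box (no carbon framework, no periodic boundaries) are our own simplifying assumptions.

```python
import math
import random

def lj_energy(r2, eps, sig):
    """Lennard-Jones pair energy from a squared distance r2."""
    s6 = (sig * sig / r2) ** 3
    return 4.0 * eps * (s6 * s6 - s6)

def interaction_energy(particles, pos, eps, sig):
    """Energy of a particle at 'pos' with all particles in the list."""
    e = 0.0
    for p in particles:
        r2 = sum((a - b) ** 2 for a, b in zip(p, pos))
        e += lj_energy(r2, eps, sig)
    return e

def gcmc_step(particles, box, beta, mu, lam3, eps, sig, rng):
    """One Metropolis insertion/deletion attempt in the (mu, V, T) ensemble;
    lam3 is the cube of the thermal de Broglie wavelength."""
    V = box ** 3
    if rng.random() < 0.5:
        # insertion: acc = min(1, V / (lam3 (N+1)) * exp(beta (mu - dU)))
        pos = tuple(rng.random() * box for _ in range(3))
        dU = interaction_energy(particles, pos, eps, sig)
        acc = V / (lam3 * (len(particles) + 1)) * math.exp(beta * (mu - dU))
        if rng.random() < min(1.0, acc):
            particles.append(pos)
    elif particles:
        # deletion: acc = min(1, lam3 N / V * exp(-beta (mu - dU)))
        i = rng.randrange(len(particles))
        others = particles[:i] + particles[i + 1:]
        dU = interaction_energy(others, particles[i], eps, sig)
        acc = lam3 * len(particles) / V * math.exp(-beta * (mu - dU))
        if rng.random() < min(1.0, acc):
            particles.pop(i)
    return particles
```

Iterating `gcmc_step` at fixed chemical potential and averaging the particle number yields one point of the adsorption isotherm; repeating the run over a grid of chemical potentials (i.e., pressures) reconstructs the full isotherm discussed in the text.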
Quantum effects are encompassed mainly through the Path Integral Monte Carlo (PIMC) approach, where the quantum system is mapped onto a classical system of polymeric rings whose equilibrium properties return the statistical properties of the quantum system [37, 95, 96].
3.4. DFT Nonuniform Fluid Models
In the spirit of DFT, a variational method to find the ground-state particle density in fluids was developed, with particular emphasis on fluids close to surfaces [69, 70]. For such systems, the intrinsic free energy functional that must be minimized (e.g., by solving the Euler-Lagrange equations) is made of two terms:
$$F[\rho] = F_{\mathrm{hs}}[\rho] + \frac{1}{2}\iint \rho(\mathbf{r})\,\rho(\mathbf{r}')\, u_{\mathrm{attr}}(|\mathbf{r}-\mathbf{r}'|)\, d\mathbf{r}\, d\mathbf{r}',$$
where $F_{\mathrm{hs}}[\rho]$ is the universal "hard-sphere" free energy functional that contains the repulsive energy, and $u_{\mathrm{attr}}$ is the attractive part of the pairwise interaction potential. As $F_{\mathrm{hs}}[\rho]$ is not known a priori, the Local Density (LDA) or the Smoothed Density Approximation (SDA) can be employed [71, 72]. In the "non-local density functional theory" (NLDFT), SDA is adopted, the density being smoothed by an appropriate weight function in order to reproduce the Percus-Yevick description of a homogeneous hard sphere fluid [97]. With this approach, the structural properties and the adsorption isotherms of gases are calculated, and the pore size distribution of the adsorbent can be determined.
4. Physical and Chemical Adsorption of Gaseous Species in CNSs
In the previous sections we have emphasized that atomistic modeling of gas adsorption in CNSs should be treated differently depending on the specific phenomena involved, either physical adsorption or chemical bonding.
Therefore, we will treat the two cases separately: the next two sections are focused on physisorption (Section 5) and chemisorption (Section 6) phenomena, respectively.
Sometimes, however, classifying the studied phenomena in terms of physical or chemical adsorption is quite difficult due to the occurrence of strong polar interactions or weak charge transfer; in these cases, the calculation of energetic quantities, such as the activation energy or the adsorption enthalpy, may help to clarify the scenario, because physical adsorption is expected to exhibit lower adsorption enthalpy values than those involved in chemical bonding.
5. Gas Physical Adsorption in CNSs
A great deal of the scientific literature over the past twenty years has been devoted to hydrogen physical adsorption in carbon nanomaterials of different allotropic forms, due to the potential impact of nanotechnology on this challenging problem, which is still preventing the success of the hydrogen economy.
Thus we have chosen to dedicate the next subsection to hydrogen storage and to treat the other gaseous species in the following subsections.
5.1. Hydrogen Physical Adsorption
In order to evaluate the hydrogen storage performance of CNSs, one should always refer to the DOE target as the minimum extractable loading for addressing the commercial storage needs.
The typical parameters used to measure the storage are the gravimetric excess (excess hydrogen adsorption) $w_{\mathrm{ex}} = (m_a - m_f)/m_C$ ($m_f$, $m_a$, and $m_C$ being, respectively, the free and the precipitated molecular hydrogen mass and the mass of the adsorbent nanostructured material) and the analogous volumetric excess.
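A minimal helper makes this bookkeeping explicit. The variable names are ours, and we assume the excess mass is normalized by the adsorbent mass alone; note that the normalization convention (adsorbent mass versus total sample mass) varies between authors.

```python
def gravimetric_excess_wt(m_adsorbed, m_free, m_adsorbent):
    """Excess hydrogen uptake in wt%: precipitated (adsorbed) H2 mass
    minus the free-gas contribution, relative to the adsorbent mass.
    Any consistent mass unit works."""
    return 100.0 * (m_adsorbed - m_free) / m_adsorbent
```

For example, 1.2 g of hydrogen found in a cell that would hold 0.2 g of free gas, on 10 g of carbon, corresponds to a 10 wt% excess.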
CNSs can be considered ideal hydrogen adsorption media due to their high surface-to-volume ratio, which favours physisorption with fast adsorption/desorption thermodynamic cycles. The atomistic modelling of such phenomena in nanotubes or fullerenes benefits from their well-known atomic arrangements, so that possible discrepancies between theory and experiment arise from impurities, sample inhomogeneity, and the limits inherent in the adopted theoretical approach.
On the contrary, complex CNSs, such as activated carbon (AC) or microporous carbon, are particularly challenging because they require a great deal of effort to build reliable atomistic models. In the following, the literature on atomistic modelling of hydrogen adsorption in CNSs is discussed with reference to the various allotropes considered.
5.1.1. Carbon Nanotubes
Early experiments on CNTs, dating back to the end of the '90s, indicated these CNSs as ideal candidates to fulfill the DOE requirements [98]. Since then, many controversial theoretical and experimental results have appeared; recent review papers [99–102] have discussed the spread of the experimental data (see Figure 1), which especially affects the early measurements, suggesting that it originates from experimental errors and from sample inhomogeneity and impurity. Nowadays, new purification strategies have been introduced, evidencing that hydrogen storage in CNT media may be problematic.
H[2] physical adsorption in SWCNT or MWCNT systems has been mainly studied by GCMC and molecular dynamics simulations using simple LJ-derived potentials, which have been proven to give realistic results. Apart from early, unconfirmed results [34] supporting the exceptional uptake performance reported in coeval measurements [98], atomistic modelling has evidenced a complicated scenario: H[2] uptake can occur either on the external (exohedral) or on the internal (endohedral) surface of a SWCNT, where atomic hydrogen is unstable and only molecular hydrogen can exist [39]. However, the endohedral storage is limited by steric hindrance phenomena that may cause the breakdown of the tube if the loading is excessive.
The LJ parameters of the carbon-gas interactions are usually obtained from the well-known Lorentz-Berthelot rules $\sigma_{Cg} = (\sigma_{CC} + \sigma_{gg})/2$ and $\varepsilon_{Cg} = \sqrt{\varepsilon_{CC}\,\varepsilon_{gg}}$, where $\sigma_{gg}$, $\varepsilon_{gg}$ are the LJ gas/gas parameters and $\sigma_{CC}$, $\varepsilon_{CC}$ are the LJ carbon/carbon parameters.
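A one-line helper makes the mixing rule explicit; the numerical values in the usage example (graphite-like carbon and H[2] parameters) are illustrative literature-style numbers, not values taken from the cited papers.

```python
import math

def lorentz_berthelot(sig_cc, eps_cc, sig_gg, eps_gg):
    """Mixed carbon-gas LJ parameters: arithmetic mean for sigma
    (Lorentz rule), geometric mean for epsilon (Berthelot rule)."""
    return 0.5 * (sig_cc + sig_gg), math.sqrt(eps_cc * eps_gg)

# Illustrative parameters (sigma in nm, epsilon/kB in K):
sig_cg, eps_cg = lorentz_berthelot(0.340, 28.0, 0.296, 34.2)
```

The resulting cross parameters feed directly into the carbon-gas term of a GCMC or molecular dynamics run.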
Stan and coworkers [36, 42, 58, 94] have integrated the LJ potential over an ideal CNT surface for different gaseous species, obtaining a potential $V(r)$ for a molecule in the vicinity of an ideal CNT, where $r$ is the distance from the cylinder axis, $\theta$ is the surface density of C atoms, and $R$ is the radius of the ideal cylinder; different analytic expressions hold for $r < R$ (inside the tube) and $r > R$ (outside).
This model has been used to calculate the uptake of different gases in CNT bundles, showing that, provided that the adsorbate molecule is small enough (as in the hydrogen case) and that the tubes are properly arranged in a honeycomb structure, the amount of hydrogen in the interstitial regions is comparable to the one inside the tubes.
Anyway, the approximations included in this model were quite severe (for instance, the gas-gas interactions were neglected) and these results have been revised by GCMC simulations [103] with the Silvera-Goldman potential [104] for H[2]–H[2] and the LJ potential for C–H[2]; it was shown that the adsorption isotherms decrease as the rope diameter increases, because the specific area uptake in the interstitial and endohedral sites is nearly independent of the rope diameter (see Figure 2). These results agree with recent experiments and show that the DOE requirements are not satisfied at room temperature in the 1–10 MPa pressure range, even for an isolated SWCNT. PIMC simulations for (9,9) and (18,18) SWCNT arrays, implemented with the Silvera-Goldman and the Crowell-Brown [105] potentials, respectively, for H[2]–H[2] and C–H[2], have shown that quantum effects lower the GCMC results independently of the CNT chirality [106] and confirm that the previous optimistic experimental results on bare CNTs [98, 107] cannot be explained by physisorption.
GCMC simulations have also been used to study how hydrogen physisorption in CNT media is affected by oxidation or by hydrogen chemisorption [108], showing that oxidation should favor the endohedral physical adsorption, thus increasing both the volumetric and the gravimetric densities (see Figure 3). The theoretical limits of hydrogen physical adsorption in SWCNT systems have been discussed by Bhatia and Myers [109], who recast the problem as a delivery one involving storage and release.
The delivery is defined from the adsorption/desorption Langmuir isotherms at different pressures as
$$D = m_0\left[\frac{K p_1}{1 + K p_1} - \frac{K p_2}{1 + K p_2}\right], \qquad K = \exp\!\left(\frac{\Delta S}{R} - \frac{\Delta H}{R T}\right),$$
where $p_1$, $p_2$ are the charge and the discharge pressures, $m_0$ is the adsorption capacity, and $\Delta H$, $\Delta S$ are the average heat of adsorption and the entropy change. Using GCMC simulations and thermodynamic arguments, the theoretical maximum delivery has been estimated to be lower than 4.6 wt%, even at the optimal temperature (see Figure 3), given the computed adsorption heat. In this context, the authors evidenced with persuasive arguments that the H[2] heat of physisorption on the SWCNT "side-wall" (related to the LJ energy parameter $\varepsilon/k_B$) makes pure CNTs unfit to satisfy the DOE requirements.
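The delivery argument can be made concrete with a short numerical sketch of the Langmuir-based expression above. The thermodynamic inputs below (heat of adsorption, entropy change, charge/discharge pressures) are illustrative placeholders, not the values used in [109].

```python
import math

R_GAS = 8.314  # J mol^-1 K^-1

def langmuir_coverage(K, p):
    """Fractional Langmuir coverage at pressure p (units of 1/K)."""
    return K * p / (1.0 + K * p)

def delivery(m0, dH, dS, T, p_charge, p_discharge):
    """Deliverable amount between charge and discharge pressures:
    D = m0 [theta(p_charge) - theta(p_discharge)],
    with the Langmuir constant K = exp(dS/R - dH/(R T)).
    dH (J/mol) is negative for exothermic adsorption; dS (J/mol/K) < 0."""
    K = math.exp(dS / R_GAS - dH / (R_GAS * T))
    return m0 * (langmuir_coverage(K, p_charge) - langmuir_coverage(K, p_discharge))

# Illustrative: -15 kJ/mol adsorption heat, 30 bar charge, 1.5 bar discharge
d = delivery(m0=1.0, dH=-15e3, dS=-8.0 * R_GAS, T=298.0,
             p_charge=30.0, p_discharge=1.5)
```

Scanning `dH` reproduces the trade-off discussed in the text: too weak an adsorption heat gives poor uptake at the charge pressure, too strong a one prevents release at the discharge pressure, and the delivery peaks in between.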
Before drawing conclusive statements, however, it should be emphasized that the LJ energy parameter used in the simulations discussed so far did not include any curvature correction. To correct this discrepancy, the LJ parameters for endohedral and exohedral adsorption have been calculated using quantum chemistry methods: for instance, Guan and coworkers [110] have used MP2, evidencing that the curvature of a (10,10) CNT makes the endohedral adsorption stronger than the exohedral one.
The difference between the endohedral and exohedral H[2] adsorption has been evaluated in the frame of NLDFT for square-lattice CNT arrays, showing that the outer adsorption, which depends on the van der Waals gap (i.e., the "intertubular" distance in a bundle of nanotubes), can be improved [41].
The binding energy of physisorbed H[2], calculated by accurate DFT in both zigzag and armchair CNTs, is in the range 0.049–0.113 eV due to dipolar interactions [48]; these values are slightly improved in nanotube bundles, where the adsorption energy for the interstitial and groove sites is larger. Dag and coworkers [111] have tried to clarify the nature and the strength of H[2] adsorption on armchair CNTs by using a hybrid model accounting for GGA-DFT short-range interactions and van der Waals long-range forces [112]. The equilibrium configuration was found at a binding energy of 0.057 eV (almost independently of the tube curvature) which, despite implying the revision of previous results, does not change the whole scenario. Indeed, theoretical calculations have shown that, in order to have good delivery properties with efficient charging/discharging cycles, a system with an adsorption heat of about 15 kJ/mol should be considered [109, 113].
Therefore many authors have suggested that the CNT adsorbing properties could be improved by doping with different species, mostly alkali and transition metals. In a Li doped SWCNT, the lithium atoms donate their 2s valence electrons to the lowest CNT conduction band, so that the semiconducting SWCNT becomes metallic. The equilibrium distance of the physisorbed hydrogen molecule from the Li impurity differs depending on whether Li is bonded internally or externally to the tube [111]. Generally speaking, if the interaction potential and the configuration of the doping alkali metal species are modeled reliably, the hydrogen adsorption turns out to be enhanced and SWCNTs could possibly approach the DOE threshold, as shown in Figure 4 [114], where GCMC simulations of Li doped CNT arrays are reported.
High capacity hydrogen storage has been reported in B-doped or defective nanotubes with Ca impurities [115]; in this case the empty Ca $d$-orbitals form hybrids with the H[2] orbitals, thus enhancing the hydrogen uptake up to 5 wt%. In this case, moreover, the Ca impurities do not cluster and remain dispersed on the tube side-wall.
More uncertain is the benefit of this strategy, however, if one considers the whole amount of adsorbed gas and the delivery properties of real samples, due to their inherent inhomogeneity and impurities.
Chen and coworkers [43] tried to improve the uptake in a peapod structure obtained by encapsulating a fullerene molecule inside a Li-doped SWCNT; in this case a complex charge transfer process occurs, favoring Li charging and its strong bonding to the CNT surface, which results in a noticeable increase of the H[2] binding.
5.1.2. Activated and Microporous Carbons
In the case of activated carbon (AC) and microporous carbon (MPC) as gas storage media, a severe bottleneck for theoretical predictions is the definition of reliable atomistic models for such disordered materials. These materials, however, also suffer from the limiting factor of the C–H[2] interaction strength, which makes large amounts of H[2] storage by physisorption unlikely.
Several potential functions, such as the Mattera, the Taseli [116], and the Steele 10-4 potentials [117], have been employed to treat the graphene-H[2] interactions in the context of the "slit-pore" model, where pores are delimited by graphene planes [109]; most of these studies predict similar values of the adsorption heat and an excess gravimetric percentage below the DOE requirements at the operating pressure and temperature. In contrast, about 23 wt% has been obtained by extrapolation at very high pressure conditions, where it has been claimed that the hydrogen density exceeds the liquid hydrogen one [118]; anyway, this idea has been refuted by molecular dynamics simulations at 77 K, showing also that oxygenation, differently from the CNT case, does not improve the uptake [119].
Recently, well-founded atomistic models of ACs and MPCs have been obtained using the Hybrid Reverse Monte Carlo (HRMC) scheme [120, 121], starting from an initial realistic "slit-like" pore configuration obtained from experimental data on the pore wall thickness and pore size distribution (PSD).
The HRMC algorithm has been applied on the basis of the following acceptance criterion:
$$P_{\mathrm{acc}} = \min\!\left[1, \exp\!\left(-\frac{\Delta E}{k_B T} - \frac{\Delta \chi^2}{2}\right)\right], \qquad \chi^2 = \sum_{i=1}^{N_{\mathrm{exp}}} \frac{\left[g_{\mathrm{sim}}(r_i) - g_{\mathrm{exp}}(r_i)\right]^2}{\sigma_i^2},$$
where $N_{\mathrm{exp}}$, $g_{\mathrm{sim}}$, $g_{\mathrm{exp}}$, and $\sigma_i$ are, respectively, the number of experimental points, the simulated and the experimental radial distribution functions and, lastly, the errors inherent in the experimental data (treated as adjustable parameters).
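The acceptance rule can be sketched in a few lines: the essence of the hybrid scheme is the combination of the usual Metropolis energy term with a structural χ² term measuring the mismatch to the experimental g(r). Function names and the dimensionless inputs below are our own.

```python
import math
import random

def chi_squared(g_sim, g_exp, sigma):
    """Mismatch between simulated and experimental g(r) on a common grid."""
    return sum((s - e) ** 2 / (sg * sg)
               for s, e, sg in zip(g_sim, g_exp, sigma))

def hrmc_accept(dE, d_chi2, kT, rng):
    """HRMC Metropolis test: accept a trial move with probability
    min(1, exp(-dE/kT - d_chi2/2)), i.e. the energy criterion
    augmented by the change in the g(r) chi-squared constraint."""
    x = -dE / kT - 0.5 * d_chi2
    if x >= 0.0:  # downhill in both energy and chi^2: always accept
        return True
    return rng.random() < math.exp(x)
```

A trial move that lowers the energy but worsens the agreement with the measured g(r) (or vice versa) is thus accepted only probabilistically, which is what steers the model toward configurations that are simultaneously low in energy and structurally realistic.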
The AC atomistic model is finally obtained (as shown in Figure 5) by simulated annealing in multiple canonical ensembles, gradually decreasing the temperature and the $\sigma_i$ parameters in order to minimize the energy and $\chi^2$ simultaneously [120]; an environment-dependent interaction potential (EDIP) [122] or a reactive empirical bond order potential [123] can be used to this aim. On this basis, GCMC simulations with the Feynman-Hibbs (FH) correction for the quantum dispersion effect [96, 124] have been performed at cryogenic temperatures [125]; the FH interaction potential is
$$U_{\mathrm{FH}}(r) = U_{\mathrm{LJ}}(r) + \frac{\hbar^2}{24\,\mu\, k_B T}\left[U''_{\mathrm{LJ}}(r) + \frac{2}{r}\,U'_{\mathrm{LJ}}(r)\right],$$
where $U_{\mathrm{LJ}}$ is the classical LJ potential and $\mu$ is the reduced mass of the interacting pair (H[2]–H[2] or C–H[2]); the C–H[2] parameters are defined using the Lorentz-Berthelot rule.
The energy parameter of a curved surface has been obtained by correcting the flat-surface one with appropriate scaling factors for the surface-fluid and the surface-surface interactions, respectively. The effective FH interaction potential of H[2] with an immobile planar carbon wall is then calculated and used in a GCMC context, where the H[2]–H[2] interactions have been treated with the Levesque parameters [126]; treating the C–C interactions with either the Frankland-Brenner [127] or the Wang et al. [128] parameters gives good results, while the Steele parameters [129] underestimate the adsorption. On this basis, reliable RT isotherms for ACs and MPCs have been obtained using new LJ parameters with an enhanced well depth, to correct for the increased surface polarizability occurring when H[2] molecules approach the carbon surface.
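The quadratic FH correction can be checked with a short numerical sketch. For simplicity the derivatives of the LJ potential are evaluated by finite differences rather than analytically; the SI-unit H[2]–H[2] parameters used below are illustrative, not the Levesque values of [126].

```python
import math

HBAR = 1.054571817e-34  # J s
KB = 1.380649e-23       # J K^-1

def lj(r, eps, sig):
    """Classical Lennard-Jones potential; r and sig in m, eps in J."""
    s6 = (sig / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def fh_potential(r, eps, sig, mu, T, h=1e-13):
    """Quadratic Feynman-Hibbs effective potential:
    U_FH(r) = U_LJ(r) + hbar^2 / (24 mu kB T) * (U'' + 2 U' / r),
    with derivatives from central finite differences (step h).
    mu is the reduced mass of the pair in kg."""
    u0 = lj(r, eps, sig)
    du = (lj(r + h, eps, sig) - lj(r - h, eps, sig)) / (2.0 * h)
    d2u = (lj(r + h, eps, sig) - 2.0 * u0 + lj(r - h, eps, sig)) / (h * h)
    return u0 + HBAR ** 2 / (24.0 * mu * KB * T) * (d2u + 2.0 * du / r)
```

At the LJ minimum the correction is positive, so the quantum-corrected well is shallower, and the effect fades as 1/T; this is the "quantum dispersion" that lowers the cryogenic uptake with respect to a purely classical GCMC run.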
5.1.3. Other Carbonaceous Structures
Other carbon nanomaterials, such as nanostructured graphite, GNFs, fullerenes, nanohorns, and so forth, are frequently found in the literature as potential materials for hydrogen adsorption, sometimes combined with alkali metal hydrides, where lithium carbide forms after a few adsorption/desorption cycles [130].
The interaction of an H[2] molecule with a graphene sheet has been studied by LDA-DFT calculations [11], and the energy curves, obtained by varying the molecule orientation, the adsorption site, and the distance, show the typical van der Waals behaviour of physisorption (see Figure 6). Hydrogen uptake in GNFs has thus been simulated with conventional LJ potentials, showing that significant adsorption, in any case below 1.5 wt% at RT [131], occurs only if the interplanar distance is larger than 0.7 nm. The usage of more accurate potential parameters, fitted on MP2 ab-initio calculations of the vibrational energy or on experimental results [116], has demonstrated that, at cryogenic temperature and ambient pressure, the adsorption capacity of GNFs is about 2 wt%. However, MP2 results are affected by long range errors, and a reliable potential well and equilibrium distance can be obtained only with a large basis set. On the basis of the above predictions, the experimental results reporting adsorption excess data of 10–15 wt% at RT [132, 133] are most probably due to chemisorbed contaminants, such as oxygen or residual particles of the metal catalysts used during the synthesis; this circumstance has evidenced the potential positive role played by contaminants in storage, thus driving the researchers to study metal doping also in these systems. Therefore some authors have suggested doping with alkali metals, such as Li and K, to increase the uptake. Indeed, Zhu and coworkers [134] have found that the charge transfer occurring from the metal atoms to the graphene layer enhances the hydrogen adsorption at low temperature, while it is significantly weakened at higher temperature. In this case Li is slightly more effective than K because of the higher charge transfer from Li to graphene (0.5 and 0.2 electrons for Li and K, respectively), with an H[2] binding energy almost doubled with respect to pristine graphene. Because the transferred charge remains localized near the metal atom, the uptake enhancement does not hold if H[2] and Li stay on opposite sides of the graphene layer [13].
Ca doping of zig-zag GNRs, approached by GGA-DFT, has evidenced an H[2] gravimetric capacity of 5% at 0 K, with reduced clustering of the impurities; clustering can be suppressed also in armchair GNRs by B–Ca codoping [135]. B codoping has also been explored in Li doped graphene to suppress the metallic clustering and to fully exploit the enhanced interaction of Li atoms with H[2] molecules due to van der Waals forces and hybridization [136]. Other attempts to improve storage in graphene include the usage of graphene layers deposited on metallic substrates [137], showing that Ni and Pt substrates behave differently, the first one increasing the covalent bonding on graphene. It should be considered, however, that oxygen adsorption is a competing process that strongly suppresses the hydrogen adsorption in metal doped graphene [138], thus making the usage of such systems for hydrogen storage unlikely.
Similarly to other CNSs, fullerenes show low binding energy values (few meV) for molecular hydrogen resulting in poor uptake. Charged fullerenes could be used to improve the uptake performance and ab
-initio calculations of charged fullerenes have been performed accordingly [26].
As reported in Figure 7(a), the binding energy of a hydrogen molecule adsorbed at the fullerene surface can be increased between two and five times depending on the fullerene charge state, whose polarity affects also the H[2] orientation. An uptake of 8.04 wt% has been predicted at best. Figure 8 shows both the electric field generated by the charged fullerene and the hydrogen molecule charge density map obtained under such an electric field, giving, at least classically, a clear insight into the mechanism responsible for the H[2] storage.
Charged fullerenes can be produced by encapsulating a metal atom inside the fullerene cage: for instance, by entrapping a La atom inside a fullerene, three electrons are transferred to the carbon cage. Anyway, in this case, the electric field outside the carbon molecule still does not differ significantly from the neutral case, due to charge localization phenomena [22].
Enhanced adsorption on fullerenes can be obtained also with transition metals (TMs) [29]: according to the Dewar-Chatt-Duncanson model [139], the interaction is caused by a charge transfer from the H[2] highest occupied molecular orbital (HOMO) to the empty metal $d$-states, followed by a back donation from a metal $d$-orbital to the H[2] lowest unoccupied molecular orbital (LUMO). C[60] decorated with Ti has been investigated extensively, showing a hydrogen adsorption up to 7.5 wt% depending on the doping site: if Ti occupies a hollow site, it strongly binds to the cage and no charge transfer to the hydrogen molecular orbitals occurs, thus causing hydrogen physisorption; on the contrary, if Ti atoms occupy other sites, at least one H[2] molecule dissociates and is bonded to the Ti atom, while the other hydrogen molecules are physisorbed near the impurity.
However, Sun and coworkers [140] have found that Ti, similarly to other TMs, tends to agglomerate after the first desorption cycle, thus reducing the hydrogen physisorption and storage. The same authors have also demonstrated that Li[12]C[60] molecules can bind up to 60 hydrogen molecules, resulting in a theoretical gravimetric density of 13wt% with a nearly constant binding energy [25]. This is due to the large electron affinity of C[60] (about 2.66eV) causing the capture of Li valence electrons that strengthen the bond; the positively charged Li impurity then causes the polarization of the H[2] molecules, resulting in an increased interaction. Moreover, it was also demonstrated that Li[12]C[60] clustering affects the hydrogen binding properties only moderately.
Alkali metal doping of C[60] has been studied also by ab-initio B3LYP/3-21G() calculations [24]: being positively charged with respect to fullerenes, these impurities can bind up to six (Li) or eight (Na and K) H[2] molecules. By increasing the number of Na atoms, the average binding energy remains almost constant because each hexagonal ring of the fullerene cage behaves independently, showing a highly localized reactivity at the individual rings. Na[8]C[60] is found to be energetically stable with a theoretical hydrogen gravimetric ratio of 9.5wt%. DFT calculations of C[60] doping with alkaline-earth metals (Ca, Sr) have evidenced that a strong electric field arises, depending on the significant chemical activity of the d-orbitals in these species [141], unlike Be and Mg: the fullerene orbital that is partially occupied by the electrons of the metal s-orbital hybridizes with the metal d-states, thus resulting in a net charge transfer that causes the H[2] polarization, giving a theoretical hydrogen uptake of 8.4wt%. In Figure 9, the spin-resolved PDOS (projected density of states) of a single hydrogen molecule on Ca-coated fullerene evidences that the hydrogen σ-orbital, located far below the Fermi level, remains unchanged; also the charge density variations induced by the hydrogen adsorption suggest that polarization of the H[2] occurs near the Ca atom.
Carbon nanocones (CNCs) have been investigated as possible alternatives to CNTs for hydrogen storage [142]. The adsorption isotherms at 77K in CNC structures with different apex angles have been calculated by GCMC simulations [143] where C–H[2] interactions are treated with second-order Feynman-Hibbs LJ potentials, showing that molecular hydrogen can be confined in the apex region inside the cone, in agreement with recent findings from neutron spectroscopy of H[2] in CNHs [59]. The hydrogen density obtained is reported in Figure 10 as a function of the fugacity. The density behaves differently in the high and low pressure regimes. In any case, the theoretical data demonstrate that the hydrogen uptake is larger in a CNC than in a CNT, a behavior attributed mainly to the high-interaction region close to the apex. Quite recently, Ca-decorated carbyne networks have been considered in the framework of ab-initio DFT calculations, suggesting that this system could benefit from a surface area four times larger than graphene. Each Ca-decorated site has been predicted to adsorb up to 6 hydrogen molecules with a binding energy of 0.2eV, and no clustering was observed in the model [144].
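The second-order Feynman-Hibbs correction mentioned above replaces the classical LJ pair potential by an effective one that accounts for the quantum delocalization of light adsorbates at low temperature, V_FH(r) = V(r) + ħ²/(24μk_BT)·[V''(r) + 2V'(r)/r]. A minimal sketch follows; the C–H[2] well depth and the reduced mass are illustrative assumptions, not the values used in the cited works.

```python
# Second-order Feynman-Hibbs effective potential for a C-H2 LJ pair.
HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J/K

def lj(r, eps, sigma):
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def lj_d1(r, eps, sigma):              # first radial derivative
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (-12.0 * s6 * s6 + 6.0 * s6) / r

def lj_d2(r, eps, sigma):              # second radial derivative
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (156.0 * s6 * s6 - 42.0 * s6) / r**2

def feynman_hibbs(r, eps, sigma, mu, T):
    """V_FH(r) = V + hbar^2/(24 mu kB T) * (V'' + 2 V'/r)."""
    corr = HBAR**2 / (24.0 * mu * KB * T)
    return lj(r, eps, sigma) + corr * (
        lj_d2(r, eps, sigma) + 2.0 * lj_d1(r, eps, sigma) / r)

EPS = 32.0 * KB        # J, ~32 K well depth (assumed C-H2 value)
SIGMA = 3.18e-10       # m (assumed)
MU = 2.85e-27          # kg, reduced mass of a C-H2 pair

r_min = 2.0 ** (1.0 / 6.0) * SIGMA      # classical LJ minimum
v_cl = lj(r_min, EPS, SIGMA)
v_fh = feynman_hibbs(r_min, EPS, SIGMA, MU, 77.0)
print(f"well depth: classical {v_cl/KB:.1f} K, FH at 77 K {v_fh/KB:.1f} K")
```

At 77 K the correction is positive at the minimum, so the effective well is shallower than the classical one, which is why quantum corrections reduce the predicted low-temperature uptake of light species.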
5.2. Physical Adsorption of Gaseous Species Other than Hydrogen
Other gaseous species are considered for adsorption in CNSs; among them, noble gases are important case studies because they are used in low-temperature adsorption experiments to measure the CNSs pore size distribution. In these systems, however, condensation phenomena occur that, being studied in the context of low-temperature physics, are beyond the aims of the present review and will be omitted. Modelling of porosimetry experiments concerning carbon microporous and nanoporous media, where physical adsorption phenomena do not cause condensation, will instead be explicitly discussed.
In the following, moreover, a special emphasis will be devoted to methane adsorption, which is attracting a growing interest for alternative automotive energy sources. Methane, in fact, can be efficiently stored in CNSs because of its high physisorption binding energy, making it attractive for storage at RT and moderate pressure.
5.2.1. Methane Adsorption
Methane uptake in CNT bundles has been studied by Stan and co-workers for rigid tubes following the same approach adopted for hydrogen (Section 5.1.1) [36, 42]. LJ parameters and Lorentz-Berthelot rules have been employed to calculate the ideal uptake curves (for endohedral and interstitial sites) at low coverage for a threshold gas density and fixed chemical potential and temperature; in spite of the deep potential energy well of methane in CNSs, low uptake values at moderate pressure were predicted, mainly due to the methane molecular size.
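The Lorentz-Berthelot combining rules referred to above build the cross C–CH[4] LJ parameters from the pure-species ones: an arithmetic mean for σ and a geometric mean for ε. A minimal sketch, where the pure-species values are illustrative assumptions rather than the parameters used in the cited works:

```python
import math

def lorentz_berthelot(eps_i, sigma_i, eps_j, sigma_j):
    """Cross LJ parameters: geometric mean for eps, arithmetic for sigma."""
    return math.sqrt(eps_i * eps_j), 0.5 * (sigma_i + sigma_j)

# Illustrative pure-species parameters (eps/kB in K, sigma in nm):
EPS_C, SIG_C = 28.0, 0.340         # graphitic carbon (assumed)
EPS_CH4, SIG_CH4 = 148.0, 0.381    # united-atom methane (assumed)

eps_cm, sig_cm = lorentz_berthelot(EPS_C, SIG_C, EPS_CH4, SIG_CH4)
print(f"C-CH4 cross parameters: eps/kB = {eps_cm:.1f} K, sigma = {sig_cm:.3f} nm")
```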
GCMC simulations have been performed to calculate the adsorption excess for both the endohedral and interstitial sites of CNTs for different pressure values and van der Waals gaps between the tubes [117, 145, 146] (see Figures 11 and 12). The decreasing behavior of the interstitial excess adsorption reveals that the outer uptake saturates, while the gas density increases linearly under compression.
The usable capacity ratio (UCR), which measures the available fuel upon adsorption/desorption cycles with respect to the available fuel in a storage vessel, has been calculated for different loading pressures and van der Waals gaps (see Figure 13).
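The UCR can be stated compactly: the gas delivered by the adsorbent-filled vessel between the loading and depletion pressures, divided by what a bare vessel would deliver over the same pressure swing. A minimal sketch using a Langmuir model for the adsorbed phase plus an ideal-gas void contribution; every parameter value is an illustrative assumption, not data from the cited simulations.

```python
R = 8.314  # J/(mol K)

def stored_moles(P, n_max, b, void_frac, V, T):
    """Adsorbed amount (Langmuir) + compressed gas in the void volume."""
    adsorbed = n_max * b * P / (1.0 + b * P)    # Langmuir isotherm, mol
    gas_phase = P * void_frac * V / (R * T)     # ideal gas in voids, mol
    return adsorbed + gas_phase

def ucr(P_load, P_depl, n_max, b, void_frac, V, T):
    """Usable capacity ratio vs. a bare vessel over the same pressure swing."""
    delivered = (stored_moles(P_load, n_max, b, void_frac, V, T)
                 - stored_moles(P_depl, n_max, b, void_frac, V, T))
    bare = (P_load - P_depl) * V / (R * T)
    return delivered / bare

# Illustrative: 1 L vessel at 298 K, loaded at 3.5 MPa, depleted to 0.1 MPa
print(f"UCR ~ {ucr(3.5e6, 1.0e5, 5.0, 1.0e-6, 0.5, 1.0e-3, 298.0):.2f}")
```

A UCR above 1 means the adsorbent delivers more gas than simple compression at the same loading pressure, which is the regime the cited works look for when comparing against the 200 V/V CNG benchmark.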
For each CNT type and loading pressure, a peculiar arrangement of the CNT array can be found that maximizes the UCR with respect to the volumetric capacity of compressed natural gas (CNG) (about 200V/V at 20MPa), showing that the CNG value is obtained at a much lower pressure in these structures [145]. The potential advantage of carbon tubular “worm-like” structures (CNW: carbon nanoworm) over CNTs for methane storage was evidenced by calculating the “Langmuir-type” adsorption isotherms of these structures compared to (10, 10) armchair CNTs [147].
As expected, the measured isosteric heat of adsorption is maximum for the most corrugated wormlike tube examined, in accordance with the large methane adsorption excess measured.
Using basically the same method, the isosteric heat of methane adsorption at zero loading in various CNT arrays has been calculated, focusing on different uptake sites such as interstitial, surface, groove, “intratubular”, and so forth [148]. If allowed, the interstitial adsorption site is the most favorable, followed by the intratubular, groove, and surface sites.
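The isosteric heat quoted in these studies follows from the Clausius-Clapeyron relation applied at constant loading, q_st = −R·d(ln P)/d(1/T), which for two isotherm points at the same coverage reduces to a finite-difference estimate. A minimal sketch; the pressure and temperature values are illustrative assumptions:

```python
import math

R = 8.314  # J/(mol K)

def isosteric_heat(T1, P1, T2, P2):
    """q_st = -R dlnP/d(1/T) at fixed loading, from two isotherm points."""
    return R * (math.log(P2) - math.log(P1)) / (1.0 / T1 - 1.0 / T2)

# Illustrative: the same CH4 loading reached at 0.10 MPa (298 K)
# and at 0.19 MPa (318 K)
q = isosteric_heat(298.0, 0.10e6, 318.0, 0.19e6)
print(f"q_st ~ {q / 1000:.1f} kJ/mol")
```

In heterogeneous media such as the CMK-1 structures discussed below, applying this estimate at several loadings is what produces the broad range of q_st values the simulations report.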
Hydrogen and methane mixtures (hythane) are also considered for adsorption in CNT arrays, slitlike carbon nanopores, and mesoporous carbons. This is aimed, for instance, at separating hydrogen and methane in the synthetic gas obtained from steam reforming of natural gas, or at storing clean fuels on vehicles [149–151].
It has been demonstrated that hythane storage in slitlike pores and CNTs can achieve the volumetric stored energy threshold of 5.4MJ/dm^3 for alternative automotive fuel systems established by the US FreedomCAR Partnership. Moreover, GCMC simulations using the Feynman-Hibbs quantum effective potential have evidenced important selectivity properties of CNT matrices. For instance, arrays of CNTs with diameters between 1.2 and 2.4nm have large volumetric energy storage with respect to compression and evidence methane separation properties at RT and low pressure.
Methane storage in CMK-1 nanoporous structures has also been investigated by GCMC in combination with porosimetry. The isosteric heat values measured are within a broad range because of the
heterogeneous nature of these materials [151].
5.2.2. Physical Adsorption of Other Gaseous Species
CNSs have been repeatedly proposed also for sensing based on gas chemisorption. However, the scenario is sometimes more complicated and, instead of chemisorption, strong physisorption is observed when accurate quantum chemistry methods are employed to describe the system. Moreover, a significant part of the literature concerning physisorption of various gas species has been aimed at supporting porosimetry, especially concerning AC, MPC, and other disordered porous structures. Porosimetry has often been studied in connection with the storage problem to obtain reliable adsorption volume measurements. For instance, the adsorption isotherms for nitrogen, argon, carbon dioxide, and so forth, have been fitted by using GCMC or NLDFT with several interaction potentials, sometimes quantum-corrected [152–155], in order to infer reliable pore size distributions from the experiments (see Figure 14) [154].
Nitrogen physical adsorption in CNT arrays has been studied at subcritical (77K and 100K) and supercritical (300K) temperatures showing that type II isotherms at subcritical temperatures can be
explained by taking into account the outer surface adsorption sites of the CNT bundles [57].
The rest of this subsection is dedicated to the physisorption of gaseous species, different from H[2] and CH[4], in graphene NSs and CNTs.
(1) Graphene
Thanks to its high conductivity, graphene is considered ideal for sensing purposes, also because adsorbed species cause an enhanced response of this two-dimensional structure. Indeed, the charge carrier concentration can be varied by adsorption of various gases, even though the adsorbate identification may be problematic and accurate atomistic modelling is mandatory.
The graphene charge carrier concentration may be changed by a charge transfer that depends on the HOMO and LUMO energy levels of the adsorbate with respect to the graphene Fermi energy. If the HOMO energy is above the graphene Fermi level, a negative charge is transferred from the molecule to the graphene, whereas the opposite occurs if the energy of the LUMO is below the Fermi level. In addition, the charge transfer is also partially determined by the mixing of the HOMO and LUMO with the graphene orbitals. In general, charge transfer occurs through bonding phenomena, but sometimes a more complicated mixture of weak chemisorption and strong physisorption is evidenced.
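The qualitative rule above can be written down directly by comparing the adsorbate frontier levels with the graphene Fermi energy. A minimal sketch; the energy values in the example are illustrative assumptions, not computed level alignments:

```python
def doping_character(homo_eV, lumo_eV, fermi_eV):
    """Leading charge-transfer direction from frontier-level alignment."""
    if homo_eV > fermi_eV:
        return "donor"      # HOMO above E_F: electrons flow to graphene
    if lumo_eV < fermi_eV:
        return "acceptor"   # LUMO below E_F: electrons flow from graphene
    return "mixing"         # neither: transfer set by orbital hybridization

# Illustrative alignment (all energies relative to vacuum, assumed values):
print(doping_character(homo_eV=-9.0, lumo_eV=-4.8, fermi_eV=-4.5))  # prints "acceptor"
```

The third branch is the regime the NO case below falls into: with the frontier levels pinned near E_F, the sign of the transfer is decided by which hybridized states sit above or below the Fermi energy, not by the bare level alignment.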
p-type doping of graphene, for example, can be achieved by NO[2] (or its dimer N[2]O[4]) adsorption [15], at large distances (between 0.34nm and 0.39nm), with one distinction: the open-shell monomer electron affinity is larger than the dimer one, suggesting that paramagnetic molecules may act as strong dopants. This hypothesis has been checked by ab-initio calculations on several species, such as H[2]O, NH[3], CO, NO[2] and NO [156], evidencing that the charge transfer depends on the adsorbate orientation, is nearly independent of the adsorption site, and that paramagnetic molecules may not behave as strong dopants: indeed, while NO[2] adsorption exhibits both chemisorption and physisorption characters with relatively strong doping (−0.1 electrons) at a large equilibrium distance (0.361nm), no doping occurs for NO, with negligible charge transfer (<0.02 electrons at 0.376nm distance). The different behavior of NO and NO[2] on graphene can be understood by looking at the spin-polarized DOS reported in Figures 15 and 16. The NO[2] LUMO (6a1, ↑) is 0.3eV below the graphene Fermi energy and therefore charge is transferred from graphene to the molecule. In the NO case, on the contrary, the HOMO is only 0.1eV below the Fermi energy, is degenerate, and coincides with the LUMO. Thus the charge transfer is weak and the leading phenomenon is the mixing between the NO HOMO/LUMO and the graphene orbitals; as the hybridization above the Fermi energy prevails, the orbital mixing leads to charge transfer to graphene.
The stable configuration of triplet O[2] on graphite has been modeled by accurate quantum chemistry techniques and high-level DFT calculations [157, 158], evidencing that the choice of the exchange-correlation functional is crucial (LDA and PBE are inappropriate) and that spin-polarized schemes are mandatory. A consensus was reached concerning the physisorption binding energy of 0.1eV at a distance of 0.31nm with negligible charge transfer.
GNRs have also been functionalized with polar groups (COOH, NH[2], NO[2], H[2]PO[3]), evidencing enhanced physisorption of CO[2] and CH[4]; CO[2] binding is by far preferred over CH[4] in hydrogen-passivated GNRs [159].
A comparative study of diatomic halogen molecules on graphene has evidenced the crucial role played by van der Waals interactions (more marked for species with large atomic radii) and the inadequacy of standard GGA-DFT [160].
(2) Carbon Nanotubes
The detection of physisorbed molecules on a SWCNT wall is an open problem of great technological interest, and ab-initio calculations have been employed to this aim for several gaseous species such as H[2]O, NH[3], CO[2], CH[4], NO[2], and O[2] [48]. Most of the molecules studied are charge donors with a small charge transfer (0.010–0.035 electrons per molecule) and exhibit a weak binding energy, with no substantial electron density overlap between the adsorbate and the nanotube. On the contrary, acceptors such as O[2] and NO[2] exhibit a significant charge transfer, often accompanied by a large adsorption energy, thus indicating that chemical and physical adsorption characters coexist.
Aromatic compounds interacting with CNSs show a similarly uncertain nature of the bonding, and their weak intermolecular forces, including van der Waals interactions, are often referred to as π-stacking interactions as they originate from the π-states of the interacting systems [50, 51]. Strictly speaking, π molecular orbitals can be found only in planar systems such as graphene, but for a CNT this concept still holds if one considers the bonds between the π-type orbitals, referred to as POAV (π-Orbital Axis Vector), that are nearly orthogonal to the three σ bonds between a carbon and its three neighbors. There are different metastable adsorption configurations of benzene on a CNT (see Figure 17), the most stable one in narrow CNTs being with the aromatic group above the middle of a C–C bond (bridge position), which is different from the one on graphene (top position). Therefore the most favorable adsorption geometry should evolve from bridge to top as the nanotube diameter increases. In any case, the electronic structure calculations performed on these systems evidenced that the DOS is a superposition of those of the isolated benzene and the CNT, consistently with the fact that the π-stacking is accompanied by a very small binding energy. Consequently, the adsorption of benzene on a CNT is more appropriately classified as physisorption, although van der Waals interactions are not involved; a possible explanation is related to the misalignment of the POAV of neighboring carbon atoms. The adsorption of benzene-derived molecules with different dipole moments and electron affinities, such as aniline (C[6]H[5]–NH[2]), toluene (C[6]H[5]–CH[3]), and nitrobenzene (C[6]H[5]–NO[2]), on a semiconducting (8,0) SWCNT has been compared to those of benzene and of the “closed-shell” functional groups NH[3], CH[4] and CH[3]NO[2] [161]. The general trend found is that compounds with closed shells are always physisorbed with minor changes of the CNT electronic structure, while both physisorption and chemisorption are possible for compounds with open shells. Moreover, the adsorption is promoted by either the functional groups or the benzene rings depending on the configuration: in the perpendicular configuration the functional groups prevail, while in the parallel configuration the interaction occurs through the π electrons. Thus, in the first case the adsorption energies are at least 150meV smaller. The equilibrium distances are smaller than the C[6]H[6] equilibrium distance and larger than those of the relevant functional groups, with the exception of toluene.
Similarly to the other CNSs, doping has been proposed to improve the physisorption of some molecular species on CNTs; B- and N-doped carbon nanotubes experience a large conductivity change when exposed to CO or H[2]O [54]; more specifically, CO molecules are physisorbed onto N-doped CNTs because no charge transfer occurs, while in the B-doped case chemisorption takes place (see below).
As for graphene, accurate quantum chemistry methods and high-level DFT calculations have been performed to study O[2] physisorption at the CNT “side-wall” [157, 158]. Also in this case the calculation scheme may affect the results, and the choice of the exchange-correlation functional is crucial. Using MP2 and other accurate quantum chemistry methods (DFT-B3LYP, DFT-PBE), it has been shown that O[2] in a triplet state is physisorbed at a CNT, independently of the chiral vector considered, at a distance of nearly 0.32nm with no charge transfer and a low binding energy.
6. Gas Chemisorption in CNSs
In this section we treat the systems where the adsorbate-substrate interaction can be unambiguously ascribed to chemisorption with predominant bonding phenomena.
As in the previous section, we treat separately the case of the hydrogen chemical adsorption on some of the most recurrent carbon nanostructured adsorption media due to its potential importance in
new technology and energy sources.
6.1. Hydrogen Chemisorption
Generally speaking, hydrogen chemisorption in carbon nanomaterials is not interesting for storage purposes because of the large binding energy involved, which would make the experimental conditions for the adsorption/desorption cycles of little practical use. However, in storage experiments a significant amount of physisorbed hydrogen molecules could be involved in bonding phenomena when the hydrogen molecules get close to the carbon atoms thanks to their thermal energy. Therefore hydrogen chemisorption must be considered explicitly.
6.1.1. Graphene
Chemisorption of atomic hydrogen on graphene leads to the appearance of a magnetic moment [162, 163] with a local lattice distortion nearby the adsorption site. The phenomenon gives rise to a strong Stoner ferromagnetism [164] with a magnetic moment of 1 μB per hydrogen atom, as evidenced by the spin density in Figure 18. According to the Stoner picture, magnetic ordering is driven by the exchange energy between the orbitals of the adsorption sites, and either ferromagnetism or antiferromagnetism occurs if the H-derived bound states are located at equivalent or different lattice sites. The energy difference between different adsorption sites, namely top, bridge and hollow, is small, and hydrogen diffusion occurs even at low temperature; as a consequence, two H atoms may easily recombine and form molecular hydrogen that is immediately desorbed from the graphene [165, 166].
On the other hand, full hydrogen coverage of both sides of an isolated graphene layer forms a stable structure where each carbon undergoes a hybridization transition from sp^2 to sp^3. The situation is different at intermediate coverage and strongly depends on the overall magnetization, as indicated by the linear dependence of the secondary H adsorption binding energy on the “site-integrated” magnetization [167]. Therefore, at least at low temperature, it would be possible to control the adsorption dynamics of H atoms by tuning the substrate magnetization. In Table 1 we report some selected data concerning the properties of hydrogen adsorption on graphene.
6.1.2. Fullerenes
Novel fullerene organo-metallic molecules have been deeply studied for hydrogen storage. To this aim, light elements, either in interstitial (Li and F) or in substitutional sites (N, B and Be), have been investigated as doping species of C[36] and C[60] by means of LDA and GGA ab-initio total energy calculations [23]. Fullerenes doped with B and Be at substitutional sites exhibit large hydrogen binding energies (0.40 and 0.65eV, respectively) due to the strong interaction between the B (Be) orbital and the hydrogen molecular orbital (MO).
The orbital interaction, evidenced in Figure 19, causes the splitting of the H[2] MO bonding state below the Fermi level, whereas the B state, which is normally located in the range 1–3eV above the Fermi energy for B-doped fullerenes, shifts to higher energy values. Similar phenomena occur also for C[35]Be–H[2].
The charge transfer analysis, performed along the direction orthogonal to the hydrogen axis (see Figure 20), shows that, in the case of B, only a few electrons are involved in the formation of a “three-center” bond, in contrast with the Be case; therefore the hydrogen adsorption energy for Be is larger than the one for B and nearly insensitive to the number of adsorbed molecules (see Figure 21), confirming that highly localized orbitals are needed for non-dissociative adsorption. In the B case, moreover, hydrogen desorption may occur more easily. In spite of the advantages of Be over B, however, a controlled Be doping is difficult to obtain, also because of its toxicity, while B-doped fullerenes have already been synthesized. In particular, first-principles molecular dynamics simulations have revealed that C[54]B[6] hydrogenation is unstable and that the reaction path (see Figure 22) causes the desorption to occur within the picosecond timescale [168]. Among the other doping species investigated, Si is interesting because industrial C[60] synthesis is performed on silicon surfaces [31]. H[2] adsorption on the Si site occurs with a 0.15eV binding energy, which indicates an intermediate state between physical and chemical adsorption. A similar situation is found also in Ni-doped fullerenes [27], where the Ni valence states are depleted by about half an electron, resulting in large van der Waals interactions with a gravimetric ratio of 6.8wt%. From Table 2, where some selected results concerning hydrogen adsorption on fullerenes are reported, it emerges that atomistic simulations predict Si, Li, Ca, and Sr as the doping species that could most enhance the hydrogen uptake in “fullerene-like” CNSs.
6.1.3. Carbon Nanotubes
Some of the most notable results found in the literature concerning hydrogen adsorption on a CNT are collected in Table 3, including data on both chemical and physical adsorption. Following experimental evidence, hydrogen chemisorption has been treated by DFT total energy calculations studying two energetically favored sites where atomic hydrogen is chemisorbed [39]. Both of them are accompanied by an sp^2 to sp^3 hybridization transition, the most stable being characterized by the hydrogen atoms alternating outside and inside the tube “side-wall” (zigzag type). Hydrogen half-full coverage of CNTs has been investigated with highly accurate quantum chemistry models, showing that this configuration is more stable with respect to the full coverage case and suggesting that the deformations induced by the adsorption of H atoms can affect the stability of CNTs [38].
However, many experimental studies have shown that the chemisorbed hydrogen storage capacity on pure CNT media is less than 0.01wt% at room temperature, which is impractical for storage applications. As for fullerenes, CNT doping with metallic impurities can improve the situation, as evidenced using Ti [28]. Unpolarized spin density calculations have shown that, while an H[2] molecule approaches a Ti-coated zigzag CNT, the energy decreases in two steps, the first one due to a charge overlap resulting in an increased attraction between H[2] and Ti, and the second one related to the H[2] molecule dissociation, with a final binding energy of 0.83eV. This scenario is quite different from the case of Ti-decorated fullerenes (where H[2] is simply physisorbed) because of the different coordination numbers of Ti in the two cases: in the CNT case, indeed, the larger Ti charge is responsible for the H[2] dissociation and the subsequent chemisorption. The first H[2] chemisorption event is followed by the physisorption of three other hydrogen molecules on the same Ti site. Alternatively, four hydrogen molecules can be simply physisorbed at a Ti-decorated site in a low-energy (0.1eV lower than the previous one) and high-symmetry configuration. In this case, the bonding mechanism is quite similar to the Dewar-Chatt-Duncanson model because it implies the donation of charge to the antibonding molecular orbital of the four H[2] (hybridized with the Ti d-state), followed by the transfer of 0.4 electrons to an empty Ti d-orbital. The above scenario is schematically drawn in Figure 23.
Because Pt surfaces can adsorb gaseous molecules reversibly, DFT calculations of molecular hydrogen on Pt-doped armchair CNTs have been performed [44, 111], showing that chemisorption is accompanied by an oxidative addition to Pt involving its d-orbital. However, Pt clustering may occur, which favors molecular hydrogen dissociation and reversible atomic hydrogen chemisorption [111].
Pd decoration of SWCNTs behaves similarly [45], with a storage capacity of about 3wt%. The most stable configuration exhibits both physical and chemical adsorption characters, with five hydrogen molecules adsorbed onto two adjacent Pd atoms through a partial hybridization between the H[2] orbitals and the Pd d-orbitals.
Recent ab-initio molecular dynamics simulations of nitrogen-decorated SWCNTs [169] have evidenced that hydrogen chemisorption occurs at 77K and is stable at 300K, while physisorption is enhanced at both temperatures. These results, obtained within a DFT-LDA scheme, have also evidenced that the 0K ground state properties of such systems should be revised at higher temperature, where desorption or enhanced chemisorption may occur, affecting storage. The scenario emerging from the above discussion and the results summarized in Table 3 is that TMs may enhance physisorption at the expense of having chemisorption on the CNT walls.
6.2. Gas Chemisorption for Sensing
As mentioned in the Introduction, gas chemical adsorption in CNSs has been studied focusing on gas sensing. The computational techniques required are, of course, quantum chemistry techniques, DFT calculations, and the like. Given the amount of literature found, we treat only nanostructured graphene and CNTs.
6.2.1. Graphene-Based NSs
Graphene charge carrier concentration can be strongly modified by gas chemisorption. Therefore, the electronic and magnetic properties of GNRs can be modified by edge functionalization or substitutional doping. However, GNRs with well-controlled saturated edges without dangling bonds (DBs) are far from being produced; these defects usually enhance the covalent bonding of chemical groups and molecules, thus playing a critical role in the feasibility of using such carbon-based nanostructures as gas sensors. Semiconducting armchair GNRs (AGNRs) are preferred with respect to zig-zag GNRs (ZGNRs), since gas molecule adsorption is expected to induce little modification of the electronic properties of metallic ZGNRs.
Adsorption of many gas molecules (CO, NO, NO[2], O[2], CO[2], and NH[3]) has been studied by spin-polarized GGA-DFT total energy calculations [170]: among the different gaseous species considered, only NH[3] has been found to greatly enhance the AGNR conductance after chemical adsorption; in this case a semiconducting/metallic transition occurs, thus suggesting that, in principle, a “GNR-based” junction can be used to detect NH[3] (see Figure 24) by current-voltage measurements. Indeed, the GNR sensor exhibits a semiconducting behavior when no gas molecule is adsorbed while, after NH[3] adsorption, the current increases linearly with the applied bias, evidencing a metallic behavior.
Molecular adsorption at vacancy sites in nanostructured graphene has also been investigated as a possible sensing mechanism, and this system is expected to behave similarly to GNRs. Vacancies and divacancies can be introduced by ion or electron irradiation under vacuum conditions, and their passivation is of crucial interest in the development of graphene nanoelectronics. Divacancies in graphene have been passivated using several possible gaseous species, such as O[2], N[2], B[2], CO, and H[2]O, in the context of DFT ab-initio calculations [14]. In the particular case of N[2], for instance, the molecule undergoes dissociation and subsequent chemical adsorption on the graphene layer, resulting in substitutional N impurities that introduce extra carriers and change the charge transport properties. A summary of the most important results discussed here can be found in Table 4, where we have also included data from physisorption studies. It is quite evident that chemical adsorption at divacancies is significantly stronger than adsorption at dangling bonds.
6.2.2. Carbon Nanotubes
As steam reforming of natural gas is employed to produce hydrogen, the interest in CH[4] and hydrocarbon adsorption has grown rapidly. However, the chemical functionalization of a CNT with hydrocarbons is difficult due to the low reactivity of these systems. Classical molecular dynamics and ab-initio calculations have been employed to study the adsorption improvement of accelerated CH[4] molecules (with energy in the range 5–100eV) on CNTs [49]. As methane cracking occurs, the obtained radicals (CH[3], CH[2] and CH) are adsorbed in different ways depending on the incident energy: while no decoration is observed at low energy, CH[4] dissociates into carbon (which is adsorbed on the CNT wall) and hydrogen molecules for incident energies higher than 60eV. Collisions can also break the tube wall and form structural defects that can be healed through high-temperature annealing (2000K), provided the incident energy is lower than 70eV. Among the investigated SWCNT structures, the ones with larger radius show lower reactivity. The adhesion of radicals modifies the SWCNT transport properties, as evidenced by the calculated DOS where localized energy states appear in the gap for CH[3] and CH adsorption. Weak binding between CH[4] and a (5,5) SWCNT is confirmed at zero temperature [48], while recent tight-binding molecular dynamics calculations have evidenced that at room temperature the dissociation reaction proceeds with a low enthalpy change, provided the thermal energy is sufficient to get the methane and the CNT close enough [52].
Doping with metallic particles can enhance the homolytic dissociation of the H–CH[3] bond; indeed, recent DFT calculations [171] have suggested that a zigzag nanotube, decorated with an interstitial C and Mo, can decrease the energy barrier for CH[4] dissociation thanks to the cooperation between the dipole induced in the CNT by the self-interstitial C atom and the Mo d-orbitals.
Simple alkenes (C[2]H[4]) and alkynes (C[2]H[2]) have also been proposed for catalytic hydrogenation on Pt-doped armchair nanotubes and studied by DFT [44]. The ethylene interaction with a CNT is relatively weak, despite the significant charge transfer in the case of a doped SWCNT. In the acetylene case, instead, the interaction is stronger and is presumably related to the observed hybridization transition from sp to sp^2. Metallic doping is not the only way to improve the CNT reactivity at 0K; indeed, accurate GGA-DFT calculations performed with a localized basis set (B3LYP/6-311+G* level of theory) have evidenced that nitrogen doping of CNTs enhances the oxygen stability at the CNT sidewall; this circumstance favors methane cracking at the oxygen impurity at 0K through orbital overlap [172]. Nitrogen-doped CNTs can thus be engineered to obtain highly reactive catalysts, comparable to metal ones.
Although it is only marginally pertinent to the theme of gas-CNT interaction, it is worth mentioning that CNTs can be functionalized to improve their solubility in water or in organic solvents, which is important for, for example, nanomedicine. For instance, the functionalization of a CNT with a carboxylic group or methane-derived radicals (–CH[2]–OH, –CH[2]–Cl, –CH[2]–SH, and –CONH–CH[3]) has been investigated with regard to the –OH free radical scavenging capability [173]. Ab-initio calculations, in the framework of the B3LYP hybrid HF-density functional and the 6-311+G(d) localized basis set, have shown that the CNT helicity affects the free radical scavenger capacity, armchair tubes being more effective than zig-zag ones. Moreover, it is shown that the functional groups with the best performance are the ones containing just carbon, hydrogen and nitrogen atoms. Different vacancy defects affect the OH addition on the SWCNT differently, while Stone-Wales point defects show the largest site-dependent effect [174].
The chemical reactivity of CNTs toward oxygen chemisorption has been addressed by MP2 calculations, to obtain accurate binding energies, and by DFT calculations (at various levels of theory) for larger systems. It has been shown that singlet O[2] gives the most stable chemisorption configuration, but chemisorption is not expected to occur at RT due to the large activation barrier [158]. It should be emphasized that the exchange-correlation functional adopted strongly affects the O[2] ground-state properties found within DFT [157].
Using analogous theoretical schemes, NH[3] on (9,0) CNTs has been studied, evidencing no charge transfer and suggesting that no chemisorption occurs [175].
CNTs as chemical sensors of other gaseous species can take advantage of the change in electrical conductivity induced by the adsorption of functional groups.
Despite recent controversial data [176], theoretical results seem to indicate that donor or acceptor species may change the carrier density of a p-type semiconducting CNT. In the case of a metallic CNT, however, transport properties show a peculiar dependence on the positions of the adsorbed molecules, with the possible suppression of conductivity [56]. It is known that transport in a metallic CNT occurs through two channels corresponding to the Bloch states at the K and K′ points of the graphene first Brillouin zone. A simple tight-binding picture of the coupling between an impurity level and the CNT π-orbitals shows that, in the case of an isolated impurity, one of the two channels is suppressed. Accurate DFT and nonequilibrium Green function (NEGF) transport calculations confirm this simple view, as shown by the transmission curves for different adsorbates, such as H, COOH, OH, NH[2], and NO[2], reported in Figure 25. If two impurities are adsorbed on the CNT sidewall, tight-binding and DFT-NEGF calculations still agree, showing that the transport behavior depends on the relative position of the two impurities expressed in terms of the graphene basis vectors: for certain relative positions the transmission is the same as the one obtained with only one impurity, while for all other positions the transmission is completely suppressed (see Figure 26). Some molecular species, such as CO, are not chemisorbed on semiconducting SWCNTs; however, the local chemical activity can be changed by applying a uniaxial stress orthogonal to the tube axis so that, for example, the CO molecule can be bonded on the surface [53].
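The impurity-induced transmission suppression discussed above can be illustrated with a deliberately simplified model: a single site with on-site energy ε coupled to two semi-infinite one-dimensional tight-binding leads, with the Landauer transmission obtained from the retarded Green's function. This one-channel toy model is not the two-channel CNT calculation of [56]; it only shows the mechanism by which an adsorbate-induced level reduces the transmission below the ballistic value. All parameters are illustrative.

```python
import numpy as np

def surface_gf(E, t):
    """Retarded surface Green's function of a semi-infinite 1D tight-binding
    lead with hopping t, valid inside the band |E| < 2|t|."""
    return (E - 1j * np.sqrt(4.0 * t**2 - E**2)) / (2.0 * t**2)

def transmission(E, eps_imp, t=1.0):
    """Landauer transmission through a single site with on-site energy
    eps_imp coupled to two identical semi-infinite leads."""
    sigma = t**2 * surface_gf(E, t)    # lead self-energy (same for L and R)
    gamma = -2.0 * sigma.imag          # broadening Gamma = -2 Im(Sigma)
    g_dev = 1.0 / (E - eps_imp - 2.0 * sigma)
    return gamma * gamma * abs(g_dev)**2

# Clean chain: perfect transmission at the band center
print(transmission(0.0, eps_imp=0.0))   # ~1.0
# An adsorbate-induced impurity level partially blocks the channel:
# at E = 0 the analytic result is 4t^2 / (eps^2 + 4t^2)
print(transmission(0.0, eps_imp=1.0))   # ~0.8
```

In the real metallic CNT there are two such channels, and the DFT-NEGF result cited above shows that a single impurity suppresses one of them entirely rather than merely reducing it.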
Impurity inclusion in CNTs can improve the chemical sensing of molecules that are not chemisorbed onto the pristine CNT sidewall. Ab initio calculations of a B-doped CNT for CO and H[2]O detection have evidenced an enhanced chemical reactivity with increased binding energy [54], accompanied by a large charge transfer from the nanotube to the molecule.
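The binding energies quoted throughout this section follow the usual total-energy bookkeeping, E[b] = E(host) + E(molecule) − E(host+molecule), so that a positive value means binding is favourable. The snippet below spells out this convention; every number is a purely hypothetical stand-in for a DFT total energy and none comes from the cited works.

```python
# Hypothetical DFT total energies (eV). The convention
#   E_b = E(host) + E(molecule) - E(host + molecule)
# makes E_b > 0 for a bound (exothermic) adsorption configuration.
E_host = -1234.56      # relaxed doped nanotube alone (hypothetical)
E_molecule = -14.22    # isolated gas molecule (hypothetical)
E_complex = -1249.63   # relaxed nanotube + adsorbed molecule (hypothetical)

binding_energy = (E_host + E_molecule) - E_complex
print(f"E_b = {binding_energy:.2f} eV")  # positive -> adsorption favourable
```

In practice the three totals must be computed with identical numerical settings (basis set, k-point sampling, pseudopotentials), and for localized basis sets a counterpoise correction is usually needed to remove basis-set superposition error.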
Systematic investigations on small molecules adsorbed on a Pt doped armchair SWCNT [44] have shown chemisorption and significant charge transfer from the nanotube to the adsorbate for most of the
examined species resulting in a change of the CNT conductance. NH[3] behaves in the opposite way due to the high LUMO energy of this molecule that, differently from the other cases, inhibits any
“back-donation” from the nanotube.
TM-doped CNT structures are perhaps the most promising candidates for detection of small molecules under standard conditions. In recent experiments, CNT samples have been pretreated by irradiation with Ar ion beams to form vacancies where TM atoms are strongly bonded thanks to their partially occupied d-orbitals.
DFT total energy calculations have shown that substitutional atoms of most transition metals (Ti, V, Cr, Mn, Fe, Co, Ni) exhibit a high binding energy at different sites of an armchair carbon nanotube (see Figure 27), with the exception of Cu and Zn, which are rather unstable because of their fully occupied d bands [55]. The general trend emerging is that light transition metals can bind several molecular adsorbates (N[2], O[2], H[2]O, CO, NH[3], and H[2]S) with large binding energy values. Water molecules are weakly bound to most of the active sites, suggesting that these sensors are robust against humidity. Ni-doped CNT systems seem to be the most promising candidates for CO detection, as indicated by the conductance data reported, together with the adsorption energies, in Figure 28: indeed, for this system the electrical resistance change per active site is greater than 1 Ω. In the same figure, data concerning adsorption of molecular species in mono- and divacancies on CNTs are also reported. To give a general view of gas sensing in CNTs, some of the most important results discussed above are summarized in Table 5, where the main features concerning both the binding energies and the ground-state configuration distances are provided, together with an indication of the level of theory used. For completeness, data concerning physisorption in various CNT structures are also reported in the same table.
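The "resistance change per active site" figure of merit can be made concrete with the Landauer formula G = G[0]·T, where G[0] = 2e^2/h is the conductance quantum and a clean metallic SWCNT carries two perfectly transmitting channels (T = 2). The sketch below converts a transmission loss into a resistance change; the 2.5% loss per adsorption site is a hypothetical value chosen for illustration, not a number read off Figure 28.

```python
# Landauer picture: G = G0 * T, with G0 = 2e^2/h the conductance quantum.
# A clean metallic SWCNT has two channels, hence total transmission T = 2.
G0 = 7.748091729e-5  # conductance quantum 2e^2/h in siemens (CODATA)

def resistance_ohm(total_transmission):
    """Two-terminal ballistic resistance for a given total transmission."""
    return 1.0 / (G0 * total_transmission)

R_clean = resistance_ohm(2.0)    # two perfectly transmitting channels
R_dosed = resistance_ohm(1.95)   # hypothetical 2.5% transmission loss
print(f"resistance change per site: {R_dosed - R_clean:.1f} Ohm")
```

Because the clean-tube resistance h/4e^2 is about 6.45 kΩ, even a small per-site transmission loss produces a resistance change far above the ~1 Ω sensitivity threshold mentioned in the text.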
7. Conclusions
The wide variety of data on atomistic simulations of gas adsorption in CNSs, which sometimes show considerable scatter, makes it difficult to draw a general scenario. It must be stressed, however, that “in silico” experiments play a fundamental role, of increasing importance, in the understanding of the phenomena involved in adsorption. Besides being a fundamental support to experiments, atomistic simulations and total energy calculations may reveal unexpected phenomena that could guide experimentalists. For instance, while general agreement has been reached concerning the unsuitability of pure CNSs for hydrogen storage, many predictions on alkali or transition metal doping indicate a new promising route where, however, contamination problems may represent a challenge.
Concerning methane adsorption in CNSs, instead, atomistic simulations predict storage properties close to the needs of industrial applications.
Ab initio total energy modelling is mandatory for impurities, doping, chemisorption, and sensing because of the inherent complexity of the processes involved. In these cases, simulations give encouraging results and highlight new challenges in controlling the local chemistry of CNSs for sensing, which is still under development.
Generally speaking, atomistic modelling has shown that TM doping is most probably the right way to engineer the various CNSs into valuable materials for sensing devices. A careful choice of the correct scheme is often mandatory to avoid artifacts; this should be evaluated case by case, because even DFT-LDA can be appropriate for selected systems.
However, it should be emphasized that the relationship between in silico and real experiments is often vitiated by the fact that the most accurate predictions available concern ground-state properties; finite temperatures (even room temperature) may change the scenario dramatically, because most of the investigated carbon nanostructured materials may behave quite differently due to possible hybridization transitions induced by thermal distortions. In the near future, the enormous increase of computational resources and the improvement of algorithms should make RT in silico experiments, such as accurate ab initio molecular dynamics, achievable also for large systems, thus giving a more complete insight into the behavior of real nanostructures for gas adsorption.
1. S. Satyapal, J. Petrovic, C. Read, G. Thomas, and G. Ordaz, “The U.S. Department of Energy's National Hydrogen Storage Project: progress towards meeting hydrogen-powered vehicle requirements,” Catalysis Today, vol. 120, no. 3-4, pp. 246–256, 2007.
2. E. Bekyarova, M. Davis, T. Burch et al., “Chemically functionalized single-walled carbon nanotubes as ammonia sensors,” Journal of Physical Chemistry B, vol. 108, no. 51, pp. 19717–19720, 2004.
3. J. Suehiro, G. Zhou, and M. Hara, “Fabrication of a carbon nanotube-based gas sensor using dielectrophoresis and its application for ammonia detection by impedance spectroscopy,” Journal of Physics D, vol. 36, no. 21, pp. L109–L114, 2003.
4. N. S. Lawrence, R. P. Deo, and J. Wang, “Electrochemical determination of hydrogen sulfide at carbon nanotube modified electrodes,” Analytica Chimica Acta, vol. 517, no. 1-2, pp. 131–137, 2004.
5. G. E. Froudakis, “Hydrogen interaction with carbon nanotubes: a review of ab initio studies,” Journal of Physics Condensed Matter, vol. 14, no. 17, pp. R453–R465, 2002.
6. D. G. Narehood, J. V. Pearce, P. C. Eklund et al., “Diffusion of H[2] adsorbed on single-walled carbon nanotubes,” Physical Review B, vol. 67, no. 20, Article ID 205409, 5 pages, 2003.
7. R. Saito, G. Dresselhaus, and M. S. Dresselhaus, Physical Properties of Carbon Nanotubes, Imperial College Press, London, UK, 1999.
8. K. S. Novoselov, A. K. Geim, S. V. Morozov et al., “Electric field in atomically thin carbon films,” Science, vol. 306, no. 5696, pp. 666–669, 2004.
9. A. K. Geim and K. S. Novoselov, “The rise of graphene,” Nature Materials, vol. 6, no. 3, pp. 183–191, 2007.
10. F. Cervantes-Sodi, G. Csányi, S. Piscanec, and A. C. Ferrari, “Edge-functionalized and substitutionally doped graphene nanoribbons: electronic and spin properties,” Physical Review B, vol. 77, no. 16, Article ID 165427, 13 pages, 2008.
11. J. S. Arellano, L. M. Molina, A. Rubio, and J. A. Alonso, “Density functional study of adsorption of molecular hydrogen on graphene layers,” Journal of Chemical Physics, vol. 112, no. 18, pp. 8114–8119, 2000.
12. M. H. F. Sluiter and Y. Kawazoe, “Cluster expansion method for adsorption: application to hydrogen chemisorption on graphene,” Physical Review B, vol. 68, no. 8, Article ID 085410, 7 pages, 2003.
13. I. Cabria, M. J. López, and J. A. Alonso, “Enhancement of hydrogen physisorption on graphene and carbon nanotubes by Li doping,” Journal of Chemical Physics, vol. 123, no. 20, Article ID 204721, 9 pages, 2005.
14. B. Sanyal, O. Eriksson, U. Jansson, and H. Grennberg, “Molecular adsorption in graphene with divacancy defects,” Physical Review B, vol. 79, Article ID 113409, 4 pages, 2009.
15. T. O. Wehling, K. S. Novoselov, S. V. Morozov et al., “Molecular doping of graphene,” Nano Letters, vol. 8, no. 1, pp. 173–177, 2008.
16. A. Saffarzadeh, “Modeling of gas adsorption on graphene nanoribbons,” Journal of Applied Physics, vol. 107, no. 11, Article ID 114309, 7 pages, 2010.
17. P. K. Ang, W. Chen, A. T. S. Wee, and P. L. Kian, “Solution-gated epitaxial graphene as pH sensor,” Journal of the American Chemical Society, vol. 130, no. 44, pp. 14392–14393, 2008.
18. H. Wu, J. Wang, X. Kang et al., “Glucose biosensor based on immobilization of glucose oxidase in platinum nanoparticles/graphene/chitosan nanocomposite film,” Talanta, vol. 80, no. 1, pp. 403–406, 2009.
19. S. M. Gatica, H. I. Li, R. A. Trasca, M. W. Cole, and R. D. Diehl, “Xe adsorption on a C[60] monolayer on Ag(111),” Physical Review B, vol. 77, no. 4, Article ID 045414, 8 pages, 2008.
20. R. A. Trasca, M. W. Cole, T. Coffey, and J. Krim, “Gas adsorption on a C[60] monolayer,” Physical Review E, vol. 77, no. 4, Article ID 041603, 5 pages, 2008.
21. R. C. Mowrey, M. M. Ross, and J. H. Callahan, “Molecular dynamics simulations and experimental studies of the formation of endohedral complexes of buckminsterfullerene,” Journal of Physical Chemistry, vol. 96, no. 12, pp. 4755–4761, 1992.
22. M. Yoon, S. Yang, and Z. Zhang, “Interaction between hydrogen molecules and metallofullerenes,” Journal of Chemical Physics, vol. 131, no. 6, Article ID 064707, 5 pages, 2009.
23. Y. H. Kim, Y. Zhao, A. Williamson, M. J. Heben, and S. B. Zhang, “Nondissociative adsorption of H[2] molecules in light-element-doped fullerenes,” Physical Review Letters, vol. 96, no. 1, Article ID 016102, 4 pages, 2006.
24. K. R. S. Chandrakumar and S. K. Ghosh, “Alkali-metal-induced enhancement of hydrogen adsorption in C[60] fullerene: an ab initio study,” Nano Letters, vol. 8, no. 1, pp. 13–19, 2008.
25. Q. Sun, P. Jena, Q. Wang, and M. Marquez, “First-principles study of hydrogen storage on Li[12]C[60],” Journal of the American Chemical Society, vol. 128, no. 30, pp. 9741–9745, 2006.
26. M. Yoon, S. Yang, E. Wang, and Z. Zheng, “Charged fullerenes as high-capacity hydrogen storage media,” Nano Letters, vol. 7, no. 9, pp. 2578–2583, 2007.
27. W. H. Shin, S. H. Yang, W. A. Goddard, and J. K. Kang, “Ni-dispersed fullerenes: hydrogen storage and desorption properties,” Applied Physics Letters, vol. 88, no. 5, Article ID 053111, 3 pages, 2006.
28. T. Yildirim and S. Ciraci, “Titanium-decorated carbon nanotubes as a potential high-capacity hydrogen storage medium,” Physical Review Letters, vol. 94, no. 17, Article ID 175501, 4 pages, 2005.
29. T. Yildirim, J. Íñiguez, and S. Ciraci, “Molecular and dissociative adsorption of multiple hydrogen molecules on transition metal decorated C[60],” Physical Review B, vol. 72, no. 15, Article ID 153403, 4 pages, 2005.
30. Y. Zhao, Y. H. Kim, A. C. Dillon, M. J. Heben, and S. B. Zhang, “Hydrogen storage in novel organometallic buckyballs,” Physical Review Letters, vol. 94, no. 15, Article ID 155504, 4 pages, 2005.
31. N. Naghshineh and M. Hashemianzadeh, “First-principles study of hydrogen storage on Si atoms decorated C[60],” International Journal of Hydrogen Energy, vol. 34, no. 5, pp. 2319–2324, 2009.
32. S. Iijima and T. Ichihashi, “Single-shell carbon nanotubes of 1-nm diameter,” Nature, vol. 363, no. 6430, pp. 603–605, 1993.
33. S. Iijima, “Helical microtubules of graphitic carbon,” Nature, vol. 354, no. 6348, pp. 56–58, 1991.
34. F. Darkrim and D. Levesque, “Monte Carlo simulations of hydrogen adsorption in single-walled carbon nanotubes,” Journal of Chemical Physics, vol. 109, no. 12, pp. 4981–4984, 1998.
35. Y. Ma, Y. Xia, M. Zhao, R. Wang, and L. Mei, “Effective hydrogen storage in single-wall carbon nanotubes,” Physical Review B, vol. 63, no. 11, Article ID 115422, 6 pages, 2001.
36. G. Stan and M. W. Cole, “Low coverage adsorption in cylindrical pores,” Surface Science, vol. 395, no. 2-3, pp. 280–291, 1998.
37. Q. Wang, J. K. Johnson, and J. Q. Broughton, “Path integral grand canonical Monte Carlo,” Journal of Chemical Physics, vol. 107, no. 13, pp. 5108–5117, 1997.
38. J. S. Arellano, L. M. Molina, A. Rubio, M. J. López, and J. A. Alonso, “Interaction of molecular and atomic hydrogen with (5,5) and (6,6) single-wall carbon nanotubes,” Journal of Chemical Physics, vol. 117, no. 5, pp. 2281–2288, 2002.
39. C. W. Bauschlicher and C. R. So, “High coverages of hydrogen on (10,0), (9,0) and (5,5) carbon nanotubes,” Nano Letters, vol. 2, no. 4, pp. 337–341, 2002.
40. S. M. Lee and Y. H. Lee, “Hydrogen storage in single-walled carbon nanotubes,” Applied Physics Letters, vol. 76, no. 20, pp. 2877–2879, 2000.
41. X. Zhang, D. Cao, and J. Chen, “Hydrogen adsorption storage on single-walled carbon nanotube arrays by a combination of classical potential and density functional theory,” Journal of Physical Chemistry B, vol. 107, no. 21, pp. 4942–4950, 2003.
42. G. Stan and M. W. Cole, “Hydrogen adsorption in nanotubes,” Journal of Low Temperature Physics, vol. 110, no. 1-2, pp. 539–544, 1998.
43. L. Chen, Y. Zhang, N. Koratkar, P. Jena, and S. K. Nayak, “First-principles study of interaction of molecular hydrogen with Li-doped carbon nanotube peapod structures,” Physical Review B, vol. 77, no. 3, Article ID 033405, 4 pages, 2008.
44. C. S. Yeung, L. V. Liu, and Y. A. Wang, “Adsorption of small gas molecules onto Pt-doped single-walled carbon nanotubes,” Journal of Physical Chemistry C, vol. 112, no. 19, pp. 7401–7411, 2008.
45. H. Xiao, S. H. Li, and J. X. Cao, “First-principles study of Pd-decorated carbon nanotube for hydrogen storage,” Chemical Physics Letters, vol. 483, no. 1–3, pp. 111–114, 2009.
46. A. G. Albesa, E. A. Fertitta, and J. L. Vicente, “Comparative study of methane adsorption on single-walled carbon nanotubes,” Langmuir, vol. 26, no. 2, pp. 786–795, 2010.
47. M. M. Calbi, S. M. Gatica, M. J. Bojan, and M. W. Cole, “Phases of neon, xenon, and methane adsorbed on nanotube bundles,” Journal of Chemical Physics, vol. 115, no. 21, pp. 9975–9981, 2001.
48. J. Zhao, A. Buldum, J. Han, and J. P. Lu, “Gas molecule adsorption in carbon nanotubes and nanotube bundles,” Nanotechnology, vol. 13, no. 2, pp. 195–200, 2002.
49. F. Li, Y. Xia, M. Zhao et al., “Selectable functionalization of single-walled carbon nanotubes resulting from CH[n] (n = 1–3) adsorption,” Physical Review B, vol. 69, no. 16, Article ID 165415, 6 pages, 2004.
50. F. Tournus and J. C. Charlier, “Ab initio study of benzene adsorption on carbon nanotubes,” Physical Review B, vol. 71, no. 16, Article ID 165421, 8 pages, 2005.
51. F. Tournus, S. Latil, M. I. Heggie, and J. C. Charlier, “π-stacking interaction between carbon nanotubes and organic molecules,” Physical Review B, vol. 72, no. 7, Article ID 075431, 5 pages, 2005.
52. L. Bagolini, F. Gala, and G. Zollo, “Methane cracking on single-wall carbon nanotubes studied by semi-empirical tight binding simulations,” Carbon, vol. 50, no. 2, pp. 411–420, 2012.
53. L. B. Da Silva, S. B. Fagan, and R. Mota, “Ab initio study of deformed carbon nanotube sensors for carbon monoxide molecules,” Nano Letters, vol. 4, no. 1, pp. 65–67, 2004.
54. S. Peng and K. Cho, “Ab initio study of doped carbon nanotube sensors,” Nano Letters, vol. 3, no. 4, pp. 513–517, 2003.
55. J. M. García-Lastra, D. J. Mowbray, K. S. Thygesen, A. Rubio, and K. W. Jacobsen, “Modeling nanoscale gas sensors under realistic conditions: computational screening of metal-doped carbon nanotubes,” Physical Review B, vol. 81, no. 24, Article ID 245429, 10 pages, 2010.
56. J. M. García-Lastra, K. S. Thygesen, M. Strange, and A. Rubio, “Conductance of sidewall-functionalized carbon nanotubes: universal dependence on adsorption sites,” Physical Review Letters, vol. 101, no. 23, Article ID 236806, 2008.
57. J. Jiang and S. I. Sandler, “Nitrogen adsorption on carbon nanotube bundles: role of the external surface,” Physical Review B, vol. 68, no. 24, Article ID 245412, 9 pages, 2003.
58. G. Stan, M. J. Bojan, S. Curtarolo, S. M. Gatica, and M. W. Cole, “Uptake of gases in bundles of carbon nanotubes,” Physical Review B, vol. 62, no. 3, pp. 2173–2180, 2000.
59. F. Fernandez-Alonso, F. J. Bermejo, C. Cabrillo, R. O. Loutfy, V. Leon, and M. L. Saboungi, “Nature of the bound states of molecular hydrogen in carbon nanohorns,” Physical Review Letters, vol. 98, no. 21, Article ID 215503, 4 pages, 2007.
60. H. Tanaka, H. Kanoh, M. El-Merraoui et al., “Quantum effects on hydrogen adsorption in internal nanospaces of single-wall carbon nanohorns,” Journal of Physical Chemistry B, vol. 108, no. 45, pp. 17457–17465, 2004.
61. B. Burteaux, A. Claye, B. W. Smith, M. Monthioux, D. E. Luzzi, and J. E. Fischer, “Abundance of encapsulated C[60] in single-wall carbon nanotubes,” Chemical Physics Letters, vol. 310, no. 1-2, pp. 21–24, 1999.
62. A. V. Vakhrushev and M. V. Suetin, “Carbon nanocontainers for gas storage,” Nanotechnologies in Russia, vol. 4, no. 11-12, pp. 806–815, 2010.
63. C. J. Cramer, Essentials of Computational Chemistry, John Wiley & Sons, West Sussex, UK, 2004.
64. D. Young, Computational Chemistry, John Wiley & Sons, New York, NY, USA, 2001.
65. R. O. Jones and O. Gunnarsson, “The density functional formalism, its applications and prospects,” Reviews of Modern Physics, vol. 61, no. 3, pp. 689–746, 1989.
66. E. Engel and R. M. Dreizler, Density Functional Theory—An Advanced Course, Springer, Heidelberg, Germany, 2011.
67. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, “Equation of state calculations by fast computing machines,” The Journal of Chemical Physics, vol. 21, no. 6, pp. 1087–1092, 1953.
68. D. Frenkel, Understanding Molecular Simulations, Computational Science Series, Academic Press, San Diego, Calif, USA, 2002.
69. R. Evans, “The nature of the liquid-vapor interface and other topics in the statistical mechanics of non-uniform, classical fluids,” Advances in Physics, vol. 28, no. 2, pp. 143–200, 1979.
70. N. D. Mermin, “Thermal properties of the inhomogeneous electron gas,” Physical Review, vol. 137, no. 5, pp. A1441–A1443, 1965.
71. P. Tarazona, “Free-energy density functional for hard spheres,” Physical Review A, vol. 31, no. 4, pp. 2672–2679, 1985.
72. P. Tarazona, U. Marini Bettolo Marconi, and R. Evans, “Phase equilibria of fluid interfaces and confined fluids,” Molecular Physics, vol. 60, pp. 573–589, 1987.
73. P. Hohenberg and W. Kohn, “Inhomogeneous electron gas,” Physical Review, vol. 136, no. 3, pp. B864–B871, 1964.
74. W. Kohn and L. J. Sham, “Self-consistent equations including exchange and correlation effects,” Physical Review, vol. 140, no. 4, pp. A1133–A1138, 1965.
75. M. C. Payne, M. P. Teter, D. C. Allan, T. A. Arias, and J. D. Joannopoulos, “Iterative minimization techniques for ab initio total-energy calculations: molecular dynamics and conjugate gradients,” Reviews of Modern Physics, vol. 64, no. 4, pp. 1045–1097, 1992.
76. U. von Barth, “Basic density-functional theory—an overview,” Physica Scripta T, vol. T109, pp. 9–39, 2004.
77. J. E. Dennis and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Non-Linear Equations, Prentice Hall, Englewood Cliffs, NJ, USA, 1983.
78. G. B. Bachelet, D. R. Hamann, and M. Schlüter, “Pseudopotentials that work: from H to Pu,” Physical Review B, vol. 26, no. 8, pp. 4199–4228, 1982.
79. D. R. Hamann, M. Schlüter, and C. Chiang, “Norm-conserving pseudopotentials,” Physical Review Letters, vol. 43, no. 20, pp. 1494–1497, 1979.
80. N. Troullier and J. L. Martins, “Efficient pseudopotentials for plane-wave calculations,” Physical Review B, vol. 43, no. 3, pp. 1993–2006, 1991.
81. D. Vanderbilt, “Soft self-consistent pseudopotentials in a generalized eigenvalue formalism,” Physical Review B, vol. 41, no. 11, pp. 7892–7895, 1990.
82. D. Vanderbilt, “Optimally smooth norm-conserving pseudopotentials,” Physical Review B, vol. 32, no. 12, pp. 8412–8415, 1985.
83. L. Kleinman and D. M. Bylander, “Efficacious form for model pseudopotentials,” Physical Review Letters, vol. 48, no. 20, pp. 1425–1428, 1982.
84. A. D. Becke, “Density-functional thermochemistry. III. The role of exact exchange,” The Journal of Chemical Physics, vol. 98, no. 7, pp. 5648–5652, 1993.
85. J. P. Perdew, K. Burke, and M. Ernzerhof, “Generalized gradient approximation made simple,” Physical Review Letters, vol. 77, no. 18, pp. 3865–3868, 1996.
86. J. P. Perdew and A. Zunger, “Self-interaction correction to density-functional approximations for many-electron systems,” Physical Review B, vol. 23, no. 10, pp. 5048–5079, 1981.
87. D. M. Ceperley and B. J. Alder, “Ground state of the electron gas by a stochastic method,” Physical Review Letters, vol. 45, no. 7, pp. 566–569, 1980.
88. Y. Okamoto and Y. Miyamoto, “Ab initio investigation of physisorption of molecular hydrogen on planar and curved graphenes,” Journal of Physical Chemistry B, vol. 105, no. 17, pp. 3470–3474, 2001.
89. N. W. Ashcroft and N. D. Mermin, Solid State Physics, Saunders College, Philadelphia, Pa, USA, 1976.
90. F. Jensen, Introduction to Computational Chemistry, John Wiley & Sons, New York, NY, USA, 2nd edition, 2007.
91. M. Marder, Condensed Matter Physics, John Wiley & Sons, New York, NY, USA, 2000.
92. T. Helgaker, P. Jorgensen, and J. Olsen, Molecular Electronic-Structure Theory, John Wiley & Sons, New York, NY, USA, 2002.
93. R. J. Bartlett and M. Musiał, “Coupled-cluster theory in quantum chemistry,” Reviews of Modern Physics, vol. 79, no. 1, pp. 291–352, 2007.
94. G. Stan, V. H. Crespi, M. W. Cole, and M. Boninsegni, “Interstitial He and Ne in nanotube bundles,” Journal of Low Temperature Physics, vol. 113, no. 3-4, pp. 447–452, 1998.
95. D. M. Ceperley, “Path integrals in the theory of condensed helium,” Reviews of Modern Physics, vol. 67, no. 2, pp. 279–355, 1995.
96. R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals, McGraw-Hill, New York, NY, USA.
97. J. K. Percus and G. J. Yevick, “Analysis of classical statistical mechanics by means of collective coordinates,” Physical Review, vol. 110, no. 1, pp. 1–13, 1958.
98. A. C. Dillon, K. M. Jones, T. A. Bekkedahl, C. H. Kiang, D. S. Bethune, and M. J. Heben, “Storage of hydrogen in single-walled carbon nanotubes,” Nature, vol. 386, no. 6623, pp. 377–379, 1997.
99. V. Meregalli and M. Parrinello, “Review of theoretical calculations of hydrogen storage in carbon-based materials,” Applied Physics A, vol. 72, no. 2, pp. 143–146, 2001.
100. G. E. Ioannatos and X. E. Verykios, “H[2] storage on single- and multi-walled carbon nanotubes,” International Journal of Hydrogen Energy, vol. 35, no. 2, pp. 622–628, 2010.
101. K. L. Lim, H. Kazemian, Z. Yaakob, and W. R. W. Daud, “Solid-state materials and methods for hydrogen storage: a critical review,” Chemical Engineering and Technology, vol. 33, no. 2, pp. 213–226, 2010.
102. C. Liu, Y. Chen, C. Z. Wu, S. T. Xu, and H. M. Cheng, “Hydrogen storage in carbon nanotubes revisited,” Carbon, vol. 48, no. 2, pp. 452–455, 2010.
103. K. A. Williams and P. C. Eklund, “Monte Carlo simulations of H[2] physisorption in finite-diameter carbon nanotube ropes,” Chemical Physics Letters, vol. 320, no. 3-4, pp. 352–358, 2000.
104. I. F. Silvera and V. V. Goldman, “The isotropic intermolecular potential for H[2] and D[2] in the solid and gas phases,” The Journal of Chemical Physics, vol. 69, no. 9, pp. 4209–4213, 1978.
105. A. D. Crowell and J. S. Brown, “Laterally averaged interaction potentials for ^1H[2] and ^2H[2] on the (0001) graphite surface,” Surface Science, vol. 123, no. 2-3, pp. 296–304, 1982.
106. H. Dodziuk and G. Dolgonos, “Molecular modeling study of hydrogen storage in carbon nanotubes,” Chemical Physics Letters, vol. 356, no. 1-2, pp. 79–83, 2002.
107. A. Chambers, C. Park, R. T. K. Baker, and N. M. Rodriguez, “Hydrogen storage in graphite nanofibers,” Journal of Physical Chemistry B, vol. 102, no. 22, pp. 4253–4256, 1998.
108. M. Volpe and F. Cleri, “Role of surface chemistry in hydrogen adsorption in single-wall carbon nanotubes,” Chemical Physics Letters, vol. 371, no. 3-4, pp. 476–482, 2003.
109. S. K. Bhatia and A. L. Myers, “Optimum conditions for adsorptive storage,” Langmuir, vol. 22, no. 4, pp. 1688–1700, 2006.
110. J. Guan, X. Pan, X. Liu, and X. Bao, “Syngas segregation induced by confinement in carbon nanotubes: a combined first-principles and Monte Carlo study,” Journal of Physical Chemistry C, vol. 113, no. 52, pp. 21687–21692, 2009.
111. S. Dag, Y. Ozturk, S. Ciraci, and T. Yildirim, “Adsorption and dissociation of hydrogen molecules on bare and functionalized carbon nanotubes,” Physical Review B, vol. 72, no. 15, Article ID
155404, 8 pages, 2005. View at Publisher · View at Google Scholar · View at Scopus
112. T. A. Halgren, “Representation of van der Waals (vdW) interactions in molecular mechanics force fields: potential form, combination rules, and vdW parameters,” Journal of the American Chemical
Society, vol. 114, no. 20, pp. 7827–7843, 1992. View at Scopus
113. B. Kuchta, L. Firlej, P. Pfeifer, and C. Wexler, “Numerical estimation of hydrogen storage limits in carbon-based nanospaces,” Carbon, vol. 48, no. 1, pp. 223–231, 2010. View at Publisher · View
at Google Scholar · View at Scopus
114. J. Cheng, X. Yuan, X. Fang, and L. Zhang, “Computer simulation of hydrogen physisorption in a Li-doped single walled carbon nanotube array,” Carbon, vol. 48, no. 2, pp. 567–570, 2010. View at
Publisher · View at Google Scholar · View at Scopus
115. H. Lee, J. Ihm, M. L. Cohen, and S. G. Louie, “Calcium-decorated carbon nanotubes for high-capacity hydrogen storage: first-principles calculations,” Physical Review B, vol. 80, no. 11, Article
ID 115412, 5 pages, 2009. View at Publisher · View at Google Scholar · View at Scopus
116. A. Touzik and H. Hermann, “Theoretical study of hydrogen adsorption on graphitic materials,” Chemical Physics Letters, vol. 416, no. 1–3, pp. 137–141, 2005. View at Publisher · View at Google
Scholar · View at Scopus
117. W. A. Steele and M. J. Bojan, “Simulation studies of sorption in model cylindrical micropores,” Advances in Colloid and Interface Science, vol. 76-77, pp. 153–178, 1998. View at Scopus
118. L. Zhan, K. Li, X. Zhu, C. Lv, and L. Ling, “Adsorption limit of supercritical hydrogen on super-activated carbon,” Carbon, vol. 40, no. 3, pp. 455–457, 2002. View at Publisher · View at Google
Scholar · View at Scopus
119. M. Georgakis, G. Stavropoulos, and G. P. Sakellaropoulos, “Molecular dynamics study of hydrogen adsorption in carbonaceous microporous materials and the effect of oxygen functional groups,”
International Journal of Hydrogen Energy, vol. 32, no. 12, pp. 1999–2004, 2007. View at Publisher · View at Google Scholar · View at Scopus
120. T. X. Nguyen, N. Cohaut, J. S. Bae, and S. K. Bhatia, “New method for atomistic modeling of the microstructure of activated carbons using hybrid reverse Monte Carlo simulation,” Langmuir, vol.
24, no. 15, pp. 7912–7922, 2008. View at Publisher · View at Google Scholar · View at Scopus
121. G. Opletal, T. Petersen, B. O'Malley et al., “Hybrid approach for generating realistic amorphous carbon structure using metropolis and reverse Monte Carlo,” Molecular Simulation, vol. 28, no.
10-11, pp. 927–938, 2002. View at Publisher · View at Google Scholar · View at Scopus
122. N. Marks, “Modelling diamond-like carbon with the environment-dependent interaction potential,” Journal of Physics Condensed Matter, vol. 14, no. 11, pp. 2901–2927, 2002. View at Publisher ·
View at Google Scholar · View at Scopus
123. D. W. Brenner, O. A. Shenderova, J. A. Harrison, S. J. Stuart, B. Ni, and S. B. Sinnott, “A second-generation reactive empirical bond order (REBO) potential energy expression for hydrocarbons,”
Journal of Physics Condensed Matter, vol. 14, no. 4, pp. 783–802, 2002. View at Publisher · View at Google Scholar · View at Scopus
124. L. M. Sesè, “Feynman-Hibbs potentials and path integrals for quantum Lennard-Jones systems: theory and Monte Carlo simulations,” Molecular Physics, vol. 85, pp. 931–947, 1995.
125. T. X. Nguyen, J. S. Bae, Y. Wang, and S. K. Bhatia, “On the strength of the hydrogen-carbon interaction as deduced from physisorption,” Langmuir, vol. 25, no. 8, pp. 4314–4319, 2009. View at
Publisher · View at Google Scholar · View at Scopus
126. D. Levesque, A. Gicquel, F. L. Darkrim, and S. B. Kayiran, “Monte Carlo simulations of hydrogen storage in carbon nanotubes,” Journal of Physics Condensed Matter, vol. 14, no. 40, pp. 9285–9293,
2002. View at Publisher · View at Google Scholar · View at Scopus
127. S. J. V. Frankland and D. W. Brenner, “Hydrogen Raman shifts in carbon nanotubes from molecular dynamics simulation,” Chemical Physics Letters, vol. 334, no. 1–3, pp. 18–23, 2001. View at
Publisher · View at Google Scholar · View at Scopus
128. S. C. Wang, L. Senbetu, and C. W. Woo, “Superlattice of parahydrogen physisorbed on graphite surface,” Journal of Low Temperature Physics, vol. 41, no. 5-6, pp. 611–628, 1980. View at Publisher
· View at Google Scholar · View at Scopus
129. W. A. Steele, The Interaction of Gases with Solid Surfaces, Pergamon Press, New York, N Y, USA, 1974.
130. H. Miyaoka, T. Ichikawa, and Y. Kojima, “The reaction process of hydrogen absorption and desorption on the nanocomposite of hydrogenated graphite and lithium hydride,” Nanotechnology, vol. 20,
no. 20, Article ID 204021, 2009. View at Publisher · View at Google Scholar · View at Scopus
131. P. Guay, B. L. Stansfield, and A. Rochefort, “On the control of carbon nanostructures for hydrogen storage applications,” Carbon, vol. 42, no. 11, pp. 2187–2193, 2004. View at Publisher · View
at Google Scholar · View at Scopus
132. D. J. Browning, M. L. Gerrard, J. B. Lakeman, I. M. Mellor, R. J. Mortimer, and M. C. Turpin, “Studies into the storage of hydrogen in carbon nanofibers: proposal of a possible reaction
mechanism,” Nano Letters, vol. 2, no. 3, pp. 201–205, 2002. View at Publisher · View at Google Scholar · View at Scopus
133. O. N. Srivastava and B. K. Gupta, “Further studies on microstructural characterization and hydrogenation behaviour of graphitic nanofibres,” International Journal of Hydrogen Energy, vol. 26,
no. 8, pp. 857–862, 2001. View at Publisher · View at Google Scholar · View at Scopus
134. Z. H. Zhu, G. Q. Lu, and S. C. Smith, “Comparative study of hydrogen storage in Li- and K-doped carbon materials—theoretically revisited,” Carbon, vol. 42, no. 12-13, pp. 2509–2514, 2004. View
at Publisher · View at Google Scholar · View at Scopus
135. H. Lee, J. Ihm, M. L. Cohen, and S. G. Louie, “Calcium-decorated graphene-based nanostructures for hydrogen storage,” Nano Letters, vol. 10, no. 3, pp. 793–798, 2010. View at Publisher · View at
Google Scholar · View at Scopus
136. H. An, C. S. Liu, Z. Zeng, C. Fan, and X. Ju, “Li-doped B2 C graphene as potential hydrogen storage medium,” Applied Physics Letters, vol. 98, no. 17, Article ID 173101, 3 pages, 2011. View at
Publisher · View at Google Scholar · View at Scopus
137. M. Andersen, L. Hornekær, and B. Hammer, “Graphene on metal surfaces and its hydrogen adsorption: a meta-GGA functional study,” Physical Review B, vol. 86, no. 8, Article ID 085405, 6 pages,
2012. View at Publisher · View at Google Scholar · View at Scopus
138. A. Sigal, M. I. Rojas, and E. P. M. Leiva, “Is hydrogen storage possible in metal-doped graphite 2D systems in conditions found on earth?” Physical Review Letters, vol. 107, no. 15, Article ID
158701, 4 pages, 2011. View at Publisher · View at Google Scholar · View at Scopus
139. G. J. Kubas, “Metal-dihydrogen and σ-bond coordination: the consummate extension of the Dewar-Chatt-Duncanson model for metal-olefin π bonding,” Journal of Organometallic Chemistry, vol. 635,
no. 1-2, pp. 37–68, 2001. View at Publisher · View at Google Scholar · View at Scopus
140. Q. Sun, Q. Wang, P. Jena, and Y. Kawazoe, “Clustering of Ti on a C[60] surface and its effect on hydrogen storage,” Journal of the American Chemical Society, vol. 127, no. 42, pp. 14582–14583,
2005. View at Publisher · View at Google Scholar · View at Scopus
141. M. Yoon, S. Yang, C. Hicke, E. Wang, D. Geohegan, and Z. Zhang, “Calcium as the superior coating metal in functionalization of carbon fullerenes for high-capacity hydrogen storage,” Physical
Review Letters, vol. 100, no. 20, Article ID 206806, 4 pages, 2008. View at Publisher · View at Google Scholar · View at Scopus
142. A. J. Maeland and A. T. Skjeltrop, Inventors; Hydrogen storage in carbon material. patent 6290753. 2001.
143. A. Gotzias, H. Heiberg-Andersen, M. Kainourgiakis, and T. Steriotis, “Grand canonical Monte Carlo simulations of hydrogen adsorption in carbon cones,” Applied Surface Science, vol. 256, no. 17,
pp. 5226–5231, 2010. View at Publisher · View at Google Scholar · View at Scopus
144. P. B. Sorokin, H. Lee, L. Y. Antipina, A. K. Singh, and B. I. Yakobson, “Calcium-decorated carbyne networks as hydrogen storage media,” Nano Letters, vol. 11, no. 7, pp. 2660–2665, 2011. View at
Publisher · View at Google Scholar · View at Scopus
145. D. Cao, X. Zhang, J. Chen, W. Wang, and J. Yun, “Optimization of single-walled carbon nanotube arrays for methane storage at room temperature,” Journal of Physical Chemistry B, vol. 107, no. 48,
pp. 13286–13292, 2003. View at Scopus
146. W. A. Steele, “The physical interaction of gases with crystalline solids. I. Gas-solid energies and properties of isolated adsorbed atoms,” Surface Science, vol. 36, no. 1, pp. 317–352, 1973.
View at Scopus
147. P. Kowalczyk, L. Solarz, D. D. Do, A. Samborski, and J. M. D. MacElroy, “Nanoscale tubular vessels for storage of methane at ambient temperatures,” Langmuir, vol. 22, no. 21, pp. 9035–9040,
2006. View at Publisher · View at Google Scholar · View at Scopus
148. F. J. A. L. Cruz and J. P. B. Mota, “Thermodynamics of adsorption of light alkanes and alkenes in single-walled carbon nanotube bundles,” Physical Review B, vol. 79, no. 16, Article ID 165426,
14 pages, 2009. View at Publisher · View at Google Scholar · View at Scopus
149. P. Kowalczyk and S. K. Bhatia, “Optimization of slitlike carbon nanopores for storage of hythane fuel at ambient temperatures,” Journal of Physical Chemistry B, vol. 110, no. 47, pp.
23770–23776, 2006. View at Publisher · View at Google Scholar · View at Scopus
150. P. Kowalczyk, L. Brualla, A. Zywociński, and S. K. Bhatia, “Single-walled carbon nanotubes: efficient nanomaterials for separation and on-board vehicle storage of hydrogen and methane mixture at
room temperature?” Journal of Physical Chemistry C, vol. 111, no. 13, pp. 5250–5257, 2007. View at Publisher · View at Google Scholar · View at Scopus
151. X. Peng, D. Cao, and W. Wang, “Heterogeneity characterization of ordered mesoporous carbon adsorbent CMK-1 for methane and hydrogen storage: GCMC simulation and comparison with experiment,”
Journal of Physical Chemistry C, vol. 112, no. 33, pp. 13024–13036, 2008. View at Publisher · View at Google Scholar · View at Scopus
152. R. J. Dombrowski, D. R. Hyduke, and C. M. Lastoskie, “Pore size analysis of activated carbons from argon and nitrogen porosimetry using density functional theory,” Langmuir, vol. 16, no. 11, pp.
5041–5050, 2000. View at Publisher · View at Google Scholar · View at Scopus
153. J. P. Olivier, “Improving the models used for calculating the size distribution of micropore volume of activated carbons from adsorption data,” Carbon, vol. 36, no. 10, pp. 1469–1472, 1998. View
at Scopus
154. P. I. Ravikovitch, A. Vishnyakov, R. Russo, and A. V. Neimark, “Unified approach to pore size characterization of microporous carbonaceous materials from N[2], Ar, and CO[2] adsorption
isotherms,” Langmuir, vol. 16, no. 5, pp. 2311–2320, 2000. View at Publisher · View at Google Scholar · View at Scopus
155. M. B. Sweatman, N. Quirke, W. Zhu, and F. Kapteijn, “Analysis of gas adsorption in Kureha active carbon based on the slit-pore model and Monte-Carlo simulations,” Molecular Simulation, vol. 32,
no. 7, pp. 513–522, 2006. View at Publisher · View at Google Scholar · View at Scopus
156. O. Leenaerts, B. Partoens, and F. M. Peeters, “Adsorption of H[2]O, NH[2], CO, NO[2], and NO on graphene: a first-principles study,” Physical Review B, vol. 77, no. 12, Article ID 125416, 6
pages, 2008. View at Publisher · View at Google Scholar · View at Scopus
157. P. Giannozzi, R. Car, and G. Scoles, “Oxygen adsorption on graphite and nanotubes,” Journal of Chemical Physics, vol. 118, no. 3, pp. 1003–1006, 2003. View at Publisher · View at Google Scholar
· View at Scopus
158. A. Ricca, C. W. Bauschlicher, and A. Maiti, “Comparison of the reactivity of O[2] with a (10,0) and a (9,0) carbon nanotube,” Physical Review B, vol. 68, no. 3, Article ID 035433, 7 pages, 2003.
View at Publisher · View at Google Scholar · View at Scopus
159. B. C. Wood, S. Y. Bhide, D. Dutta et al., “Methane and carbon dioxide adsorption on edge-functionalized graphene: a comparative DFT study,” Journal of Chemical Physics, vol. 137, no. 5, Article
ID 054702, 11 pages, 2012. View at Publisher · View at Google Scholar · View at Scopus
160. A. N. Rudenko, F. J. Keil, M. I. Katsnelson, and A. I. Lichtenstein, “Adsorption of diatomic halogen molecules on graphene: a van der Waals density functional study,” Physical Review B, vol. 82,
no. 3, Article ID 035427, 7 pages, 2010. View at Publisher · View at Google Scholar · View at Scopus
161. L. M. Woods, Ş. C. Bǎdescu, and T. L. Reinecke, “Adsorption of simple benzene derivatives on carbon nanotubes,” Physical Review B, vol. 75, no. 15, Article ID 155415, 9 pages, 2007. View at
Publisher · View at Google Scholar · View at Scopus
162. D. W. Boukhvalov, M. I. Katsnelson, and A. I. Lichtenstein, “Hydrogen on graphene: electronic structure, total energy, structural distortions and magnetism from first-principles calculations,”
Physical Review B, vol. 77, no. 3, Article ID 035427, 7 pages, 2008. View at Publisher · View at Google Scholar · View at Scopus
163. O. V. Yazyev and L. Helm, “Defect-induced magnetism in graphene,” Physical Review B, vol. 75, no. 12, Article ID 125408, 5 pages, 2007. View at Publisher · View at Google Scholar · View at
164. P. Mohn, Magnetism in the Solid State, vol. 134 of Springer Series in Solid State Sciences, Springer, Berlin, Germany, 2003.
165. O. Maresca, R. J. M. Pellenq, F. Marinelli, and J. Conard, “A search for a strong physisorption site for H[2] in Li-doped porous carbons,” Journal of Chemical Physics, vol. 121, no. 24, pp.
12548–12558, 2004. View at Publisher · View at Google Scholar · View at Scopus
166. Y. Miura, H. Kasai, W. Diño, H. Nakanishi, and T. Sugimoto, “First principles studies for the dissociative adsorption of H[2] on graphene,” Journal of Applied Physics, vol. 93, no. 6, pp.
3395–3400, 2003. View at Publisher · View at Google Scholar · View at Scopus
167. S. Casolo, O. M. Løvvik, R. Martinazzo, and G. F. Tantardini, “Understanding adsorption of hydrogen atoms on graphene,” Journal of Chemical Physics, vol. 130, no. 5, Article ID 054704, 10 pages,
2009. View at Publisher · View at Google Scholar · View at Scopus
168. H. Lee, J. Li, G. Zhou, W. Duan, G. Kim, and J. Ihm, “Room-temperature dissociative hydrogen chemisorption on boron-doped fullerenes,” Physical Review B, vol. 77, no. 23, Article ID 235101, 5
pages, 2008. View at Publisher · View at Google Scholar · View at Scopus
169. E. Rangel, G. Ruiz-Chavarria, L. F. Magana, and J. S. Arellano, “Hydrogen adsorption on N-decorated single wall carbon nanotubes,” Physics Letters, Section A, vol. 373, no. 30, pp. 2588–2591,
2009. View at Publisher · View at Google Scholar · View at Scopus
170. B. Huang, Z. Li, Z. Liu et al., “Adsorption of gas molecules on graphene nanoribbons and its implication for nanoscale molecule sensor,” Journal of Physical Chemistry C, vol. 112, no. 35, pp.
13442–13446, 2008. View at Publisher · View at Google Scholar · View at Scopus
171. Z. H. Guo, X. H. Yan, and Y. Xiao, “Dissociation of methane on the surface of charged defective carbon nanotubes,” Physics Letters, Section A, vol. 374, no. 13-14, pp. 1534–1538, 2010. View at
Publisher · View at Google Scholar · View at Scopus
172. X. Hu, Z. Zhou, Q. Lin, Y. Wu, and Z. Zhang, “High reactivity of metal-free nitrogen-doped carbon nanotube for the C-H activation,” Chemical Physics Letters, vol. 503, no. 4–6, pp. 287–291,
2011. View at Publisher · View at Google Scholar · View at Scopus
173. A. Martínez, M. Francisco-Marquez, and A. Galano, “Effect of different functional groups on the free radical scavenging capability of single-walled carbon nanotubes,” Journal of Physical
Chemistry C, vol. 114, no. 35, pp. 14734–14739, 2010. View at Publisher · View at Google Scholar · View at Scopus
174. A. Galano, M. Francisco-Marquez, and A. Martínez, “Influence of point defects on the free-radical scavenging capability of single-walled carbon nanotubes,” Journal of Physical Chemistry C, vol.
114, no. 18, pp. 8302–8308, 2010. View at Publisher · View at Google Scholar · View at Scopus
175. C. W. Bauschlicher and A. Ricca, “Binding of NH[3] to graphite and to a (9,0) carbon nanotube,” Physical Review B, vol. 70, no. 11, Article ID 115409, 6 pages, 2004. View at Publisher · View at
Google Scholar · View at Scopus
176. A. Goldoni, L. Petaccia, S. Lizzit, and R. Larciprete, “Sensing gases with carbon nanotubes: a review of the actual situation,” Journal of Physics Condensed Matter, vol. 22, no. 1, Article ID
013001, 2010. View at Publisher · View at Google Scholar · View at Scopus | {"url":"http://www.hindawi.com/journals/jnm/2012/152489/","timestamp":"2014-04-17T19:44:36Z","content_type":null,"content_length":"404993","record_id":"<urn:uuid:10b6e03e-b9e4-4a26-9b3c-bb577c7bfefb>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00569-ip-10-147-4-33.ec2.internal.warc.gz"} |
re-allocating a range of values for a program
August 16th 2010, 06:39 AM #1
Aug 2010
Please forgive me if this is the wrong forum. I am not quite sure under which heading this particular problem falls. Mods, please move it if it is not in the correct place. Thanks.
OK, I wonder if some bright mathematician can help me with a PHP programming problem please?
Let me explain. Greys in a black and white image are in the range 0-255. These are the only possible values. Each greyscale pixel has a value in this range.
I have an image where the range is quite narrow (57-205) and I want to re-distribute it throughout the scale (0-255) in as even a way as possible. I am writing a program to re-allocate these values
and I would like help in working out a formula or formulas to achieve this.
Can anyone work out a formula or a couple of formulas to allow me to re-distribute these values over the full range please. It does not matter if there are spaces without values, but I would like
a nice relatively even spread if possible.
Thank you so much for any super-duper mathematical solutions. As you can tell, I am not great at maths. :-)
Thanks for your help.
OK, I don't know if this is going to make much sense but here is my attempt!
Let the small range be from "a" to "b". Let the big range be from "A" to "B". Then we want the ratios between the color values to remain the same. Here is what I tried. Let "x" be a point between
"a" and "b", it will be one of the color values in the initial range. Then we map it to a point "y" in the big range in such a way that the distance from "x" to "a" relative to the whole range
("a" to "b") is the same as the distance from "A" to "y" relative the whole big range.
Here is the mathematical way of saying it:
$\displaystyle \frac{x-a}{b-a} = \frac{y-A}{B-A}$
Solving for y, you get:
$\displaystyle y = (x-a)\frac{B-A}{b-a}+A$
Similarly, if you want to go the other way, you just use
$\displaystyle x = (y-A)\frac{b-a}{B-A}+a$
In your case you have a = 57, b = 205 and A = 0 and B = 255. Then you get
$\displaystyle y = (x-57)\frac{255}{148}$
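In code, the forward and inverse mappings look like this — a minimal sketch in Python (the thread is about PHP, but the arithmetic translates directly; the function names are my own):

```python
def remap(x, a=57, b=205, A=0, B=255):
    """Linearly map a grey value x from the narrow range [a, b]
    onto the full range [A, B], keeping relative positions equal."""
    return (x - a) * (B - A) / (b - a) + A

def remap_back(y, a=57, b=205, A=0, B=255):
    """Inverse mapping, for going the other way."""
    return (y - A) * (b - a) / (B - A) + a

print(remap(57), remap(205))  # 0.0 255.0
```

For actual pixel output you would round and cast, e.g. `int(round(remap(x)))`, since greyscale values must be integers in 0-255.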
Vlasev, This makes perfect sense.
Thank you so much for spending the time and effort to sort this out for me.
Thanks again.
Hello, ocpaul20!
I got the same result as Vlasev . . .
You have a range of numbers from 57 to 205.
And you want to "spread" them to range from 0 to 255.
So we have these two sets of numbers:
. . $\begin{array}{c||c|c|c|c|c} x & 57 & 58 & 59 & \hdots & 205 \\ \hline y & 0 & - & - & \hdots & 255 \end{array}$
So we have a linear function which ranges from $(57,0)$ to $(205,255)$
. . The slope of this line is: . $m \;=\;\frac{255-0}{205-57} \;=\;\frac{255}{148}$
. . The line passes through $(57,0)$
Therefore, the equation of the line is: . $y \;=\;\frac{255}{148}(x - 57)$
soroban, thanks to you too.
Program working well and calculating both ways. :-)
Recounting the Rationals, part I
This is the first in a series of posts I’m planning to write on the paper “Recounting the Rationals“, by Neil Calkin and Herbert Wilf, mathematicians at Clemson University and the University of
Pennsylvania, respectively. I’m really excited about it, and I hope that you’ll soon see why! It’s an incredibly elegant and interesting paper, but it’s also quite accessible to anyone with only a
modest background in mathematics—that is, it would be if it were longer than three and a half pages. Calkin and Wilf’s main audience is other mathematicians, of course, so their presentation is
fairly concise, trusting the reader to fill in many of the details. So, I plan to take my time going through their ideas, filling in many of the details, and highlighting some interesting tangents
along the way. However, I do not plan to “dumb it down” in any way—I’m going to write about every single bit of mathematics in their paper.
Today, by way of introduction, I’d like to talk about the motivating question: is it possible to make a list of all the positive rational numbers? If it is possible, how can we do it in a nice way?
$\mathbb{Q} = 1/1, 1/2, 3/7, 4/3, 39/17, \text{um}\dots ?$
Well, let’s think about it. How would you make a list of all the positive rational numbers? (You may wish to stop and actually think about this question for a few minutes. No, really!) Recall that
rational numbers are simply fractions of the form $p/q$, where p (the numerator) and q (the denominator) are integers, and q isn’t zero. For example, $1/2, 7/6$, and $14/1$ are positive rational
numbers. (You may have learned about “proper” and “improper” fractions in school, where “proper” fractions are those in which the numerator is less than the denominator; I would advise you to forget
about such a mathematically useless distinction as quickly as possible!)
Here’s one idea: list all the rational numbers with a denominator of 1, followed by all the rational numbers with a denominator of 2, and so on. Like this:
Unfortunately, while technically correct (for suitable values of “technically”), this is not a very good solution. The problem is that if you tried to write down this list, you’d never even make it
through the rationals with denominator 1, since there are infinitely many! You would never, ever get to 1/2, let alone something like 43/19. We’d like our list to contain every positive rational at a
finite index: that is, if someone else (say, your evil, plaid-pants-wearing, math-hating arch-nemesis) picks a positive rational number, no matter what it is, you should be able to start writing out
your list of rationals from the beginning and eventually—as long as you keep writing for a long enough, yet finite, amount of time—write down the chosen rational. Obviously, our first try is a
failure in this department. That evil arch-nemesis has only to pick something like 1/3, and we’re sunk!
Now, you may think this is impossible: our first try failed, so it would seem, because there are “too many” rationals. We couldn’t even make it through all the rationals with denominator 1; how can
we possibly hope to rearrange things so that every rational occurs at a finite place in the list? Well, the first rule of dealing with infinity is: don’t trust your intuition when dealing with
infinity! As it turns out, there aren’t “too many” rationals, we just happened to put them in a bad order. To get a better sense of what’s going on here, think about making a list of all the positive
integers, like this: first, write down all the odd positive integers; then, write down all the even ones.
$\mathbb{Z} = 1, 3, 5, 7, 9, \dots, 2, 4, 6, 8, 10, \dots$
This list obviously includes all the positive integers, but it’s a bad list in the same way as our initial try at a list of rationals: if you tried to write it down, you would keep listing out odd
integers forever, without ever getting around to the even ones. But in this example, it’s easy to see that this can be fixed: everyone knows that if you write out the positive integers in order,
you’ll eventually get to any positive integer you want, as long as you keep writing long enough.
$\mathbb{Z} = 1, 2, 3, 4, 5, 6, 7, 8, 9, \dots$
Ah, that’s better!
Okay, so here’s another idea: list the positive rationals in order by the sum of their numerator and denominator. We can agree to list rationals with the same sum in order by their numerator. So, the
first rational in our list is the only one with a sum of 2, namely, 1/1. Next, the rationals with a sum of 3: 1/2, 2/1. Then 1/3, 2/2, 3/1, followed by 1/4, 2/3, 3/2, 4/1, and so on. This is the same
thing as putting the rationals in a grid, and listing them by “diagonals”, like this:
$\begin{array}{ccccc} 1/1 & 1/2 & 1/3 & 1/4 & \cdots \\ 2/1 & 2/2 & 2/3 & 2/4 & \cdots \\ 3/1 & 3/2 & 3/3 & 3/4 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{array}$
Pretty simple, right? Now, it’s easy to see that every positive rational is included in this list: the numerator and denominator of every positive rational sum to something (duh!), so any rational we
can think of will get included when we get to that sum. More exciting is the fact that there are only a finite number of rationals with any given sum (in fact, you can see that there are exactly
$n-1$ rationals with sum $n$), which means that every rational must occur at a finite place in the list! Hooray! (By the way, another way to say that we can make a list of the positive rational
numbers like this is that the positive rationals are countable. It is an astonishing fact (with a nifty proof) that although the rational numbers are countable, the real numbers aren’t — but that’s
for another post!)
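The diagonal listing is easy to generate mechanically. Here is a short Python sketch (the generator name is my own) that yields numerator/denominator pairs in exactly the order described — by sum first, then by numerator — with the duplicates like 2/2 still present:

```python
from itertools import count, islice

def rationals_by_sum():
    """Yield positive rationals (p, q) ordered by p + q, ties by numerator p."""
    for s in count(2):           # s = numerator + denominator, starting at 2
        for p in range(1, s):    # exactly s - 1 fractions for each sum s
            yield (p, s - p)

print(list(islice(rationals_by_sum(), 10)))
# [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1)]
```

Because each diagonal is finite, any pair (p, q) appears at a finite index — which is the whole point of the countability argument above.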
There’s still something unsatisfying about this solution, though: many numbers occur more than once. In fact, every number occurs infinitely many times! For example, 1/1, 2/2, 3/3, 4/4, and so on are
really all the same rational number (namely, 1) but they all occur separately in our list. The same is true of 1/2, 2/4, 3/6… and so on. Should we leave the repeats in? (Bleh.) Should we just say
that we won’t write down any numbers that have already occurred earlier? (Yuck.) Either way seems inelegant.
Is it too much to ask for a list of the positive rationals which includes each number exactly once, in lowest terms?
As you have probably guessed, it isn’t too much to ask—and that’s where Calkin and Wilf’s paper comes in! They describe an extremely elegant way to list all the positive rational numbers, which has
all the nice properties we’ve just talked about, and a few more besides. In the next post in this series, I’ll reveal what that list actually is, and then I’ll spend several posts talking about (and
proving) all of its nice properties. Along the way we’ll talk about binary trees and binary numbers, the principle of induction, recursion, and a bunch of other interesting things.
(PS — the Carnival of Mathematics is coming soon! There are still a few more hours left for last-minute submissions… I should have it up by midnight at the earliest or tomorrow morning at the latest.
Get excited! =)
19 Responses to Recounting the Rationals, part I
1. Pingback: Math in the News: Infinities « 360
(say, your evil, plaid-pants-wearing, math-hating arch-nemesis)
I think you can guess which part of this I object to.
I remember idling in Matt’s common room, looking at the first page of the math text Frank Morgan had given in its draft form to the class Matt was TAing. Was it Real Analysis? That’s where I
learned the rationals were countable; I remember my deep skepticism and then my admiration of the demonstration’s elegance, recalled now by your table above.
I also took note that this proof occurred on page two of the textbook, evidently meaning that far, far more difficult revelations than this stretch were to come . . .
3. Note that plaid PANTS are totally different than plaid SHIRTS. Like the difference between Darth Vader and Luke Skywalker. They both use the Force, but…
4. Is it true to say the Calkin and Wilf construction and the Stern Brocot construction as opposed to Cantors diagonal grid method for counting the rationals do not involve the Axiom of Choice?
5. Stuart: None of them require the Axiom of Choice. The Axiom of Choice states that every set has a well-ordering (a total order on the set under which every subset has a least element). But all
three of the methods you mention for enumerating the rationals define nice, well-behaved well-orderings. There is no need to invoke the Axiom of Choice in order to assert the existence of a
well-ordering if we can exhibit an actual, concrete one.
6. Thanks for that, I guess you have to expose your ignorance in order to learn sometimes. Where I was coming from was this quote (wikipedia) “In mathematics, the axiom of choice, or AC, is an axiom
of set theory. Informally put, the axiom of choice says that given any collection of bins, each containing at least one object, it is possible to make a selection of exactly one object from each
bin, even if there are infinitely many bins and there is no “rule” for which object to pick from each. The axiom of choice is not required if the number of bins is finite or if such a selection
“rule” is available.” I assumed that in order to ‘cross out’ the infinite numbers of repeat fractions eg 1/1, 2/2, 3/3,… one has to make a selection from an infinite number of ‘bins’, is this not
the case because as the quote says, there is a “rule”?
7. Sure, asking questions is a great way to learn, and necessarily questions expose your ignorance. Nothing wrong with that. =) And you’re exactly right: since there is a nice, well-defined “rule”
in this case that tells us which numbers to pick/cross out etc., we don’t need AC. You only need the Axiom of Choice if you simply want to assert the existence of a function which will
(arbitrarily) choose one thing from each bin, and you have no way of otherwise saying how such a function might actually be constructed. The situations where you actually need this are rather
esoteric. Interestingly, AC is “independent” of the other basic axioms of set theory (the so-called “Zermelo-Fraenkel” axioms)–that is, you can neither prove nor disprove AC from the other
axioms, and you can assume AC or you can assume its negation, and nothing breaks either way.
8. Pingback: Cardinality of infinite sets, part 1: four nonstandard proofs of countability « Division by Zero
9. Awesome post. Its been too long since I’ve enjoyed mathematics!
This entry was posted in counting, infinity, number theory, pattern, sequences.
Function Graphing Calculator
Quick-Start Guide
When you enter a function, the calculator will begin by expanding (simplifying) it. Next, the calculator will plot the function over the range that is given. Use the following guidelines to enter
functions into the calculator.
You must use a lowercase 'x' as the independent variable.
Exponents are supported on variables using the ^ (caret) symbol. For example, to express x squared, enter x^2. Note: exponents must be positive integers; no negatives, decimals, or variables. Exponents may
not currently be placed on numbers, brackets, or parentheses.
Parentheses and Brackets
Parentheses ( ) and brackets [ ] may be used to group terms as in a standard equation or expression.
Multiplication, Addition, and Subtraction
For addition and subtraction, use the standard + and - symbols respectively. For multiplication, use the * symbol. A * symbol is not necessary when multiplying a number by a variable. For instance:
2 * x can also be entered as 2x. Similarly, 2 * (x + 5) can also be entered as 2(x + 5); 2x * (5) can be entered as 2x(5). The * is also optional when multiplying parentheses, for example: (x + 1)(x - 1).
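The optional-* rule above is easy to mechanize. Here is a small Python sketch (a hypothetical preprocessor, not the site's actual parser) that inserts the implicit multiplication signs before an expression is evaluated:

```python
import re

def make_multiplication_explicit(expr):
    """Insert the '*' signs the calculator treats as optional.

    Hypothetical sketch covering the documented cases only:
    a number before a variable or parenthesis, a variable before
    a parenthesis, and back-to-back parentheses.
    """
    expr = re.sub(r'(\d)\s*([a-zA-Z(])', r'\1*\2', expr)   # 2x -> 2*x, 2( -> 2*(
    expr = re.sub(r'([a-zA-Z])\s*\(', r'\1*(', expr)       # x( -> x*(
    expr = re.sub(r'\)\s*([a-zA-Z0-9(])', r')*\1', expr)   # )( -> )*(
    return expr

print(make_multiplication_explicit("2x(5)"))           # 2*x*(5)
print(make_multiplication_explicit("(x + 1)(x - 1)"))  # (x + 1)*(x - 1)
```

Each substitution handles one documented case; explicit `*` symbols already present are left untouched.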
Order of Operations
The calculator follows the standard order of operations taught by most algebra books - Parentheses, Exponents, Multiplication and Division, Addition and Subtraction. The only exception is that
division is not currently supported; attempts to use the / symbol will result in an error.
Division, Square Root, Radicals, Fractions
The above features are not supported at this time. A future release will add this functionality. | {"url":"http://algebrahelp.com/calculators/function/graphing/","timestamp":"2014-04-19T14:31:06Z","content_type":null,"content_length":"10502","record_id":"<urn:uuid:7a5bc640-7962-4f7b-9ae1-926fc418c3f2>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
The State of Theory
There’s a lot of buzz in physics blogdom about the Strings 07 meeting, which starts today in Spain. They currently have a list of speakers, and promise slides and video to come.
Also, there’s a new paper by Edward Witten on the arxiv, cue sound of heavenly choirs:
We consider the problem of identifying the CFT’s that may be dual to pure gravity in three dimensions with negative cosmological constant. The c-theorem indicates that three-dimensional pure
gravity is consistent only at certain values of the coupling constant, and the relation to Chern-Simons gauge theory hints that these may be the values at which the dual CFT can be
holomorphically factorized. If so, and one takes at face value the minimum mass of a BTZ black hole, then the energy spectrum of three-dimensional gravity with negative cosmological constant can
be determined exactly. At the most negative possible value of the cosmological constant, the dual CFT is very likely the monster theory of Frenkel, Lepowsky, and Meurman. The monster theory may
be the first in a discrete series of CFT’s that are dual to three-dimensional gravity. The partition function of the second theory in the sequence can be determined on a hyperelliptic Riemann
surface of any genus. We also make a similar analysis of supergravity.
It’s 82 pages, so I don’t expect I’ll be reading it any time soon, but if that’s your sort of thing… Well, you probably don’t need me to tell you that there’s a new Witten paper available.
1. #1 Uncle Al June 25, 2007
No exercise of ink on paper can withstand a reproducible contrary observation. Theory is vassal to experiment. Two allowed and measurable empirical falsifications of metric gravitation remain:
1) Relativistic spin-orbit coupling in binary pulsar PSR J0737-3039A/B. 20 years of observation.
2) Equivalence Principle violation by identical composition, opposite geometric parity mass distributions. 2 days in an analytical lab. String theory is falsified as BRST invariance dies.
Quick, cheap, sensitive
Bill Gates pledges $40+ billion toward buying a Nobel Peace Prize. $5000 and a weekend could do Physics. At worst it SOP validates GR. Somebody should look.
2. #2 andy.s June 25, 2007
Just clicked on the arxiv link to verify. Ed Witten is the sole author, yet the abstract begins, “We consider…”.
So, is this a Royal ‘We’ or an Editorial ‘We’?
3. #3 Chad Orzel June 25, 2007
Jacques Distler is quasi-live-blogging the meeting, as well, so there are all sorts of Web resources for those who care.
4. #4 Torbjörn Larsson, OM June 25, 2007
is this a Royal ‘We’ or an Editorial ‘We’?
In Witten’s case, it is hard to know. Perhaps it is a new duality?
5. #5 Jonathan Vos Post June 25, 2007
Cool thread. Thanks for the heads up.
Laserprinted the new Witten paper. Carried it around the Caltech campus. First guy who discussed it with me was DINAKAR RAMAKRISHNAN, Professor of Mathematics
[Ph.D., Columbia University, N.Y., 1980;
Research Interests: Number theory, automorphic forms, algebraic geometry, representations of Lie and p-adic groups].
His position was, roughly, we all know that Witten is a great genius, in Math (which may or may not relate to our actual universe) and Physics (which may or may not relate to Math that we
Then he compared this and String Theory to Religion. Then he extolled the virtues of Hindu Mythology as Literature, but questioned the logic of anyone who actually believed it as applying to
human life in this specific part of the multiverse.
Then I went off and gave a laserprinted Science Fiction novel manuscript, not by me, about Nanotechnology, already sold (and sequel already sold), to a distinguished professor who does
Nanotechnology, in hopes of a sentence for the Press Packet.
C. P. Snow: “The Two Cultures” — demolished twice in an hour at one university campus.
Notions of powers
Over on Reddit there's a discussion where one commenter admitted:
"the whole (^) vs (^^) vs (**) [distinction in Haskell] confuses me."
It's clear to me, but it's something they don't teach in primary school, and it's something most programming languages fail to distinguish. The main problem, I think, for both primary ed and for
other PLs, is that they have an impoverished notion of what "numbers" could be, and this leads to artificially conflating things which should be kept distinct. I wrote a reply over on Reddit, hoping
to elucidate the distinction, but I thought I should repeat it in more persistent venue so it can reach a wider audience.
First, let us recall the basic laws of powers:
a^0 = e
a^1 = a
(a^x)^y = a^(x*y)
(a^x)*(a^y) = a^(x+y)
(a*b)^x = (a^x)*(b^x)
There are two very important things to notice here. First off, our underlying algebra (the as and bs) only needs to have the notion of multiplication, (*), with identity, e. Second, our powers (the
xs and ys) have an entirely different structure; in particular, they form at least a semiring (+,0,*,1). Moreover, if we're willing to give up some of those laws, then we can weaken these
requirements. For example, if we get rid of a^0 = e then we no longer need our underlying algebra to be a monoid, being a semigroup is enough. And actually, we don't even need it to be a semigroup.
We don't need full associativity, all we need for this to be consistent is power-associativity.
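Any monoid's power operation is just iterated multiplication with a natural-number counter. Here is a Python sketch of that idea (the post discusses Haskell, but the construction is language-independent; the function name is made up):

```python
def monoid_power(op, e, a, n):
    """a^n in any monoid (op, e): fold the multiplication n times.

    n must be a natural number -- negative powers need inverses,
    which a bare monoid does not have.
    """
    if n < 0:
        raise ValueError("monoid powers are only defined for naturals")
    result = e
    for _ in range(n):
        result = op(result, a)
    return result

# The same function serves any monoid:
print(monoid_power(lambda x, y: x * y, 1, 3, 4))      # 81 (integers under *)
print(monoid_power(lambda x, y: x + y, "", "ab", 3))  # 'ababab' (strings under
                                                      # concatenation, i.e. replicate)
```

Note that nothing here inspects `a` itself: the structure of the powers (naturals under addition) is entirely separate from the structure of the underlying algebra, just as in the laws above.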
So we can go weaker and more abstract, but let's stick here for now. Any time we have a monoid, we get a notion of powers for free. This notion is simply iterating our multiplication, and we use the
commutative semiring (Natural,+,0,*,1) in order to represent our iteration count. This is the notion of powers that Haskell's (^) operator captures. Unfortunately, since Haskell lacks a standard
Natural type (or Semiring class), the type signature for (^) lies and says we could use Integer (or actually, Num which is the closest thing we have to Ring), but the documentation warns that
negative powers will throw exceptions.
Moving on to the (^^) operator: suppose our monoid is actually a group, i.e. it has a notion of reciprocals. Now, we need some way to represent those reciprocals; so if we add subtraction to our
powers (yielding the commutative ring (Integer,+,-,0,*,1)), we get the law a^(-x) = 1/(a^x). The important thing here is to recognize that not all monoids form groups. For example, take the monoid of
lists with concatenation. We do have a (^) notion of powers, which may be more familiar as the replicate function from the Prelude. But, what is the reciprocal of a string? what is the inverse of
concatenation? The replicate function simply truncates things and treats negative powers as if they were zero, which is on par with (^) throwing exceptions. It is because not all monoids are groups
that we need a notion of powers for monoids (i.e., (^)) which is different from the notion of powers for groups (i.e., (^^)). And though Haskell fails to do so, we can cleanly capture this difference
in the type signatures for these operations.
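The monoid/group split can be made concrete in a Python sketch (again not Haskell; the names are illustrative): handing the power function an inverse operation is exactly what lets the exponent go negative, and it is the parameter the list monoid has no way to supply.

```python
from fractions import Fraction

def group_power(op, e, inv, a, n):
    """a^n in a group: natural powers by iteration, negatives via inverses."""
    if n < 0:
        return group_power(op, e, inv, inv(a), -n)  # a^(-n) = (1/a)^n
    result = e
    for _ in range(n):
        result = op(result, a)
    return result

# Nonzero rationals under * form a group, so negative powers work:
print(group_power(lambda x, y: x * y, Fraction(1), lambda x: 1 / x,
                  Fraction(2), -3))   # 1/8

# Strings under concatenation are only a monoid: there is no `inv` to pass,
# which is precisely why replicate must truncate negative counts instead.
```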
Further up, we get another notion of powers which Haskell doesn't highlight; namely the notion of powers that arises from the field (Rational,+,-,0,*,/,1). To get here, we take our group and add to
it the ability to take roots. The fractions in powers are now taken to represent the roots, as in the law a^(1/y) = root y a. Again note that there's a vast discrepancy between our underlying algebra
which has multiplication, reciprocals, and roots vs our powers which have addition, subtraction, multiplication, and division.
Pulling it back a bit, what if our monoid has a notion of roots, but does not have inverses? Here our powers form a semifield; i.e., a commutative semiring with multiplicative inverses; e.g., the
non-negative rationals. This notion is rather obscure, so I don't fault Haskell for lacking it, though it's worth mentioning.
Finally, (**) is another beast altogether. In all the previous examples of powers there is a strong distinction between the underlying algebra and the powers over that algebra. But here, we get
exponentiation; that is, our algebra has an internal notion of powers over itself! This is remarkably powerful and should not be confused with the basic notion of powers. Again, this is easiest to
see by looking at where it fails. Consider multiplication of (square) matrices over some semiring. This multiplication is associative, so we can trivially implement (^). Assuming our semiring is
actually a commutative ring then almost all (though not all) matrices have inverses, so we can pretend to implement (^^). For some elements we can even go so far as taking roots, though we run into
the problem of there being multiple roots. But as for exponentiation? It's not even clear that (**) should be meaningful on matrices. Or rather, to the extent that it is meaningful, it's not clear
that the result should be a matrix.
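To see how far the generic notion reaches, here is the matrix case as a Python sketch (illustrative, pure-Python 2x2 matrices): (^) is just the monoid power yet again, while nothing in this construction suggests what a matrix-valued (**) should return.

```python
def mat_mul(A, B):
    """Multiply two 2x2 matrices (entries from any commutative ring)."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat_power(A, n):
    """A^n for natural n: the free monoid power, nothing matrix-specific."""
    result = [[1, 0], [0, 1]]  # identity of the matrix monoid
    for _ in range(n):
        result = mat_mul(result, A)
    return result

print(mat_power([[1, 1], [0, 1]], 3))  # [[1, 3], [0, 1]] -- a shear, cubed
```

Negative powers would additionally require an inverse (possible only when the determinant is invertible), and roots already run into multiplicity; (**) asks for even more structure than that.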
N.B., I refer to (**) as exponentials in contrast to (^), (^^), etc as powers, following the standard distinction in category theory and elsewhere. Do note, however, that this notion of exponentials
is different again from the notion of the antilog exp, i.e. the inverse of log. The log and antilog are maps between additive monoids and multiplicative monoids, with all the higher structure arising
from that. We can, in fact, give a notion of antilog for matrices if we assume enough about the elements of those matrices.
no subject
Date: 2013-07-21 04:34 pm (UTC)
From: w3future.blogspot.com
Since log and antilog are defined for matrices, it is possible to define x ** y for matrices as exp (log x * y) or as exp (y * log x). Do either of those make sense?
no subject
Date: 2013-07-22 04:40 pm (UTC)
From: twanvl
The way I think about the three power operators in Haskell is by the simplest types to which they can be applied on the left hand side. (^) is for Integer, (^^) is for Rational, (**) is for Real. The
simplest type for which a Rational exponent makes sense are the algebraic numbers. | {"url":"http://winterkoninkje.dreamwidth.org/85045.html","timestamp":"2014-04-21T12:10:11Z","content_type":null,"content_length":"53450","record_id":"<urn:uuid:7ad889e7-7f2d-4ab7-9b75-4250f7557a8f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00050-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cypress, CA Trigonometry Tutor
Find a Cypress, CA Trigonometry Tutor
...I have tutored high school students in Algebra, Calculus, Trig and Geometry, and college students in Linear Algebra and Advanced Calculus. Testimonials: "I just wanted to say thank you for all
the effort you put into teaching me chemistry, physics, and math. I learned a lot this summer thanks to you….You are a truly great teacher."-- G.R.
24 Subjects: including trigonometry, chemistry, calculus, geometry
...After 5 years of more or less continuous studying (I started Business School in 2008, graduated in 2011 and immediately afterwards pursued the CFA charter) I am enjoying the study-free time. I
would like to use some of this time to give back and help students in subjects I know well. I have tutored many students overt the years.
24 Subjects: including trigonometry, statistics, geometry, accounting
...When used for admission by independent schools, the test is only one piece of information that is considered. Schools also review the applicant’s school grades, extracurricular participation,
teacher recommendations, essays and interview results. SSAT scores, however, do carry some weight in varying degrees among independent schools.
18 Subjects: including trigonometry, geometry, ASVAB, GRE
...I have an engineering degree from MIT. I taught a summer program multiple times to incoming 8th graders. I have also tutored different math subjects privately in the past.
11 Subjects: including trigonometry, physics, calculus, geometry
...I've simulated synchronization properties of nano-cantilevers, created quasicrystals, and simulated ion trap dynamics. I spend approximately 5-7 hours a day coding in MATLAB. I have taken a
class on numerical methods at Caltech that was done half in mathematica, half in Matlab.
26 Subjects: including trigonometry, calculus, geometry, physics
Universal LED driver for industrial control applications
Thu, 04/07/2011 - 10:36am
Using light emitting diodes (LED) for automation equipment as indicator lights (lighted buttons and switches) often requires multiple modifications of the same control device depending on the
available supply voltage. This approach severely complicates the logistics and affects the product cost. Moreover, some indicator light applications (elevator buttons, for example) require switching
from the AC mains to a low-voltage backup battery supply for safety reasons. The engineers come back to this problem again and again, how to design a universal LED indicator lamp capable of operation
in the wide range of input voltage from 24VDC to 265VAC. The conventional power supply topologies typically used for LED driver design, such as buck and flyback converters, are unable to satisfy
these input conditions due to the excessive duty cycle range required. The goal of the present paper is to analyze a power topology which could be used as a universal LED indicator driver.
Quadratic buck converter
So-called “quadratic” converters [1] are known from the literature as power topologies featuring a wide range of the voltage conversion ratio m = Vo/Vg within a reasonable range of the duty cycle D = TON·fSW. (Here and below, Vo and Vg are the output and the input voltage correspondingly, TON is the on-time of the switch, and fSW is the switching frequency.) The example of such power topology is given in Fig.1, which depicts a 2-stage buck converter with peak-current control feedback.
In the conductive state of Q1, the path for the inductor L2 current is Q1-C1-D2-LED. Therefore, a positive voltage Vc-Vo develops across the inductor, and the current in L2 increases. At the same
time, the current in the inductor L1 flows through the diode D2 in the opposite direction via the path C1-L1-DC-D2, and the current in L1 increases due to the positive voltage drop Vg-Vc. Of course,
it’s been assumed that the current in L1 is less than the current in L2 in magnitude, which is true practically always, under a condition of continuous conduction of L1 and L2. During this state, the
diodes D1 and D3 are reverse-biased and non-conductive.
In the non-conductive state of Q1, the path for the inductor L1 current is C1-D1, and the current in L2 flows through Co-D3, both currents decreasing. Thus, the cascade connection of the two buck
stages is implemented, and the conversion ratio can be expressed as:

m = Vo / Vg = D²     (1)

Such a quadratic characteristic provides a wide range of voltage conversion ratio in the discussed power converter topology.
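The practical payoff of the quadratic (D-squared) characteristic is duty-cycle compression. A quick Python check with illustrative numbers — the 6 V LED string and the 24 V / 375 V input extremes are assumptions for the sketch, not values from the article:

```python
import math

def buck_duty(vo, vg):
    """Single-stage buck: m = D, so D = Vo/Vg."""
    return vo / vg

def quadratic_duty(vo, vg):
    """Quadratic (two-stage) buck: m = D^2, so D = sqrt(Vo/Vg)."""
    return math.sqrt(vo / vg)

VO = 6.0                  # assumed LED string voltage
for vg in (24.0, 375.0):  # assumed extremes: 24 VDC up to ~265 VAC peak
    print(f"Vg={vg:5.0f} V  buck D={buck_duty(VO, vg):.3f}  "
          f"quadratic D={quadratic_duty(VO, vg):.3f}")

# Single-stage buck: D must span ~16:1 (0.250 down to 0.016).
# Quadratic buck:    D spans only ~4:1 (0.500 down to 0.127).
```

The square root compresses the required duty-cycle range to the square root of the input-voltage range, which is what makes a single design workable from 24 VDC to a rectified 265 VAC line.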
Note also, that the switch Q1 conducts the inductor L2 current only, and the peak current in Q1 does not depend on the current in L1. Hence, a simple peak-current control of the switch Q1 can achieve
good regulation of the LED current.
Steady-state considerations
It should be noted, that the input stage (D1, D2, L1, C1) is loaded by the current equal Io•D, and, correspondingly, the voltage across C1 equals Vo/D. Therefore, for continuous conduction mode (CCM)
of L1, higher inductance value is required compared to the value of L2. Operating L1 in discontinuous conduction mode (DCM) for reducing its value is possible. However, this operating mode limits the
dynamic range of the input voltage, and, therefore, it will not be analyzed within this paper.
The duty cycle D is dictated by the given range of Vg and Vo, and, in many cases, it exceeds 0.5. To avoid the subharmonic instability [2] under D>0.5, we will only consider a constant off-time
switching mode of Q1. In this case, the inductances of L1 and L2 are given by the equations:
where k1 and k2 are the relative ripple coefficients k = ΔI/I in the inductors L1 and L2.
Correspondingly, the inductors L1 and L2 should be designed for peak currents of
We neglected the current ripple in L1 in the equation (4) due to its small magnitude at Vg(min). The diodes D1 and D2 should be rated for the reverse voltage Vr=Vg(max). The reverse voltage at D3,
generally speaking, equals the voltage at C1:
However, a certain margin is needed to account for the voltage spike due to charge redistribution between parasitic capacitances, when Q1 becomes conductive. The switch Q1 itself should be rated for
the drain-source voltage of:
The value of the capacitor C1 will be discussed in the next section.
Stability considerations
Stability of the circuit given in Fig.1 can be analyzed using a large signal average model shown in Fig.2. Since the peak current in L2 is unchanged in each individual switching cycle, in our model,
we replaced L2 by a constant current source equal Io. Hence, the load of the input buck stage is modeled by a depending current source Io•D. On the other hand, the circuit in the lower part of Fig.2
reflects the mechanism of duty cycle modulation in accordance with the voltage transfer function of the output buck stage:
where Δv is a small perturbation of the voltage at C1.
Analysis of the model given in Fig.2 for small signals produces the open-loop transfer function given by:
From (9), the DC voltage gain equals 1. However, the transfer function includes a resonant double pole and a right-half-plane zero (RHPZ). The equation (9), therefore, shows that stability of the quadratic converter cannot be achieved without damping of the input buck stage, since the resonant double pole and the RHP zero in the transfer function result in a 270° phase lag.
To achieve stability, a parallel damping network (Rd, Cd) can be used. The corresponding large-signal model is given in Fig.3. Analysis of this model produces the open-loop transfer function in the form:

The equation (10) gives a regular zero, an RHP zero and three poles. The first zero and the first pole almost coincide near the frequency 1/(2πRdCd), cancelling each other. Therefore, the equation (10) can be simplified, almost without any sacrifice, to the following form:
where n=Cd/C1.
Good results could be achieved with the following simple estimate, assuming critical damping of the resonant pole. In this case, the damping coefficient is assumed to equal ½.
Let us also assume that
To achieve the desired damping factor, the capacitance Cd must be selected much greater than C1, i.e. n>>1.
The value of the damping resistor Rd can be obtained from the equation (12).
Practical implementation
The quadratic buck converter described above can be easily implemented using the LED driver IC HV9921 by Supertex Inc. [3]
The HV9921 is a switching peak-current regulator IC operating in the fixed off-time mode with TOFF=10µs. The IC is powered through the switching MOSFET drain terminal. This allows using the HV9921 as
the switch Q1 without any additional external circuitry. The driver circuit shown in Fig.4 provides tight regulation of the LED current and operates over a wide range of DC input voltage from 24V to
400V, as well as AC line voltage up to 265V(rms).
[1] Dragan Maksimović and Slobodan Ćuk, “Switching Converters with Wide Conversion Range,” IEEE Transactions on Power Electronics, Vol. 6, No. 1, Jan. 1991, pages 151-157;
[2] Supertex Inc, “Constant, Off-time, Buck-based, LED Drivers Using the HV9910B,” Application Note AN-H50, page 2;
[3] Supertex Inc, “3-Pin Switch-Mode LED Lamp Driver IC, HV9921,” Datasheet. | {"url":"http://www.ecnmag.com/articles/2011/04/universal-led-driver-industrial-control-applications","timestamp":"2014-04-17T13:18:59Z","content_type":null,"content_length":"78262","record_id":"<urn:uuid:ca9723f4-6126-4444-aad5-000e89445c06>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00513-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATH 26 – Elementary Algebra
Credits: 5
Class hours: 5 lecture
Prereq: “C” or higher in MATH 21 or MATH 22 or acceptable math placement test score (P47-100 or A 0-33 COMPASS).
Description: MATH 26 covers topics including a review of operations with real numbers, exponents, absolute values, and simplifying mathematical expressions using order of operations; solving
linear equations and inequalities; formulas and applications of algebra; graphing linear equations; systems of linear equations; exponents and polynomials; factoring; rational expressions and
equations; roots and radicals; and solving and graphing quadratic equations.
Fall, Spring | {"url":"http://info.kauaicc.hawaii.edu/courses/math26.htm","timestamp":"2014-04-19T19:34:53Z","content_type":null,"content_length":"5663","record_id":"<urn:uuid:d06b3677-2c34-4cdc-b6e5-d31270bf7690>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00524-ip-10-147-4-33.ec2.internal.warc.gz"} |
Meeting Details
Title: Hausdorff dimension of the set of Singular Vectors
Seminar: Center for Dynamics and Geometry Seminars
Speaker: Yitwah Cheung (SFSU)
Singular vectors in R^d correspond to divergent trajectories of the homogeneous flow on SL(d+1,R)/SL(d+1,Z) induced by the one parameter subgroup diag(e^t,...,e^t,e^{-dt}) acting by left
multiplication. In this talk, I will sketch a proof of the following result: the Hausdorff dimension of the set of singular vectors in R^2 is 4/3. (Alternatively, the set of points lying on divergent
trajectories of the homogeneous flow on SL(3,R)/SL(3,Z) has Hausdorff dimension 7 and 1/3.) The main idea involves a multi-dimensional generalisation of continued fraction theory from the perspective
of the best approximation properties of convergents. As an application, we answer a question of A.N. Starkov regarding the existence of slowly divergent trajectories.
Room Reservation Information
Room Number: MB106
Date: 11 / 05 / 2007
Time: 03:35pm - 05:40pm | {"url":"http://www.math.psu.edu/calendars/meeting.php?id=391","timestamp":"2014-04-20T08:33:36Z","content_type":null,"content_length":"3820","record_id":"<urn:uuid:57c11557-b397-4587-905d-0f281b87e762>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00176-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pi Day 2013: Pi Trivia, Pi Videos, Pi Songs & A Pi Infographic
What is it about this one irrational number that makes tech geeks go nuts? Every year on Pi Day - that's today, March 14 (you know, 3.14) - there are Pi parties, Pi recitation contests and, of
course, pie for Pi.
The whole thing really goes nuts on March 14 at 1:59 a.m. and p.m. local time - because, after all, Pi is 3.14159… The festivities - particularly the big mama of them all at the San Francisco
Exploratorium museum going on today - would make Archimedes proud. The ancient Greek mathematician gets credit for popularizing the mathematical constant, references to which can be found as early as
biblical times. And Albert Einstein, born March 14, would be psyched, too. He couldn't have calculated gravitational field theories without Pi, now could he?
Check out the infographic below for a ton of facts about pi - the only error I found in it is the fact that Archimedes was its publicist, not its identifier. Pi is far, far older than even that
ancient Greek. (And don't stop there, more Pi stuff below...)
Can't get enough? Try these Pi facts on for size. There are Pi clubs around the world - including Japan - where people competitively recite as many digits of Pi as they can. I actually saw a guy
named Hiroyuki Goto recite Pi from memory on stage at the NHK Broadcasting Center in Tokyo. It was at once fascinating and boring. In 112 hours, he recited Pi accurately to 42,195 places.
Did you know you can use Pi to calculate everything from the circle the size of the universe to the height of an elephant to your hat size?
Hats, Elephants And The Entire Universe
For the hat size, measure the circumference of your head, divide the result by Pi, and round the answer off to an eighth of an inch.
To figure out how tall a particular elephant is, just measure the diameter of its foot and multiply the result by two. Then multiply that result by Pi.
As for the universe, a mathematician I interviewed years ago told me it was possible to calculate a circle the size of the entire known universe down to a proton. And supposedly you'd need only the
first 39 digits of Pi to do it. You can't make this stuff up.
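Both party tricks are one-liners; here is a Python sketch of the rough formulas quoted above (the sample measurements are made up):

```python
import math

def hat_size(head_circumference_in):
    """Hat size ~ head circumference / pi, rounded to the nearest 1/8 inch."""
    return round(head_circumference_in / math.pi * 8) / 8

def elephant_height(foot_diameter):
    """Shoulder height ~ 2 * pi * foot diameter (same units as the input)."""
    return 2 * math.pi * foot_diameter

print(hat_size(23))                    # 7.375 -- a common adult hat size
print(round(elephant_height(1.4), 1))  # ~8.8 ft for a 1.4 ft foot diameter
```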
There are also plenty of conspiracy theories surrounding Pi. With a number infinitely long, geeks are always looking for their own social security numbers, password ideas and repetitive structures
that might suggest something strange is going on with this crazy constant. Some readers will be relieved to learn that Satan — if indeed the signature for the demon is 666 — doesn’t make an
appearance until position 2240.
The most interesting thing to note about Pi is its amazing flexibility. Pi is employed in harmonic motion theory, superstring equations and, as mentioned above, Einstein's gravitational theory.
Pi Videos
As promised, here are two videos that are perfect for celebrating Pi. The first shows Pi as it would sound set to music. The second is an explainer of Einstein's relativity theory and its real-life
applications. It is, after all, not just Pi Day. It's Einstein Day, too.
The sound of Pi:
Here's Einstein and the theory of relativity explained, excellently, by the fine folks at The Science Channel.
Happy Pi Day. And Happy Birthday, Dr. Einstein. Wish you were here.
Lead image courtesy of Shutterstock. | {"url":"http://readwrite.com/2013/03/14/pi-day-2013-trivia-pi-videos-pi-songs-and-a-pi-infographic","timestamp":"2014-04-21T09:39:17Z","content_type":null,"content_length":"44584","record_id":"<urn:uuid:dfaaf7c4-ced8-4e62-8339-96d38782ae72>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00363-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating probability of an event
October 15th 2009, 10:00 PM #1
Junior Member
Jan 2009
Calculating probability of an event
Find the probability of the following event :
$P(A \prime \cap C | B\prime )$
Attempt @ solution :
$P(A \prime \cap C | B\prime ) =\frac{P ( A \prime \cap C \cap B \prime)}{P(B \prime)}= \frac{P ( A \prime \cap C \cap B \prime)}{1-P(B)}$
I am stuck on how to deal with $P ( A \prime \cap C \cap B \prime)$. Is there a way I can simplify this thing?
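In general there is no further algebraic collapse: P(A'∩C∩B') has to come from the joint distribution (or from independence assumptions, if the problem grants them). One way to convince yourself the conditional-probability step is right is brute-force enumeration over a small sample space; here is a Python sketch with a made-up joint distribution:

```python
from itertools import product

# Made-up joint distribution over the eight (A, B, C) outcomes;
# the weights are arbitrary and sum to 1.
weights = {abc: w for abc, w in zip(product([True, False], repeat=3),
                                    [0.10, 0.15, 0.05, 0.20,
                                     0.12, 0.08, 0.18, 0.12])}

def prob(pred):
    """P(event) = total weight of the outcomes satisfying the predicate."""
    return sum(w for abc, w in weights.items() if pred(*abc))

# P(A' n C | B') computed two ways, matching the two fractions above:
lhs = prob(lambda a, b, c: (not a) and c and (not b)) / prob(lambda a, b, c: not b)
rhs = prob(lambda a, b, c: (not a) and c and (not b)) / (1 - prob(lambda a, b, c: b))
print(lhs, rhs)  # identical, since P(B') = 1 - P(B)
```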
I'm not sure what you want to do here.
you can draw a venn diagram and see that
But I'm not sure why you would want to do that.
October 15th 2009, 11:00 PM #2 | {"url":"http://mathhelpforum.com/advanced-statistics/108362-calculating-probability-event.html","timestamp":"2014-04-17T15:44:17Z","content_type":null,"content_length":"33918","record_id":"<urn:uuid:a7b0b2df-a92a-421a-ac50-57a0ed9663ef>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00498-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics in Action : An Introduction to Algebraic, Graphical, and Numerical Problem Solving
ISBN: 9780201660418 | 0201660415
Format: Paperback
Publisher: Addison Wesley
Pub. Date: 1/1/2001
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help | {"url":"http://www.knetbooks.com/mathematics-action-introduction-algebraic/bk/9780201660418","timestamp":"2014-04-18T00:46:41Z","content_type":null,"content_length":"48620","record_id":"<urn:uuid:2928399f-4d23-4729-bbca-26c9011ca3e1>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
Roslyn Heights Algebra 2 Tutor
Find a Roslyn Heights Algebra 2 Tutor
...It is easy for me to get along with people and help them with any problems they have, in education and even personally if necessary. A little about me, I grew up in New York with great family
and friends. I am responsible, hardworking, caring, and a great listener.
29 Subjects: including algebra 2, English, chemistry, geometry
...I finished my degree at Princeton in 2006 majoring in Politics specializing in Political Theory and American Politics so I'm very well equipped to tutor social studies and history along with
related fields. I also took many classes in English at college so I can work with students in that area t...
40 Subjects: including algebra 2, chemistry, English, reading
...I have held various teaching positions in different schools in New York City and I am currently serving as a substitute teacher in a rural school district in Long Island. I have experience with
various reading programs such as System 44, Read 180, and Journeys. I have experience working with students who needed additional support in phonics, spelling, comprehension, and phonemic awareness.
37 Subjects: including algebra 2, English, reading, writing
...The TAKS exam is very similar to the upper level ISEE exam. It too consists of verbal and mathematical sections. The math sections also include information about learning sequences.
15 Subjects: including algebra 2, chemistry, physics, calculus
...As an educator, I have spent the last 6 years working with special needs children. I have helped many students on the Autism spectrum. My interest in Autism and other developmental problems in
children has taken me back to school to study communication disorders, specifically language disorders...
39 Subjects: including algebra 2, reading, geometry, English | {"url":"http://www.purplemath.com/Roslyn_Heights_algebra_2_tutors.php","timestamp":"2014-04-16T04:22:36Z","content_type":null,"content_length":"24294","record_id":"<urn:uuid:df25bbcb-76a7-4fcf-93f8-d483899fc609>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00178-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 11 - 20 of 23
, 1995
"... . The Generalized Minimal Residual Method (GMRES) is one of the significant methods for solving linear algebraic systems with nonsymmetric matrices. It minimizes the norm of the residual on the
linear variety determined by the initial residual and the n-th Krylov residual subspace and is therefore ..."
Cited by 5 (1 self)
. The Generalized Minimal Residual Method (GMRES) is one of the significant methods for solving linear algebraic systems with nonsymmetric matrices. It minimizes the norm of the residual on the
linear variety determined by the initial residual and the n-th Krylov residual subspace and is therefore optimal, with respect to the size of the residual, in the class of Krylov subspace methods.
One possible way of computing the GMRES approximations is based on constructing the orthonormal basis of the Krylov subspaces (Arnoldi basis) and then solving the transformed least squares problem.
This paper studies the numerical stability of such formulations of GMRES. Our approach is based on the Arnoldi recurrence for the actually, i.e. in finite precision arithmetic, computed quantities.
We consider the Householder (HHA), iterated modified GramSchmidt (IMGSA), and iterated classical Gram-Schmidt (ICGSA) implementations. Under the obvious assumption on the numerical nonsingularity of
the system m...
"... The Generalized minimal residual method (GMRES) is known as an efficient iterative method for solving large nonsymmetric systems of linear equations. In this thesis, we study numerical stability
of the GMRES method. For the construction of the Arnoldi basis, we consider the Householder orthogonaliza ..."
Cited by 2 (1 self)
The Generalized minimal residual method (GMRES) is known as an efficient iterative method for solving large nonsymmetric systems of linear equations. In this thesis, we study numerical stability of
the GMRES method. For the construction of the Arnoldi basis, we consider the Householder orthogonalization and the frequently used modified Gram-Schmidt process. While for the more expensive
Householder implementation the orthogonality of the computed basis is preserved close to the machine precision level, for the modified Gram-Schmidt Arnoldi process the computed vectors gradually lose
their orthogonality. Using the bound on the loss of orthogonality, it is proved that, under certain assumptions on the numerical nonsingularity of the system matrix, the GMRES implementation based on
the Householder orthogonalization is backward stable. It produces an approximate solution with the residual which is of the same order as that one obtained from the direct solving of the system Ax =
b by the Ho...
- In Proc. IEEE Conf. on Decision and Control (Submitted , 2012
"... Abstract — There is a need for high speed, low cost and low energy solutions for convex quadratic programming to enable model predictive control (MPC) to be implemented in a wider set of
applications than is currently possible. For most quadratic programming (QP) solvers the computational bottleneck ..."
Cited by 2 (2 self)
Abstract — There is a need for high speed, low cost and low energy solutions for convex quadratic programming to enable model predictive control (MPC) to be implemented in a wider set of applications
than is currently possible. For most quadratic programming (QP) solvers the computational bottleneck is the solution of systems of linear equations, which we propose to solve using a fixed-point
implementation of an iterative linear solver to allow for fast and efficient computation in parallel hardware. However, fixed point arithmetic presents additional challenges, such as having to bound
peak values of variables and constrain their dynamic ranges. For these types of algorithms the problems cannot be automated by current tools. We employ a preconditioner in a novel manner to allow us
to establish tight analytical bounds on all the variables of the Lanczos process, the heart of modern iterative linear solving algorithms. The proposed approach is evaluated through the
implementation of a mixed precision interior-point controller for a Boeing 747 aircraft. The numerical results show that there does not have to be a loss of control quality by moving from
floating-point to fixed-point. I.
, 1997
"... The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the
solution of linear systems, by solving the reduced system in one way or another. This leads to well-known ..."
Cited by 1 (0 self)
The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the
solution of linear systems, by solving the reduced system in one way or another. This leads to well-known methods: MINRES (GMRES), CG, CR, and SYMMLQ. We will discuss in what way and to what extent
the various approaches are sensitive to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods (except CR), and we will
not consider the errors in the Lanczos process itself. These errors may lead to large perturbations with respect to the exact process, but convergence takes still place. Our attention is focussed to
what happens in the solution phase. We will show that the way of solution may lead, under circumstances, to large additional errors, that are not corrected by continuing the iteration process. Our
findings are...
, 1994
"... It is well-known that Bi-CG can be adapted so that hybrid methods with computational complexity almost similar to Bi-CG can be constructed, in which it is attempted to further improve the
convergence behavior. In this paper we will study the class of BiCGstab methods. In many applications, the spe ..."
Cited by 1 (1 self)
It is well-known that Bi-CG can be adapted so that hybrid methods with computational complexity almost similar to Bi-CG can be constructed, in which it is attempted to further improve the convergence
behavior. In this paper we will study the class of BiCGstab methods. In many applications, the speed of convergence of these methods appears to be determined mainly by the incorporated Bi-CG process,
and the problem is that the Bi-CG iteration coefficients have to be determined from the BiCGstab process. We will focus our attention to the accuracy of these Bi-CG coefficients, and how rounding
errors may affect the speed of convergence of the BiCGstab methods. We will propose a strategy for a more stable determination of the Bi-CG iteration coefficients and by experiments we will show that
this indeed may lead to faster convergence.
"... Abstract—In this paper, we present an efficient method to solve the coupled circuit-field problem, by first transforming the partial differential equations (PDEs) governing the field problem
into a simple one–dimensional (1-D) equivalent circuit system, which is then combined with the circuit part o ..."
Cited by 1 (1 self)
Abstract—In this paper, we present an efficient method to solve the coupled circuit-field problem, by first transforming the partial differential equations (PDEs) governing the field problem into a
simple one–dimensional (1-D) equivalent circuit system, which is then combined with the circuit part of the overall coupled problem. This transformation relies on the generalized Falk algorithm,
which transforms the coordinates in any complex system of linear first-order ordinary differential equations (ODEs) or second-order undamped ODEs, resulting from the discretization of field PDEs,
into guaranteed stable-and-passive 1-D equivalent circuit system. The generalized Falk algorithm, having a faster transformation time compared with the traditional Lanczos-type methods, transforms a
general finite-element system represented by possibly a system of full matrices—capacitance and conductance matrices in heat problems, or mass and stiffness matrices in structural dynamics and
electromagnetics—into an identity capacitance (mass) matrix and a tridiagonal conductance (stiffness) matrix. We also discuss issues related to the stability and the loss of orthogonality of the
proposed algorithm. In circuit simulation, the generalized Falk algorithm does not produce unstable positive poles, and is thus more stable than the widely used Lanczos-type methods. The stability
and passivity of the resulting 1-D equivalent circuit network are guaranteed since all transformed matrices remain positive definite. The resulting 1-D equivalent circuit system contains only
resistors, capacitors, inductors, and current sources. The generalized Falk algorithm offers an extremely simple and convenient way to incorporate field problems into circuit simulators to
efficiently solve coupled circuit-field problems. Numerical examples show a significant reduction of simulation time compared to the solution without using the proposed transformation.
, 1993
"... When solving PDE's by means of numerical methods one often has to deal with large systems of linear equations, specifically if the PDE is time-independent or if the time-integrator is implicit.
For real life problems, these large systems can often only be solved by means of some iterative method. Ev ..."
Cited by 1 (0 self)
When solving PDE's by means of numerical methods one often has to deal with large systems of linear equations, specifically if the PDE is time-independent or if the time-integrator is implicit. For
real life problems, these large systems can often only be solved by means of some iterative method. Even if the systems are preconditioned, the basic iterative method often converges slowly or even
diverges. We discuss and classify algebraic techniques to accelerate the basic iterative method. Our discussion includes methods like CG, GCR, ORTHODIR, GMRES, CGNR, Bi-CG and their modifications
like GMRESR, CG-S, BiCGSTAB. We place them in a frame, discuss their convergence behavior and their advantages and drawbacks. 1 Introduction Our aim is to compute acceptable approximations for the
solution x of the equation Ax = b; (1) where A and b are given, A is a non-singular n \Theta n-matrix, A is sparse, n is large and b an n-vector. We will assume A and b to be real, but our methods
are easily ...
, 1994
"... . Many iterative methods for solving linear equations Ax = b aim for accurate approximations to x, and they do so by updating residuals iteratively. In finite precision arithmetic, these
computed residuals may be inaccurate, that is, they may differ significantly from the (true) residuals that corre ..."
. Many iterative methods for solving linear equations Ax = b aim for accurate approximations to x, and they do so by updating residuals iteratively. In finite precision arithmetic, these computed
residuals may be inaccurate, that is, they may differ significantly from the (true) residuals that correspond to the computed approximations. In this paper we will propose variants on Neumaier's
strategy, originally proposed for CGS, and explain its success. In particular, we will propose a more restrictive strategy for accumulating groups of updates for updating the residual and the
approximation, and we will show that this may improve the accuracy significantly, while maintaining speed of convergence. This approach avoids restarts and allows for more reliable stopping criteria.
We will discuss updating conditions and strategies that are efficient, lead to accurate residuals, and are easy to implement. For CGS and Bi-CG these strategies are particularly attractive, but they
may also be used t...
"... In various applications, it is necessary to keep track of a low-rank approximation of a covariance matrix, R(t), slowly varying with time. It is convenient to track the left singular vectors
associated with the largest singular values of the triangular factor, L(t), of its Cholesky factorization. Th ..."
In various applications, it is necessary to keep track of a low-rank approximation of a covariance matrix, R(t), slowly varying with time. It is convenient to track the left singular vectors
associated with the largest singular values of the triangular factor, L(t), of its Cholesky factorization. These algorithms are referred to as “squareroot.” The drawback of the Eigenvalue
Decomposition (EVD) or the Singular Value Decomposition (SVD) is usually the volume of the computations. Various numerical methods carrying out this task are surveyed in this paper, and we show why
this admittedly heavy computational burden is questionable in numerous situations and should be revised. Indeed, the complexity per eigenpair is generally a quadratic function of the problem size,
but there exist faster algorithms whose complexity is linear. Finally, in order to make a choice among the large and fuzzy set of available techniques, comparisons are made based on computer
simulations in a relevant signal processing context. I. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=360636&sort=cite&start=10","timestamp":"2014-04-18T22:03:54Z","content_type":null,"content_length":"39289","record_id":"<urn:uuid:1a7c9614-5008-4a59-b774-8c20f8935c81>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00565-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interval Queries in SQL Server
Use the RI-tree model to improve your interval query performance
In mathematics, an interval is the set of values between some lower value and some upper value. There are different types of intervals that you might need to represent in your database, such as
temporal (e.g., sessions, contracts, projects, appointments, periods of validity), spatial (e.g., segments of a road), and numeric (e.g., temperature ranges).
Starting with SQL Server 2008, you can represent spatial intervals using the spatial data types GEOMETRY and GEOGRAPHY and operate on those types with methods. There's also indexing and optimization
support for spatial queries.
As for other types of intervals, such as the very common temporal type, SQL Server doesn't yet provide any special support. Most people represent such intervals with two attributes holding the lower
and upper values and use predicates involving those attributes for interval-related querying. There's no special indexing or optimization support for intervals.
Interval-related querying involves identifying common relations between intervals. One of the most common queries is to check whether two intervals intersect (e.g., return all sessions that were
active during a certain period of time represented by the inputs @l and @u). A test for intersection is a composition of eleven out of the thirteen relations defined by James F. Allen. Specifically,
the eleven relations are meets, overlaps, finished-by, contains, starts, equals, started-by, during, finishes, overlapped-by, and met-by. Unfortunately, the classic methods people use to identify
interval intersection and some of the other relations suffer from fundamental optimization problems. The result is that interval-related queries tend to perform very inefficiently.
But all hope is not lost—a group of researchers from the University of Munich (Hans-Peter Kriegel, Marco Pötke, and Thomas Seidl) created an ingenious model called the Relational Interval Tree
(RI-tree), and Laurent Martin from France added some further improvements, allowing you to efficiently handle intervals in SQL Server. However, the model and the further optimizations involve some
math and computer science that could be a bit complex for some people.
The potential exists to integrate the model within the SQL Server database engine and make it seamless to the user. As a user, you would use basic syntax to create a new kind of index and leave your
queries unchanged. The rest of the responsibility would be SQL Server's—your queries would simply run faster. Such support doesn't yet exist in SQL Server, but I'm hopeful that Microsoft will embrace
the idea and include such support in the future.
In this article, I start by describing the traditional representation of intervals in SQL Server, the classic queries against intervals, and the fundamental optimization problems of such queries. I
then describe the RI-tree model and further optimizations. Finally, I describe the potential for integration of the RI-tree model in SQL Server.
The following resources contain my feature enhancement request to Microsoft, an academic paper describing the RI-tree model, and articles describing further optimizations:
Many thanks to Kriegel, Pötke, and Seidl for creating such an ingenious model and to Laurent Martin for the improvements and for acquainting me with the model.
Traditional Representation of Intervals
The traditional representation of intervals in SQL Server is with two attributes holding the lower and upper values of each interval and another attribute holding the key. Of course there can be
other attributes in the table serving other purposes. In my examples, I use a table called Intervals with 10,000,000 rows representing intervals in the traditional way.
Use the code in Listing 1 and Listing 2 to create and populate such a table in your environment. Listing 1 creates a database called IntervalsDB, a helper function called GetNums that generates a
sequence of integers in a requested range, and a staging table called Stage with 10,000,000 rows.
Listing 1: Code to Create the Sample Database, Helper Function, and Staging Data
-- create sample database IntervalsDB
USE master;
IF DB_ID('IntervalsDB') IS NOT NULL DROP DATABASE IntervalsDB;
CREATE DATABASE IntervalsDB;
USE IntervalsDB;
-- create helper function dbo.GetNums
-- purpose: table function returning sequence of integers between inputs
-- @low and @high
GO
CREATE FUNCTION dbo.GetNums(@low AS BIGINT, @high AS BIGINT) RETURNS TABLE
AS
RETURN
  WITH
    L0   AS (SELECT c FROM (SELECT 1 UNION ALL SELECT 1) AS D(c)),
    L1   AS (SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
    L2   AS (SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
    L3   AS (SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
    L4   AS (SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
    L5   AS (SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
    Nums AS (SELECT ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS rownum
             FROM L5)
  SELECT TOP(@high - @low + 1) @low + rownum - 1 AS n
  FROM Nums
  ORDER BY rownum;
GO
-- create staging table dbo.Stage with 10,000,000 intervals
CREATE TABLE dbo.Stage
(
  id INT NOT NULL,
  lower INT NOT NULL,
  upper INT NOT NULL,
  CONSTRAINT PK_Stage PRIMARY KEY(id),
  CONSTRAINT CHK_Stage_upper_gteq_lower CHECK(upper >= lower)
);

DECLARE
  @numintervals AS INT = 10000000,
  @minlower     AS INT = 1,
  @maxupper     AS INT = 10000000,
  @maxdiff      AS INT = 20;

WITH C AS
(
  SELECT
    n AS id,
    @minlower + (ABS(CHECKSUM(NEWID())) %
      (@maxupper - @minlower - @maxdiff + 1)) AS lower,
    ABS(CHECKSUM(NEWID())) % (@maxdiff + 1) AS diff
  FROM dbo.GetNums(1, @numintervals) AS Nums
)
INSERT INTO dbo.Stage WITH(TABLOCK) (id, lower, upper)
  SELECT id, lower, lower + diff AS upper
  FROM C;
You'll use the same source data from the staging table to first fill the traditional representation of intervals and later the RI-tree representation. Run the code in Listing 2 to create the
Intervals table with the traditional representation of intervals and fill it with the sample data from the staging table.
Listing 2: Code to Create and Populate the Intervals Table
CREATE TABLE dbo.Intervals
(
  id INT NOT NULL,
  lower INT NOT NULL,
  upper INT NOT NULL,
  CONSTRAINT PK_Intervals PRIMARY KEY(id),
  CONSTRAINT CHK_Intervals_upper_gteq_lower CHECK(upper >= lower)
);
CREATE INDEX idx_lower ON dbo.Intervals(lower) INCLUDE(upper);
CREATE INDEX idx_upper ON dbo.Intervals(upper) INCLUDE(lower);
INSERT INTO dbo.Intervals WITH(TABLOCK) (id, lower, upper)
SELECT id, lower, upper
FROM dbo.Stage;
I use integers to represent the lower and upper values of the interval because the RI-tree model, which I discuss later, works with integers. To represent date and time values, you need to map them
to integers—for example, by computing the difference between some base point and the target value in terms of the desired granularity of the type. Of course, using the traditional representation of
intervals, you could use any type that implements total order.
Observe the two indexes that are defined in the Intervals table. One index is created on the column lower as the key and includes the column upper, and the other is created on the column upper as the
key and includes the column lower.
Most interval-related queries—such as the one that identifies intersecting intervals—involve two range predicates. For example, given an input interval identified by the variables @l (for lower) and
@u (for upper), the intervals that intersect with the input interval are those that satisfy the following predicate: lower <= @u AND upper >= @l. Herein lies the fundamental optimization problem—a
seek in an index can be based on only one range predicate. Other range predicates have to be evaluated as residual predicates. Therefore, the optimizer will have to pick one of the two indexes to
work with and perform a seek based on one of the predicates against the leading index key to filter the qualifying rows. While scanning the remaining rows in the index leaf, evaluate the other
predicate to determine which rows to return.
This is such an important point to grasp that it's worth spending a bit more time on it to make sure it's well-understood. Consider the following list sorted by X, Y:
X, Y
1, 10
1, 20
1, 40
1, 50
2, 20
2, 30
2, 50
3, 20
3, 40
3, 50
3, 50
3, 60
4, 10
4, 10
4, 50
4, 60
Suppose this sorted list represents data in the leaf level of a B-tree index. Consider a query with the following filter: X >= 3 and Y <= 40. Based on the predicate X >= 3, it's possible with an
index seek to go straight to the leaf row (3, 20) and scan only the rows that satisfy this predicate. However, there's no escape from scanning all remaining rows in the index leaf and evaluating the
remaining predicate Y <= 40 as a residual predicate.
You can easily observe the optimization problem with a query looking for intersection against the Intervals table. First, use the following code to turn on STATISTICS IO and STATISTICS TIME:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
Next, run the following query to check for intervals that intersect with an input interval that resides roughly in the middle of the entire range:
DECLARE @l AS INT = 5000000, @u AS INT = 5000020;
SELECT id
FROM dbo.Intervals
WHERE lower <= @u AND upper >= @l
OPTION (RECOMPILE);
I use the RECOMPILE query option because otherwise variable values won't be sniffed. Figure 1 shows the plan for this query.
The inputs in this query represent an interval roughly in the middle of the entire range, so it doesn't really matter which of the two indexes the optimizer chooses to use. In this case, it seems
that the optimizer chose to use the index idx_upper. As for observing the optimization problem, notice that the Seek Predicates section contains only the predicate against the column upper; the
predicate against the column lower appears under the section Predicate—meaning residual predicate. The end result is that about half of the leaf level of the index has to be scanned. The statistics
that I got for this query on my system were logical reads: 11256; CPU time: 482 ms. This isn't very efficient, to say the least.
As I mentioned, when looking for an interval in the middle of the entire range it doesn't really matter which index the optimizer uses. However, when looking for an interval that's close to the
beginning of the range, naturally it's more efficient to use idx_lower, which involves scanning a smaller section at the beginning of the index leaf. Try it by using the inputs @l = 80 and @u = 100;
make sure you use the RECOMPILE option so that the optimizer will be able to sniff the variable values. For this query, I got the statistics logical reads: 3; CPU time: 0 ms.
Similarly, when querying the end of the data, the optimizer chooses idx_upper, scanning a small section closer to the end of the index leaf. For example, try running the query with the inputs @l =
9999900 and @u = 9999920. The statistics I got for this query are logical reads: 3; CPU time: 0 ms.
As you can determine from this exercise, when using parameters in a stored procedure and not specifying RECOMPILE, this query is very sensitive to parameter sniffing problems. At any rate, the
important conclusion is that unless users always query only a small section of the entire range that's close to either the beginning or the end of the range, the queries will suffer from serious
optimization problems.
Relational Interval Tree
The RI-tree model is an ingenious model created by Kriegel, Pötke, and Seidl that enables very efficient querying of intervals. Implementing the model involves computing an attribute for each
interval (stored as a column in the table), creating two indexes, and using new queries to look for intersection and other interval relations.
At the heart of the model is a virtual backbone binary tree whose nodes are the integer values in the range that needs to be covered. For example, if the intervals you need to represent can start or
end anywhere in the range 1 to 31, the virtual backbone tree in your case is the one in Figure 2 (ignore the interval and the fork node shown in the figure for now).
You can use a LOG function to compute the height of the tree. To cover a range starting with 1 and ending with @max, the height of the binary tree is @h = CEILING(LOG(@max + 1, 2)). The root of the
tree is POWER(2, @h-1). The reason the backbone tree is virtual is because the complete tree isn't persisted anywhere; instead, its nodes are used only when relevant, as you'll learn shortly.
Fork node. A fundamental component in the RI-tree model is what's called the fork node, which you compute for and store with each interval that you need to represent in your database. Figure 2
illustrates how the fork node is computed for some sample interval. You descend the virtual backbone tree starting with the root with bisection; the first node that you find within the interval is
the fork node.
Listing 3 contains an implementation of this algorithm based on the RI-tree model in a T-SQL user-defined function (UDF) called forkNode for a tree with a height of 31 (covering the range 1 to 2,147,483,647):
Listing 3: Definition of forkNode Function
-- Based on "Managing Intervals Efficiently in Object-Relational Databases"
CREATE FUNCTION dbo.forkNode(@lower AS INT, @upper AS INT) RETURNS INT
AS
BEGIN
  -- @node = 2^(h-1), h = height of tree, #values: 2^h - 1
  DECLARE @node AS INT = 1073741824;
  DECLARE @step AS INT = @node / 2;

  WHILE @step >= 1
  BEGIN
    IF @upper < @node
      SET @node -= @step;
    ELSE IF @lower > @node
      SET @node += @step;
    SET @step /= 2;
  END;

  RETURN @node;
END;
For example, invoke the function with 11 and 13 as inputs, and you get 12 as the output fork node:
SELECT dbo.forkNode(11, 13);
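For readers who want to experiment outside the database, here's a direct Python port of the same descent (illustrative; it mirrors the T-SQL logic in Listing 3):

```python
def fork_node(lower, upper, height=31):
    # Descend the virtual backbone tree from the root by bisection;
    # the first node falling inside [lower, upper] is the fork node.
    node = 2 ** (height - 1)
    step = node // 2
    while step >= 1:
        if upper < node:
            node -= step
        elif lower > node:
            node += step
        step //= 2
    return node

print(fork_node(11, 13))  # 12, matching dbo.forkNode(11, 13)
```

A degenerate point interval is its own fork node, e.g. fork_node(5, 5) returns 5.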
The downside of the fork node computation in Listing 3 is that it uses a T-SQL UDF with iterative logic. The combination is highly inefficient, especially when a fork node needs to be computed for
each interval that you want to store in the database. To give you a sense of the inefficiency, it took 304 seconds on my system to populate a table with 10,000,000 rows, out of which the computation
of the fork node took 297 seconds.
In the iterative algorithm for computing the fork node, the CPU complexity is proportional to the height of the tree. In the RI-tree model, Kriegel, Pötke, and Seidl provide a variation where the
height and the range of the tree expand dynamically according to the intervals that are added. But this variation involves maintaining additional parameters for the tree and modifying some of them
with every addition of an interval, which results in support for only single-row insertions; in addition, the insertions are slow, and they can quickly result in a bottleneck.
Laurent Martin came up with a way to compute the fork node using a scalar expression, enabling support for a large static RI-tree, and using very efficient multi-row insertions. I briefly describe
Martin's computation here; for more detail, see "A Static Relational Interval Tree."
As Figure 3 illustrates, the fork node of a given interval is the lowest common ancestor of the boundaries of the interval represented by the values in the columns lower (11) and upper (13).
Examine the binary representation of the nodes in Figure 3. Observe the following:
• For any given node, let L = leading bits before trailing 0s; for example, for node 12 (01100), L = 011.
• Given a node X whose leading bits are L, all nodes in X's left subtree have L-1 as their leading bits (e.g., for L = 011, L-1 = 010), and all nodes in X's right subtree have L as their leading bits.
• For a nonleaf node X, the ancestor of X and of X-1 is the same, so for the purposes of computing the ancestor of X, X can be replaced with X-1.
• If Z is a leaf node, Z and Z-1 differ only in the last bit: for Z it's 1 and for Z-1 it's 0.
Based on these observations, the fork node can be computed as: matching prefix of (lower - 1) and upper, concatenated with 1, and padded with trailing zeros.
To achieve this computation in SQL Server, consider the following steps, keeping in mind the interval [11, 13] in Figure 3 as an example:
1. Let A = (lower - 1) ^ upper
2. Let B = POWER(2, FLOOR(LOG(A, 2)))
3. Let C = upper % B
4. Let D = upper - C
Step 1 computes a value (call it A) that marks the bits that are different in (lower - 1) and upper as 1s. To achieve this, you apply a bitwise XOR operator between (lower - 1) and upper. In our example, (11 - 1) ^ 13 = 10 ^ 13, which in binary representation is 01010 ^ 01101 = 00111.
Step 2 computes a value (call it B) where the leftmost bit that's different in (lower - 1) and upper is set to 1 and all other bits are set to 0. In other words, this step identifies the leftmost bit
in A that's turned on. Note that based on the aforementioned observations, the leftmost bit that's different in (lower - 1) and upper will be 0 in (lower - 1) and 1 in upper. In our example, the
result of this step in binary form is 00100.
Step 3 keeps the trailing bits from upper after the bit that's set to 1 in B. In our example, this step computes 01101 % 00100 = 00001.
Step 4 sets the trailing bits in upper after the set bit in B to 0s. In our example, 01101 - 00001 = 01100. And voila—D is the fork node!
Putting it all together, use the following formula (expressed as a T-SQL expression in SQL Server 2012) to compute the fork node:
upper - upper % POWER(2, FLOOR(LOG((lower - 1) ^ upper, 2)))
This expression encapsulates the computation of the fork node in an ingenious way. When I described it to a friend of mine who is an expert in T-SQL, he got so excited that he jokingly told me he
wants to be like Martin when he grows up!
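Before trusting the closed form, it is easy to cross-check it against the iterative descent outside SQL Server. The following Python sketch (my own illustration, not from the article) compares both computations over every interval of a small tree of height 5:

```python
import math

H = 5                       # height of the virtual backbone tree; node values are 1 .. 2**H - 1
ROOT = 2 ** (H - 1)         # root of the tree (16 for H = 5)

def fork_node_iterative(lower, upper):
    """Listing 3's descent: walk down from the root until the node falls inside [lower, upper]."""
    node, step = ROOT, ROOT // 2
    while step >= 1:
        if upper < node:
            node -= step
        elif lower > node:
            node += step
        step //= 2
    return node

def fork_node_formula(lower, upper):
    """upper - upper % POWER(2, FLOOR(LOG((lower - 1) ^ upper, 2)))"""
    a = (lower - 1) ^ upper                  # 1-bits mark where (lower - 1) and upper differ
    b = 2 ** int(math.floor(math.log2(a)))   # leftmost differing bit
    return upper - upper % b                 # clear upper's bits to the right of that bit

# The closed form agrees with the descent on every interval in the tree.
for lo in range(1, 2 ** H):
    for hi in range(lo, 2 ** H):
        assert fork_node_iterative(lo, hi) == fork_node_formula(lo, hi)

print(fork_node_formula(11, 13))  # 12, matching the article's example
```

The same exhaustive check passes for larger heights; only the constant ROOT changes.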
Now that you have a scalar expression to compute the fork node based on the columns lower and upper, you can implement it as a computed column (call it node) in the table that holds your intervals.
Based on the RI-tree model, you'll need two indexes: one on the keylist (node, lower) and another on the keylist (node, upper). Note that depending on your needs, you might want to add leading
columns to the keylist for equality-based filters in your queries, as well as columns to the index include list for coverage purposes.
The code in Listing 4 creates the IntervalsRIT table, populates it with the sample data from the staging table, and creates the primary key constraint and the indexing based on the RI-tree model.
Listing 4: Code to Create and Populate the IntervalsRIT Table
-- Based on "A Static Relational Interval Tree"
CREATE TABLE dbo.IntervalsRIT
(
  id INT NOT NULL,
  node AS upper - upper % POWER(2, FLOOR(LOG((lower - 1) ^ upper, 2))) PERSISTED, -- persisted so it can be indexed
  lower INT NOT NULL,
  upper INT NOT NULL,
  CONSTRAINT PK_IntervalsRIT PRIMARY KEY(id),
  CONSTRAINT CHK_IntervalsRIT_upper_gteq_lower CHECK(upper >= lower)
);

CREATE INDEX idx_lower ON dbo.IntervalsRIT(node, lower);
CREATE INDEX idx_upper ON dbo.IntervalsRIT(node, upper);

INSERT INTO dbo.IntervalsRIT WITH(TABLOCK) (id, lower, upper)
  SELECT id, lower, upper
  FROM dbo.Stage;
The populated IntervalsRIT table and its indexes are the RI-tree–based representation of your intervals replacing the previous Intervals table and its indexes. Recall that with the iterative forkNode
function, it took 297 seconds on my system just to compute the fork nodes for 10,000,000 intervals. With Martin's optimized formula, it took only 6 seconds.
Querying the RI-tree model. Your IntervalsRIT table has your intervals stored, along with the fork node, in a column named node. Next, you'll learn how to query the table to identify intervals that
intersect with some input interval [@l, @u]. In my examples, I'll use the input interval [@l = 11, @u = 13].
Based on the RI-tree model, there are three disjoint groups of intervals that can intersect with an input interval. I'll refer to them as the left, middle, and right groups.
Left group. Consider the path in the virtual backbone tree descending from the root to @l, as Figure 4 shows. Let leftNodes be the set of all nodes w that appear on the path and are to the left of
the input interval.
For all nodes w in leftNodes, the following are true (please refer to Figure 4):
• An interval registered at a node d in w's left subtree can't intersect with the input interval. If it did, it would have registered at an ancestor of d.
• An interval registered at w could be one of two cases: (1) If upper < @l, clearly the interval doesn't intersect with the input interval; (2) if upper >= @l, the interval must intersect with the
input interval because we already know that lower <= @u (an interval registered at a node that appears to the left of the input interval obviously starts before the input interval ends).
Therefore, the intervals belonging to the left group are the ones registered at a node in leftNodes and have upper >= @l.
Listing 5 contains the definition of a table function called leftNodes that returns the nodes in the aforementioned set leftNodes.
Listing 5: Definition of leftNodes and rightNodes Functions
-- Based on "Managing Intervals Efficiently in Object-Relational Databases"

-- leftNodes function
CREATE FUNCTION dbo.leftNodes(@lower AS INT, @upper AS INT)
RETURNS @T TABLE
(
  node INT NOT NULL PRIMARY KEY
)
AS
BEGIN
  DECLARE @node AS INT = 1073741824;
  DECLARE @step AS INT = @node / 2;

  -- descend from root node to lower
  WHILE @step >= 1
  BEGIN
    -- right node
    IF @lower < @node
      SET @node -= @step;
    -- left node
    ELSE IF @lower > @node
    BEGIN
      INSERT INTO @T(node) VALUES(@node);
      SET @node += @step;
    END;
    -- else reached lower

    SET @step /= 2;
  END;

  RETURN;
END;
GO

-- rightNodes function
CREATE FUNCTION dbo.rightNodes(@lower AS INT, @upper AS INT)
RETURNS @T TABLE
(
  node INT NOT NULL PRIMARY KEY
)
AS
BEGIN
  DECLARE @node AS INT = 1073741824;
  DECLARE @step AS INT = @node / 2;

  -- descend from root node to upper
  WHILE @step >= 1
  BEGIN
    -- left node
    IF @upper > @node
      SET @node += @step;
    -- right node
    ELSE IF @upper < @node
    BEGIN
      INSERT INTO @T(node) VALUES(@node);
      SET @node -= @step;
    END;
    -- else reached upper

    SET @step /= 2;
  END;

  RETURN;
END;
The following query returns all intervals in the left group (those registered at nodes in leftNodes and that intersect with the input interval):
DECLARE @l AS INT = 5000000, @u AS INT = 5000020;
SELECT I.id
FROM dbo.IntervalsRIT AS I
JOIN dbo.leftNodes(@l, @u) AS L
ON I.node = L.node
AND I.upper >= @l;
Examine the query plan in Figure 5.
The plan performs a seek in the index idx_upper per node in leftNodes. The important thing is that now there's one equality predicate and one range predicate, both of which can be applied as seek predicates.
Right group. The right group is simply the symmetric group to the left group. Consider the path in the virtual backbone tree descending from the root to @u, as Figure 6 shows. Let rightNodes be the
set of all nodes w that appear on the path and are to the right of the input interval.
For all nodes w in rightNodes, the following are true (please refer to Figure 6):
• An interval registered at a node d in w's right subtree can't intersect with the input interval. If it did, it would have registered at an ancestor of d.
• An interval registered at w could be one of two cases: (1) If lower > @u, clearly the interval doesn't intersect with the input interval; (2) if lower <= @u, the interval must intersect with the
input interval because we already know that upper >= @l (an interval registered at a node that appears to the right of the input interval obviously ends after the input interval starts).
Therefore, the intervals belonging to the right group are the ones registered at a node in rightNodes and have lower <= @u.
Listing 5 contains the definition of a table function called rightNodes that returns the nodes in the aforementioned set rightNodes.
The following query returns all intervals in the right group (those registered at nodes in rightNodes and that intersect with the input interval):
DECLARE @l AS INT = 5000000, @u AS INT = 5000020;
SELECT I.id
FROM dbo.IntervalsRIT AS I
JOIN dbo.rightNodes(@l, @u) AS R
ON I.node = R.node
AND I.lower <= @u;
Figure 7 shows the plan for this query. This plan is symmetric to the plan in Figure 5, only this time the index idx_lower is used.
Middle group. Let middleNodes be the set of nodes w that reside within the input interval, as Figure 8 shows.
Any interval registered at w has lower <= @u and upper >= @l; hence, it intersects with the input interval. Use the following query to return all intervals in the middle group:
DECLARE @l AS INT = 5000000, @u AS INT = 5000020;
SELECT id
FROM dbo.IntervalsRIT
WHERE node BETWEEN @l AND @u;
The optimizer can use either idx_upper or idx_lower to process this query efficiently. Looking at the plan for this query in Figure 9, it seems that the optimizer chose to use idx_upper in this case.
Listing 6 contains the code based on the RI-tree model that puts it all together, unifying the results of the queries returning the intersections in all three groups (left, middle, and right).
Listing 6: Intersection Query
-- Based on "Managing Intervals Efficiently in Object-Relational Databases"
DECLARE @l AS INT = 5000000, @u AS INT = 5000020;

SELECT I.id
FROM dbo.IntervalsRIT AS I
JOIN dbo.leftNodes(@l, @u) AS L
  ON I.node = L.node
  AND I.upper >= @l

UNION ALL

SELECT I.id
FROM dbo.IntervalsRIT AS I
JOIN dbo.rightNodes(@l, @u) AS R
  ON I.node = R.node
  AND I.lower <= @u

UNION ALL

SELECT id
FROM dbo.IntervalsRIT
WHERE node BETWEEN @l AND @u

OPTION (RECOMPILE);
The statistics that I got for the unified query are the following: logical reads: 81 against IntervalsRIT + 2 against the table variable returned by leftNodes + 2 against the table variable returned
by rightNodes; CPU time: 16 ms. That's a big improvement compared with the traditional query shown at the beginning of the article; recall that the statistics for the traditional query were logical
reads: 11256; CPU time: 482 ms.
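The three-group decomposition can also be verified end to end. The Python sketch below (my own cross-check, not part of the article) registers random intervals at their fork nodes, answers an intersection query via the left, middle, and right groups, and compares the result with a brute-force scan:

```python
import random

H = 8                # small tree of height 8; values 1 .. 255
ROOT = 2 ** (H - 1)

def fork_node(lower, upper):
    a = (lower - 1) ^ upper
    b = 1 << (a.bit_length() - 1)    # leftmost bit where (lower - 1) and upper differ
    return upper - upper % b

def left_nodes(lower):
    """Nodes on the root->lower path that lie to the left of lower."""
    node, step, out = ROOT, ROOT // 2, []
    while step >= 1:
        if lower < node:
            node -= step
        elif lower > node:
            out.append(node)
            node += step
        step //= 2
    return out

def right_nodes(upper):
    """Nodes on the root->upper path that lie to the right of upper."""
    node, step, out = ROOT, ROOT // 2, []
    while step >= 1:
        if upper > node:
            node += step
        elif upper < node:
            out.append(node)
            node -= step
        step //= 2
    return out

def ri_tree_intersections(intervals, l, u):
    """intervals: list of (id, lower, upper); returns ids of intervals intersecting [l, u]."""
    by_node = {}
    for iid, lo, hi in intervals:
        by_node.setdefault(fork_node(lo, hi), []).append((iid, lo, hi))
    hits = set()
    for node in left_nodes(l):                  # left group: filter upper >= l
        hits |= {i for i, lo, hi in by_node.get(node, []) if hi >= l}
    for node in right_nodes(u):                 # right group: filter lower <= u
        hits |= {i for i, lo, hi in by_node.get(node, []) if lo <= u}
    for node in range(l, u + 1):                # middle group (SQL does this as one index range scan)
        hits |= {i for i, _, _ in by_node.get(node, [])}
    return hits

random.seed(1)
ivals = []
for iid in range(500):
    lo = random.randint(1, 2 ** H - 1)
    hi = random.randint(lo, 2 ** H - 1)
    ivals.append((iid, lo, hi))

l, u = 100, 120
brute = {i for i, lo, hi in ivals if lo <= u and hi >= l}
assert ri_tree_intersections(ivals, l, u) == brute
```

The assertion confirms that every intersecting interval is found in exactly one of the three disjoint groups.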
Optimized computation of ancestors. The downside of the implementation of the leftNodes and rightNodes functions in the previous sections is that it uses iterative logic, which isn't too efficient in
T-SQL. Imagine the overhead to find intersections with not just one input interval but rather with a whole set of input intervals.
Laurent Martin came up with a very elegant solution that uses set-based logic. He describes his solution in detail in "A Static Relational Interval Tree" and "Advanced interval queries with the
Static Relational Interval Tree." I'll describe a variation of the solution that I think is a bit easier to understand. The core logic and performance are pretty much the same as in Martin's solution.
Given an input interval [@l, @u], recall that leftNodes is the set of ancestors of @l that appear to the left of the input interval. Similarly, rightNodes is the set of ancestors of @u that appear to the right of the input interval. Martin made the following observation concerning the ancestors of any given node @node. Suppose @node = 13 (in binary 01101). You can determine the parent of a node by clearing the node's rightmost set bit and setting the bit to the left of it to 1. You can repeat this process until you reach the root, to determine all ancestors. For example, the ancestors of 13 are 14, 12, 8, and 16:
01101 (13)
01110 (14)
01100 (12)
01000 (8)
10000 (16)
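As a quick sanity check, this bit-clearing walk can be written directly in Python (my own sketch, assuming the small height-5 tree from the figures, whose root is 16):

```python
def ancestors(node):
    """Clear the rightmost set bit, set the bit to its left; repeat up to the root."""
    out = []
    while node != 16:                      # 16 = root of a height-5 tree
        low = node & -node                 # rightmost set bit (two's complement trick)
        node = (node & ~low) | (low << 1)  # clear it, set the bit to its left
        out.append(node)
    return out

print(ancestors(13))  # [14, 12, 8, 16]
```

The same walk applied to 11 yields [10, 12, 8, 16], consistent with the leftNodes/rightNodes examples earlier in the article.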
To compute the ancestors of a given node, you'll use a helper table called BitMasks, like the one in Table 1 (the values in the columns b1 and b3 are presented in binary form).
Table 1: BitMasks Table
n    b1                                b3
---  --------------------------------  --------------------------------
1    11111111111111111111111111111110  00000000000000000000000000000010
2    11111111111111111111111111111100  00000000000000000000000000000100
3    11111111111111111111111111111000  00000000000000000000000000001000
...  ...                               ...
30   11000000000000000000000000000000  01000000000000000000000000000000
For a virtual backbone tree with the height h, you'll fill the table with rows where n is between 1 and h-1. For example, the code in Listing 7 creates the BitMasks table and populates it with values
for a tree with h = 31 (n between 1 and 30).
Listing 7: Code to Create and Populate the BitMasks Table
-- Based on "A Static Relational Interval Tree"
CREATE TABLE dbo.BitMasks
(
  b1 INT NOT NULL,
  b3 INT NOT NULL
);

INSERT INTO dbo.BitMasks(b1, b3)
  SELECT -POWER(2, n), POWER(2, n)
  FROM (VALUES(1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
              (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
              (21),(22),(23),(24),(25),(26),(27),(28),(29),(30)) AS Nums(n);
You're probably wondering why there's no b2 column in the BitMasks table. In the variation of the solution that I describe, I use only b1 and b3—but Martin also uses another column called b2 in his
original solution.
Please refer to Table 1 for the following. To compute the ancestors of an input node @node, you query the BitMasks table, applying the expression @node & b1 | b3 at each level n. This implements the logic described earlier: clear the n low-order bits of @node (that level's bit and everything to its right) and set bit n to 1. However, you need to filter only the levels where there are ancestors—namely, the levels whose bit (which is simply b3) is to the left of the rightmost set bit in @node.
SQL Server uses the two's complement storage format to represent integers. As the Wikipedia article states, "A shortcut to manually convert a binary number into its two's complement is to start at
the least significant bit (LSB), and copy all the zeros (working from LSB toward the most significant bit) until the first 1 is reached; then copy that 1, and flip all the remaining bits."
This means that given a value @node, you can compute the rightmost set bit with the expression @node & -@node. Back to our filtering needs, to filter only levels representing ancestors of a positive
input node, you use the following predicate: b3 > @node & -@node.
Listing 8 contains the definition of the Ancestors inline table-valued function, which returns all ancestors of an input node. To return only left ancestors of a given node, you query the function
and filter only rows where the returned node is smaller than the input node. To return only right ancestors, you query the function and filter only rows where the returned node is greater than the
input node.
Listing 8: Definition of Ancestors Function
CREATE FUNCTION dbo.Ancestors(@node AS INT) RETURNS TABLE
AS
RETURN
  SELECT @node & b1 | b3 AS node
  FROM dbo.BitMasks
  WHERE b3 > @node & -@node;
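To see that the set-based expression selects exactly the ancestor levels, here is a Python rendering (my own check, not from the article) that compares @node & b1 | b3 under the filter b3 > @node & -@node against the bit-clearing walk, for every node of a height-5 tree:

```python
H = 5

def ancestors_bitmask(node):
    """Set-based form: (node & -2**n) | 2**n for each level n whose bit exceeds the rightmost set bit."""
    masks = [(-(1 << n), 1 << n) for n in range(1, H)]        # (b1, b3) pairs, n = 1 .. H-1
    return {node & b1 | b3 for b1, b3 in masks if b3 > (node & -node)}

def ancestors_walk(node):
    """Iteratively clear the rightmost set bit and set the bit to its left, up to the root."""
    out = set()
    while node != 1 << (H - 1):            # stop at the root (16 for H = 5)
        low = node & -node
        node = (node & ~low) | (low << 1)
        out.add(node)
    return out

# Both computations agree for every node in the tree.
for n in range(1, 2 ** H):
    assert ancestors_bitmask(n) == ancestors_walk(n)

print(sorted(ancestors_bitmask(13)))  # [8, 12, 14, 16]
```

For the root itself the filter rejects every level, so the result is empty, exactly as the walk produces.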
You use the query in Listing 9 instead of the one presented earlier in Listing 6 to return intervals that intersect with an input interval [@l, @u].
Listing 9: Intersection Query with BitMasks Table and Ancestors Function
DECLARE @l AS INT = 5000000, @u AS INT = 5000020;

SELECT I.id
FROM dbo.IntervalsRIT AS I
JOIN dbo.Ancestors(@l) AS L
  ON L.node < @l
  AND L.node >= (SELECT MIN(node) FROM dbo.IntervalsRIT)
  AND I.node = L.node
  AND I.upper >= @l

UNION ALL

SELECT I.id
FROM dbo.IntervalsRIT AS I
JOIN dbo.Ancestors(@u) AS R
  ON R.node > @u
  AND R.node <= (SELECT MAX(node) FROM dbo.IntervalsRIT)
  AND I.node = R.node
  AND I.lower <= @u

UNION ALL

SELECT id
FROM dbo.IntervalsRIT
WHERE node BETWEEN @l AND @u

OPTION (RECOMPILE);
Notice the addition of filters that exclude nodes from the left path that are less than the minimum node in the table, and nodes from the right path that are greater than the maximum node in the
table. Figure 10 shows the plan for the query in Listing 9.
Because the Ancestors function is inline, the plan scans the BitMasks table directly, without any overhead related to the function itself, as in the previous case. The statistics that I got for this
query are logical reads: 66 against IntervalsRIT + 2 against BitMasks; CPU time: 0 ms.
Here I demonstrated how to query the data to handle interval intersection. To find out how to handle all Allen relations, see "Advanced interval queries with the Static Relational Interval Tree."
Potential for Integration in SQL Server
The RI-tree model and some of the optimizations allow very efficient handling of intervals in SQL Server. However, the math and computer science behind the solution might be a bit complex for some
people. There's the potential to integrate the model within the SQL Server engine and make it transparent to the user. This could be achieved in a number of ways.
One option is to introduce a new type of index (call it interval index) that the user is responsible for creating using fairly basic syntax. Behind the scenes, SQL Server can compute the fork node
for each interval and create two B-tree indexes like the indexes idx_lower and idx_upper that I described in the previous sections. Optimizer support also needs to be added to detect interval queries
based on the predicates in the query filter, and when detected, internally process the request similar to the queries presented in Listing 9 (or Listing 6 based on the original model). With this
approach, the only responsibility of the user is to create the interval index, and the engine is responsible for the rest. The original queries remain unchanged and just start running faster.
The indexes idx_lower and idx_upper that I presented earlier represent the most basic possible form of the needed indexes. As described in "Advanced interval queries with the Static Relational
Interval Tree," you can handle all Allen interval relations based on the RI-tree model. For some of them, you need one index on the keylist (node, lower, upper) and another on (node, upper, lower).
For intersection-only queries, it suffices to define the indexes on (node, lower) and (node, upper), respectively (hence the option INTERSECTS_ONLY in the proposed index syntax). Also, if the query has
additional equality-based filters on other columns, you need to add them to both indexes as leading keys. Finally, if you need to return additional columns besides the ones in the keylist from the
query, and you want the indexes to cover the query, you need to add those columns as included ones. So the proposed interval index with all additional optional elements could look like this:
CREATE INDEX myindex
ON dbo.Intervals[(fcol1, fcol2, ...)]  -- leading equality-based filters
INTERVAL(lower, upper)                 -- interval columns
[INCLUDE(icol1, icol2, ...)]           -- included columns
[WITH (INTERSECTS_ONLY = ON)];         -- determines keylist
Behind the scenes, SQL Server would create the following two regular B-tree indexes:
CREATE INDEX myidx1 ON dbo.Intervals([fcol1, fcol2, ...,]
node, lower[, upper]) [INCLUDE(icol1, icol2, ...)];
CREATE INDEX myidx2 ON dbo.Intervals([fcol1, fcol2, ...,]
node, upper[, lower]) [INCLUDE(icol1, icol2, ...)];
As I mentioned, the optimizer would detect interval queries based on the predicates in the query filter and use these indexes.
Remember that although I used integers in all my examples, in reality you often need to work with date and time intervals. Mapping date and time values to integers and back can add a lot of overhead
when you do it in T-SQL. If implemented internally, using natively compiled code with some low-level language, Microsoft can do this much more efficiently—as you can imagine.
Other potential additions in SQL Server based on this model could be to get built-in functions that compute the fork node and ancestors, supporting date and time types, and again, doing it much more
efficiently than the user-defined computations. Plus, these functions would allow a "roll your own" approach to advanced users. As I mentioned, I submitted a feature proposal to Microsoft.
Speed Up Interval-Related Requests
The traditional methods that people often use to handle interval-related requests, especially the common intersection request, suffer from fundamental performance problems. When there are two range
predicates involved in your query filter, only one can be used as a seek predicate in a B-tree index. Using the ingenious RI-tree model and some further optimizations, you get index seeks based on a
single range predicate, which results in significantly faster queries. However, the model and its optimizations could be a bit complex for some people. Because it's purely an engineering problem, and
the engineering solution already exists, it could be completely encapsulated within the database engine and made transparent to the user. Again, I'd like to thank Hans-Peter Kriegel, Marco Pötke, and
Thomas Seidl for coming up with the RI-tree model, as well as Laurent Martin for his additions. | {"url":"http://sqlmag.com/t-sql/sql-server-interval-queries","timestamp":"2014-04-18T20:00:22Z","content_type":null,"content_length":"123243","record_id":"<urn:uuid:0c15c624-538c-451b-9fae-0d735400335e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
Artificial Intelligence
Tautology. Definition: ... well-formed formula (WFF) is a tautology if for every truth value assignment to ... of a tautology is a tautology. AI & CV Lab, ... – PowerPoint PPT presentation
Transcript and Presenter's Notes | {"url":"http://www.powershow.com/view/f58c9-MDY3Y/Artificial_Intelligence_powerpoint_ppt_presentation","timestamp":"2014-04-17T16:10:17Z","content_type":null,"content_length":"117632","record_id":"<urn:uuid:a9988dfc-63e0-4dfc-9185-7beea7881927>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00377-ip-10-147-4-33.ec2.internal.warc.gz"} |
Course Content and Outcome Guide for CMET 122
Course Number:
CMET 122
Course Title:
Technical Engineering Physics
Credit Hours:
Lecture Hours:
Lecture/Lab Hours:
Lab Hours:
Special Fee:
Course Description
Introduces physical properties of matter and energy, includes properties of solids, liquids and gases. Presents applications of the basic equations of fluid mechanics, heat transfer, and the First
Law of Thermodynamics. Prerequisite or concurrent: CMET 121, 123. Audit available.
Intended Outcomes for the course
The student will be able to:
1. Recognize physical properties of matter and energy including properties of solids, liquids, and gases.
2. Understand basic equations of fluid mechanics, heat transfer, and the First Law of Thermodynamics.
Outcome Assessment Strategies
● Evaluation will include tests, homework and a final examination.
● Specific details of the grading procedure will be given the first week of class.
● Lecture, homework, and laboratory (usually problem solution on board) will be coordinated.
● Students must complete and/or participate in all three areas as indicated by the instructor.
Course Content (Themes, Concepts, Issues and Skills)
1. A background in the topics covered in Technical Engineering Physics are needed in preparation for the subjects covered more in depth in their Engineering Materials, Fluid Mechanics, and
Thermodynamics courses.
2. Mechanical properties of gases, liquids, and solids must be understood in order to analyze and design machines, structures, and other engineering products.
WORK, ENERGY, AND POWER
Instructional Goal:
To develop skills in solving problems involving work, energy, and power.
1.1.0 Define work, kinetic energy, potential energy, power, and efficiency. Understand the principle of conservation of energy and the interchange of work, kinetic energy, and potential energy.
1.2.0 Solve problems involving transformation of work and energy, using the principle of conservation of energy.
1.3.0 Solve problems involving calculation of power and efficiency.
Instructional Goal:
To present the molecular model of matter and to see how the behavior of solids, liquids and gases can be understood in light of this model and to develop equations that will predict the behavior of
2.1.0 Present the molecular model of matter and discuss how the forces of attraction and repulsion affect matter.
Discuss the behavior of solids and solve problems involving:
2.2.1 Density
2.2.2 Specific Gravity
2.2.3 Hooke's Law and Moduli of Elasticity
2.2.4 Stress and Strain
2.2.5 Factor of safety
2.2.6 Compressibility
2.2.7 Shear
2.2.8 Torsion
2.3.0 Discus the behavior of liquids and solve problems involving:
2.3.1 Surface Tension
2.3.2 Hydrostatics (pressure and force)
2.3.3 Pascal's Law
2.3.4 Archimedes' Principle
2.3.5 Specific Gravity
2.3.6 Hydraulics
2.3.7 Bernoulli's Principle
2.4.0 Discuss the behavior of gases and solve problems involving:
2.4.1 Boyle's Law
2.4.2 Charles's Law
2.4.3 Gas density
2.4.4 Barometers and manometers
THERMODYNAMICS Instructional Goal:
To discuss heat and its transformation into other forms of energy and to develop equations to describe and predict energy transformation.
3.1.0 Understand the difference between temperature and heat energy.
3.2.0 Solve problems pertaining to:
3.2.1 Fahrenheit/Celsius temperature conversion.
3.2.2 Thermal expansion.
3.2.3 Pressure/Volume/Temperature relationships of gases.
3.2.4 Heat and change of state.
a. Heat of vaporization
b. Heat of fusion
3.2.5 Heat transfer in various solids.
3.2.6 Convection.
3.2.7 Radiation.
3.2.8 Conservation of energy.
3.2.10 Entropy
3.2.11 Cyclic processes in heat engines
3.2.12 Carnot cycle
3.2.13 Use of change of state in refrigeration
3.2.14 Coefficient of performance
3.2.15 Air conditioning and the psychrometric chart | {"url":"http://www.pcc.edu/ccog/default.cfm?fa=ccog&subject=CMET&course=122","timestamp":"2014-04-20T11:37:04Z","content_type":null,"content_length":"10997","record_id":"<urn:uuid:3b0b425a-a38b-4584-bbdf-876d0b97ed53>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00021-ip-10-147-4-33.ec2.internal.warc.gz"} |
Isospectral graph reductions, estimates of matrices' spectra, and eventually negative Schwarzian systems
Please use this identifier to cite or link to this item: http://hdl.handle.net/1853/39521
Title: Isospectral graph reductions, estimates of matrices' spectra, and eventually negative Schwarzian systems
Author: Webb, Benjamin Zachary
Abstract: This dissertation can be essentially divided into two parts. The first, consisting of Chapters I, II, and III, studies the graph theoretic nature of complex systems. This includes the spectral properties of such systems and in particular their influence on the systems' dynamics. In the second part of this dissertation, or Chapter IV, we consider a new class of one-dimensional dynamical systems, or functions with an eventual negative Schwarzian derivative, motivated by some maps arising in neuroscience. To aid in understanding the interplay between the graph structure of a network and its dynamics we first introduce the concept of an isospectral graph reduction in Chapter I. Mathematically, an isospectral graph transformation is a graph operation (equivalently matrix operation) that modifies the structure of a graph while preserving the eigenvalues of the graph's weighted adjacency matrix. Because of their properties such reductions can be used to study graphs (networks) modulo any specific graph structure, e.g. cycles of length n, cliques of size k, nodes of minimal/maximal degree, centrality, betweenness, etc. The theory of isospectral graph reductions has also led to improvements in the general theory of eigenvalue approximation. Specifically, such reductions can be used to improve the classical eigenvalue estimates of Gershgorin, Brauer, Brualdi, and Varga for a complex valued matrix. The details of these specific results are found in Chapter II. The theory of isospectral graph transformations is then used in Chapter III to study time-delayed dynamical systems and develop the notion of a dynamical network expansion and reduction, which can be used to determine whether a network of interacting dynamical systems has a unique global attractor. In Chapter IV we consider one-dimensional dynamical systems on an interval. In the study of such systems it is often assumed that the functions involved have a negative Schwarzian derivative. Here we consider a generalization of this condition. Specifically, we consider the functions which have some iterate with a negative Schwarzian derivative and show that many known results generalize to this larger class of functions. This includes both systems with regular as well as chaotic dynamic properties.
Type: Dissertation
URI: http://hdl.handle.net/1853/39521
Date: 2011-03-18
Publisher: Georgia Institute of Technology
Subject: Schwarzian derivative; Global stability; Dynamical networks; Spectral equivalence; Graph transformations; Complex matrices; Attractors (Mathematics)
Department: Mathematics
Advisor: Committee Chair: Bunimovich, Leonid; Committee Member: Bakhtin, Yuri; Committee Member: Dieci, Luca; Committee Member: Randall, Dana; Committee Member: Weiss, Howie
Degree: Ph.D.
All materials in SMARTech are protected under U.S. Copyright Law and all rights are reserved, unless otherwise specifically indicated on or in the materials.
Files in this item
webb_benjamin_z_201105_phd.pdf (2.220 MB, PDF)
This item appears in the following Collection(s) | {"url":"https://smartech.gatech.edu/handle/1853/39521","timestamp":"2014-04-19T07:38:01Z","content_type":null,"content_length":"25473","record_id":"<urn:uuid:f1276390-9aa5-410d-b8ab-4b7a27ecf99e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00017-ip-10-147-4-33.ec2.internal.warc.gz"} |
Magnetism and the Ising Model
Some materials have the curious property of being magnetic under normal everyday conditions - for example, they stick to the metallic door of your fridge. Technically speaking, they show a
spontaneous magnetisation at room temperature, and are called ferromagnetic, for the Latin name of iron, which is the prototype of a material with these properties. As it turns out, the state of being magnetic is a phase, similar to being solid or fluid, and indeed, one can study phase diagrams for magnetic materials. For example, if the temperature of a magnetic chunk of iron is raised above a certain, specific temperature, the magnetisation is lost. This temperature is called the Curie temperature, after Pierre Curie, Marie Curie's husband, a pioneer of solid-state physics. The Curie temperature of iron is at 1043 K. The appearance (or disappearance) of spontaneous magnetisation at the Curie temperature is not only technologically relevant, it is also very useful for geologists: if ferromagnetic minerals in volcanic lava cool down from red-hot molten rock to below the Curie point, they "freeze in" the orientation of the Earth's magnetic field at that very moment. This allows one to reconstruct the orientation and strength of the Earth's magnetic field over history.
One goal of physicists in the early years of the 20th century was to understand how spontaneous magnetisation comes about, and to find a quantitative description of the magnetisation as a function of
temperature. To this end, they made simplified assumptions, for example, that atoms behave like miniature compass needles which interact just with their neighbours. One of these models was proposed
by the German physicist Wilhelm Lenz in 1920, and then analysed in more detail by his student Ernst Ising - it's the famous Ising model (Ising was born in Cologne, Germany, hence the pronunciation of the name is "eeh-sing", not "eye-sing").
In the Ising model, one assumes that the magnetic moments of atoms can have only two orientations, and that it is energetically favourable if the magnetic moments of neighbouring atoms are oriented
in parallel - it costs a certain energy to flip one magnetic moment with respect to its neighbour. Then, one applies the rules of statistical mechanics and tries to calculate the magnetisation - the average orientation of the magnetic moments. As it turns out, there is indeed a spontaneous magnetisation below a certain temperature - one of the most elementary examples of
spontaneous symmetry breaking
. And, even more spectacular from the theorist's point of view, in the special case of a restriction to just two dimensions, Lars Onsager (and later Chen Ning Yang, of parity violation and Yang-Mills fame) could derive an exact formula for the magnetisation as a function of temperature. It looks pretty complicated, but the interesting thing is that there is only one free parameter in the formula, the Curie temperature, which depends on the energy necessary to flip a magnetic moment. Essentially, the magnetisation is 1 at zero temperature (meaning that all magnetic moments point in the same direction), and drops to zero as the eighth root when the temperature approaches the Curie point.
As nice as it may be to have such a formula, it would be interesting to check in an experiment if it is correct. However, there is a drawback: it's valid only in two dimensions, i.e. for planar layers just one atom thick, and it works only for magnetic moments which are restricted to be parallel or antiparallel to one fixed direction.
Fortunately, progress in materials science in the 1990s has made it possible to produce thin ferromagnetic films only a few atomic layers thick, with magnetic moments which show indeed the restricted
orientation with respect to an axis as described in the Ising model. So, these films should behave like the Ising model, and one can try to measure the magnetisation as a function of temperature.
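As an aside not in the original post: the model is simple enough to explore on a computer. Here is a minimal Metropolis Monte Carlo sketch of the 2D Ising model (square lattice, periodic boundaries, units where the coupling and Boltzmann's constant are 1); the function name and parameters are made up for this illustration.

```python
import math
import random

def ising_metropolis(L=16, T=1.5, sweeps=200, seed=0):
    """Return the magnetisation per spin of an L x L Ising lattice
    after `sweeps` Metropolis sweeps at temperature T (J = k_B = 1)."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]      # start fully ordered
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # sum over the four nearest neighbours, periodic boundaries
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nb        # energy cost of flipping (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spin[i][j] = -spin[i][j]
    return sum(map(sum, spin)) / (L * L)
```

Well below the critical temperature ($T_c \approx 2.27$ in these units) the magnetisation per spin stays close to $\pm 1$; well above it, it fluctuates around zero.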
This is what is shown in this plot by C. Rau, P. Mahavadi, and M. Lu:
Figure taken from C. Rau, P. Mahavadi, and M. Lu: Magnetic order and critical behavior at surfaces of ultrathin Fe(100)p(1×1) films on Pd(100) substrates, J. Appl. Phys. No. 10 (1993) 6757-6759 (DOI:
It is, unfortunately, not possible to measure magnetisation directly, so one has to rely on other effects which are directly dependent on magnetisation - in this case, one uses a method called
Electron capture spectroscopy (ECS)
: A beam of ions is shot on the film, the ions capture electrons from the surface, and emit light which can be detected. If the surface is magnetised, the light is polarised, and thus, the
polarisation of the emitted light is a measure of magnetisation. This is what is plotted on the vertical axis: the polarisation, normalised to its value at low temperatures. As it turns out in the experiment, the polarisation - and hence, the magnetisation of the film - is nearly constant at low temperatures, and drops sharply to zero when approaching a specific temperature, to be identified as the Curie temperature. In the figure, the normalised polarisation is shown as a function of temperature, where temperature has been normalised to the Curie temperature. Now, one can compare with the theoretical prediction for the magnetisation of the Ising model as a function of temperature. This is the solid black curve. There are no more free parameters, and, as it turns out, the agreement with experimental data is perfect.
Here is an intriguing circle from experiment to theory back to experiment: Experimental data of ferromagnets measured more than 100 years ago show the appearance of spontaneous magnetisation as
temperature drops below the Curie point. Models are constructed to try to understand this, and for a simplified model restricted to two dimensions, an exact formula for the magnetisation can be
derived. Finally, real materials show up which correspond to the idealisations and simplifications made in the model, the magnetisation can be measured... and it works!
This post is part of our 2007 advent calendar
A Plottl A Day
22 comments:
Hi Stefan and Bee --
Thanks for bringing up the subject of spontaneous magnetization at the Curie transition. It is certainly interesting physics, and also it gives me a chance to ask for your answer(s) to an old
Suppose we propose the following idea for a goofy kind of heat engine. Take a piece of ferromagnetic material and surround it with a coil of wire connected to some kind of (non-polarized) load.
Start with the ferromagnetic piece at high temperature and no magnetic field. Connect the piece to a heat sink and cool it off to below the Curie temperature, and when the magnetic field appears
it will momentarily cut through the coil and push a current pulse through the load. Now connect the piece to a heat source and raise its temperature above the Curie temperature, at which point
the magnetization vanishes and the collapsing field will generate another current pulse through the load. Repeat around the cycle and you've turned some amount of heat energy into "useful"
electrical potential energy.
This kind of engine can't produce very much power per weight of material, and so you would never be tempted to build one practically. The puzzle, though, comes when we ask about the efficiency.
In principle -- at first glance, at least -- the heat source and heat sink can have an arbitrarily small temperature difference as long as one is above and the other below the Curie temperature.
So the Carnot limit on the conversion efficiency (T_source-T_sink)/T_source can be set arbitrarily low. Since the engine produces a fixed amount of useful work on each cycle, this implies that
the amount of heat energy that must be pulled from(sunk into) the source(sink) to drive the piece up(down) through the transition becomes arbitrarily large as the source(sink) temperature(s)
approach the Curie temperature. That sounds more than a little strange to me; does it make sense to you? if not, where is the error in the puzzle?
Hi Paul,
that's a nice gadget you are proposing, and an interesting question.
One thing is not quite clear to me: What do you want to do with the current induced in the coil? Charge a battery or something like that? The current will flow in opposite directions when magnetisation emerges and vanishes within one cycle, and the direction of the current when magnetisation sets in will be random from cycle to cycle, since magnetisation is spontaneous. So you will need some ratchet, in the form of a diode?
For the analysis of the thermodynamics, one probably should have a close look at the MdH contribution to the internal energy, the energy in the magnetic field... hm... it's too long ago that I've
thought about these kinds of problems... Maybe someone else has an idea?
Best, Stefan
Dear Stefan,
I think one can make a Paul-type gadget around many phase transitions.
E.g., a solid close to its melting point at one atmosphere that expands on melting. You let the solid melt, and it does work = (one atmosphere)*(volume difference of liquid and solid). You then
cool down the liquid to freeze it and repeat the cycle.
The work per cycle is fixed; the source and sink temperatures can be arbitrarily close and above and below the melting point, and the Carnot efficiency can be arbitrarily small, and therefore the
amount of heat required in this cycle seems to grow indefinitely.
However actually, per cycle one is using up the latent heat of phase transition and in return getting the fixed P ΔV work.
Need to think some more about this to resolve the paradox.
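(Aside, not from the thread: for the melting-engine version the numbers can be closed with the Clapeyron relation. At a single fixed pressure the freezing leg gives back exactly the $P\,\Delta V$ gained on melting, so net work requires cycling between two pressures whose melting points differ by $\Delta T$; then

```latex
\frac{dP}{dT} = \frac{L}{T\,\Delta V}
\quad\Longrightarrow\quad
W = \Delta P \,\Delta V = \frac{L}{T\,\Delta V}\,\Delta T\,\Delta V
  = L\,\frac{\Delta T}{T},
\qquad
\eta = \frac{W}{L} = \frac{\Delta T}{T},
```

which is exactly the Carnot bound, so in that version the paradox dissolves.)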
Thanks for another great post. If I recall correctly from stat mech., originally Ising created his model assuming only 1 dimension, which has an exact solution. Later L. Onsager extended the model
to 2D for which there is also an exact solution (as your equation there implies). But as I recall, there is no exact solution to the 3D extension of the Ising model?
Hi Stefan and Bee,
1 - There may be a means [beyond my ability] to extend this to 3D.
Perhaps for Onsager 2D to 3D, someone may be able to use the techniques that extended John von Neumann 2D to 3D [and more?] structures.
RD MacPherson, DJ Srolovitz, “The von Neumann relation generalized to coarsening of three-dimensional microstructures”. [p1053]
Editor’s Summary
There was also a concise synopsis of this paper with one figure [1] on a Princeton IAS site “News Briefs”: ‘Materials Science Problem Solved with Geometry’;
but now all I can find is this pdf summary:
NY Times Summary
2 - Is this thermomagnetism related in some manner to electromagnetism?
IEEE uses CP Steinmetz "phasor" equations based upon Grassmann Algebra, that were shown in 1945 by Gabriel Kron to be related to Shroedinger's Equation.
PhysRev v67n1-2 Jan 1945
A lot is known about 3D Ising approximately - computer simulations, high temp expansions, low temp expansions, expansions in the number of dimensions (4-epsilon or 2+epsilon), etc. As for exact
results in 3D: well, if you find some, you will be rich and famous.
Hi Thomas Larsson,
Could you provide a web reference discussing the 3D Ising?
I would be interested in comparing this to the work of RD MacPherson and DJ Srolovitz.
I sincerely doubt that I "will be rich and famous", but I am curious.
Hi Arun, Paul,
I am still not sure if I have understood the idea correctly.
As I see it, this is not related to latent heat - many magnetic transitions are second order anyhow, or have only a small latent heat. And, I mean, high latent heat does not imply that you can
extract large amounts of work from the cycle?
What I am more puzzled about is this question if Paul's apparatus needs some rectifier to extract work - in this case, the machine does not work reversibly, and Carnot's argument doesn't apply
Best, Stefan
Hi Changcho,
If I recall correctly from stat mech., originally Ising created his model assuming only 1 dimension, which has an exact solution.
yes, you recall nearly correctly ;-) - the model had been formulated by Lenz, who was Ising's PhD thesis advisor. In his thesis, Ising calculated the partition function for the restriction to 1D
(that's a quite easy application of what we now call the transfer matrix, if I remember correctly) and could show that the 1D model has no spontaneous magnetisation at non-zero temperature.
It seems that the name "Ising model" was coined by Peierls, when he described his arguments with domain walls to show that the 2D model has spontaneous magnetisation. The 2D transition
temperature was then calculated by Kramers and Wannier using their famous duality relation between the high-T/low-T expansions. And the exact solution in 2D was then obtained by Onsager...
Best, Stefan
Hi Doug,
thank you for the reference - I am not so familiar with this kind of stuff, it may be relevant to the Ising model, but I do not see it immediately.
But I am quite sure that you will be famous if you find an exact solution to the 3D Ising model ;-), at least among the physics geeks. I mean, exact solution says that you can write down a
formula for the partition function, the correlation functions, the critical exponents, the magnetisation similar to the formula by Yang in the case of 2D, and so on. Hundreds of brilliant
physicists have tried hard to find such an exact solution, so far without success. There is this story of a student at Caltech who was looking for a PhD project or so, and he asked Feynman, who pointed him to the Hamiltonian of the 3D Ising model in a corner of his blackboard. The student went on to the next office and asked Gell-Mann, who suggested... the 3D Ising model.
I'm not sure, but such a solution might be important way beyond the Ising model, because the technique may work for other problems as well.
As for all the other approximative techniques on the market, unfortunately I don't know of any recent review article. Maybe someone of our readers knows one?
But you can get a quite good impression about what it is going on by checking out the titles of the results of an arXiv search for "3D Ising".
Best, Stefan
Dear Stefan,
The paradox is that around a phase transition one can build an almost isothermal engine. However, per cycle, this engine seems to do a fixed amount of work (even if small), even as the source and
sink temperatures become arbitrarily close.
Paul's device needs no diode or ratchet. For example, it can be used for electrolysis of water. Yes, the H2 and O2 will be mixed; but you can see that you can accumulate a lot of free energy - a
nice explosive mixture :)
Dear Arun,
thank you for your comment! I'll have to give a second thought to Carnot cycles operating around a phase transition with latent heat... it's probably a typical textbook problem...
However, I am still confused about Paul's machine. As said before, if you look, for example, at the current induced in the coil when spontaneous magnetisation sets in when crossing the Curie point from higher to lower temperature, the direction of this current will fluctuate at random from cycle to cycle, since magnetisation is spontaneous. Thus, you cannot accumulate work, for example by electrolysis. Or am I missing something?
One might add a small external magnetic field which breaks symmetry explicitly, but I am not so sure if the usual Carnot argument then still is valid - solid-state physicists sometimes call such
fields time-symmetry breaking, for a reason. So this might be tricky? OK, I should take a break and look in some decent textbook and think about it ;-)
Best, Stefan.
Hi Stefan --
I'm glad that you've found the heat engine question interesting. I have to say that I don't think you'll resolve anything focussing on the question of whether the current/voltage needs to be
rectified. Even if you assume a polarized DC load that can only do "useful" electrical work with the right polarity, this can easily be accommodated: (1) don't connect the coil to the load or
otherwise try to extract energy when the field is being formed, ie on the cooling leg; (2) once the field is established, do a small measurement to determine its direction [I think you can do this
at very little energy cost]; then (3) Install a coil in the orientation you want to get the EMF polarity you want, and then collapse the field on the heating leg. (I think you can install a coil,
or change one's direction, without an energy cost as long as it's not part of a closed circuit; but if you don't buy that, then just imagine surrounding the piece with a number of fixed coils in
different orientations, and then just connect the load to the one that will give you the right polarity on any given heating leg.)
I'm not 100% sure I agree with Arun's generalization to many types of phase transitions, but that's just because I'm slow and have to think about it. Of course, the whole subject of Carnot
efficiencies was derived thinking about steam engines, which certainly do work around a phase transition! So I would guess that the analogous and appropriate logic is in the classical treatment.
Lastly, I'll mention that there are such things as magnetic refrigerators; are they in some way related to this proposed heat engine?
Like Stefan, I don't have any references handy, but one place to start at is Wikipedia.
One can only compute the partition function exactly in 1D and 2D, but some exact information is known in higher dimensions. In particular, the critical exponents when d >= 4 are correctly given
by mean field theory. This follows from the Wilson-Fisher renormalization group, which gave Ken Wilson the Nobel prize in 1982. d=4 is the critical dimension because the Ising model at
criticality is described by phi^4 theory (a scalar field with a quartic self-interaction), and this theory is renormalizable exactly in 4D.
On pronunciation: Ernst Ising was a German Jew who emigrated to the US after WWII, so he probably pronounced his name the American way during most of his long life (almost 100). He left physics
directly after his PhD, disappointed that he failed his thesis project, which was to solve this simple model in 3D.
Some refences on Wikipedia:
Ken Wilson
Mike Fisher
Leo Kadanoff
phase transitions
renormalization group
Hi Thomas,
thank you for collecting the links! It's impressive what is there already... and
Hi Paul,
about magnetic cooling, I have just found the Wikipedia entry on Magnetic refrigeration... now, that's some stuff to start with... I'll have to digest all that a bit...
Best, Stefan
I have managed to confuse myself thoroughly. Suppose we try drawing the cycle of Paul's engine on a temperature-entropy plot, what would it look like?
The fixed work per cycle means even as T(source) tends to T(sink) the area in this cycle remains fixed.
See Wiki diagram
Hi Stefan and Thomas Larsson,
1 - Thanks for the 70 arXiv references to the “3D Ising”
I have only been able to read the first 7, but hope to read the rest over the next year.
I have noticed two ideas:
a - Ref_7 S. Perez Gaviro, et al, ’Study of the phase transition in the 3d Ising spin glass from out of equilibrium numerical simulations’ suggests game theory to me. I think this theme is
present in other papers as well.
b - Ref_4 D. Ivaneyko, et al, ‘On the universality class of the 3d Ising model with long-range-correlated disorder’ may be a link to the paper I referenced above: “non-magnetic impurities”,
“linear dislocations, planar grain boundaries, three dimensional cavities”.
2 - The wiki ’Phase Transition’ page is very interesting.
a - I wonder if plasma from a star/sun can phase directly into a solid [from the diagram]?
b - Pressure is also important.
c- Phase changes seem to be consistent with both concepts of continuous transformation and coupling.
3 - MRI and NMR imaging in medicine can form 3D images from “magnetic resonance” which would seem only a step or two away from 3D Ising?
a - A Critical History of Computer Graphics and Animation Section 18:
Scientific Visualization
OSU example.
b - Bioinformatics and Brain Imaging: Recent Advances and Neuroscience Applications
UCLA example.
Hi doug,
I have only been able to read the first 7, but hope to read the rest over the next year.
Sorry, the suggestion was serious, but I didn't want you to read all the papers, but maybe just have a look at the abstracts to get an impression about the issues that are looked at in connection
with the 3D Ising model. I don't know anything about your background, so please don't mind if this reference to the arXiv was not useful to you... As a practical remark, since I am not sure about the reproducibility of hit lists for arXiv searches, it's a good idea in general if you give the arXiv numbers, say "cond-mat/0234512", for the references you quote.
MRI and NMR imaging in medicine can form 3D images from “magnetic resonance” which would seem only a step or two away from 3D Ising?
Sincerely, I do not see any connection. The magnetic moments of say hydrogen that are used to produce these nice images in MRI are not coupled to each other - as far as I know, they are
completely independent. Moreover, they do not form a lattice, but a glass at best.
Best, Stefan
An Ising spin glass is related to, but not the same as, the Ising model. There is a huge literature on spin glasses, but the little I once knew about it I have long forgotten.
But if you seriously want to study phase transition theory, the following Phys. Rep. probably contains everything you want to know (and a lot more):
Title: Critical Phenomena and Renormalization-Group Theory
Authors: Andrea Pelissetto, Ettore Vicari
Hi Thomas,
thanks for pointing out this review paper, I didn't know that. It's from after I've switched fields from magnetic phase transitions to heavy ions ;-)
Hi Stefan and Thomas,
Thanks for the new links [Pelissetto et al] and updates RE Ising glass, lattice and models.
This Ising concept and phase transitions are really interesting.
They may be a type of information transition, rather than loss of magnetic information.
There may be some link to the Bekenstein and Hawking discussions of black hole information ~ a phase transition of information transformation?
The 70 paper arXiv reference was helpful since I am uneducated with respect to Ising. I do read the abstracts, but gain more from scanning the paper especially if there are accompanying diagrams.
The arXiv numbers for:
a - S Perez Gaviro, et al, is arXiv:cond-mat/0603266
b - D Ivaneyko, et al, is arXiv:cond-mat/0611568
I have only a math BA with an MD [sort of like having an MS in all Nobel categories if they were preceded by bio-]. I have a keen interest in "bio-mathematics" which seems to be multidisciplinary with
significant contributions from engineering, physics, mathematics, chemistry and economics. Once upon a time I was a naval gunnery officer with basic knowledge of ballistics with related
mechanical engineering and fire control radar with related electrical engineering. | {"url":"http://backreaction.blogspot.com/2007/12/magnetism-and-ising-model.html","timestamp":"2014-04-21T04:37:46Z","content_type":null,"content_length":"162759","record_id":"<urn:uuid:5d11603e-9c8d-4870-8b11-68905e4ea712>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00070-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lecture Notes on General Relativity
(gravitational waves disturbing a black hole, from NCSA)
This set of lecture notes on general relativity has been expanded into a textbook, Spacetime and Geometry: An Introduction to General Relativity, available for purchase online or at finer bookstores everywhere. About 50% of the book is completely new; I've also polished and improved many of the explanations, and made the organization more flexible and user-friendly. The notes as they are will always be here for free.
These lecture notes are a lightly edited version of the ones I handed out while teaching Physics 8.962, the graduate course in General Relativity at MIT, during Spring 1996. Each of the chapters is
available here as pdf. Constructive comments and general flattery may be sent to me via the address below. The notes as a whole are available as gr-qc/9712019, and in html from "Level 5" at Caltech.
What is even more amazing, the notes have been translated into French by Jacques Fric. I don't speak French, but this translation should be good.
Dates refer to the last nontrivial modification of the corresponding file (fixing typos doesn't count). Note that, unlike the book, no real effort has been made to fix errata in these notes, so be
sure to check your equations.
In a hurry? Can't be bothered to slog through lovingly detailed descriptions of subtle features of curved spacetime? Try the No-Nonsense Introduction to General Relativity, a 24-page condensation of
the full-blown lecture notes (pdf).
While you are here check out the Spacetime and Geometry bibliography page -- an annotated bibilography of technical and popular books, many available for purchase online.
• 1. Special Relativity and Flat Spacetime
(22 Nov 1997; 37 pages)
the spacetime interval -- the metric -- Lorentz transformations -- spacetime diagrams -- vectors -- the tangent space -- dual vectors -- tensors -- tensor products -- the Levi-Civita tensor --
index manipulation -- electromagnetism -- differential forms -- Hodge duality -- worldlines -- proper time -- energy-momentum vector -- energy-momentum tensor -- perfect fluids -- energy-momentum
• 2. Manifolds
(22 Nov 1997; 24 pages)
examples -- non-examples -- maps -- continuity -- the chain rule -- open sets -- charts and atlases -- manifolds -- examples of charts -- differentiation -- vectors as derivatives -- coordinate
bases -- the tensor transformation law -- partial derivatives are not tensors -- the metric again -- canonical form of the metric -- Riemann normal coordinates -- tensor densities -- volume forms
and integration
• 3. Curvature
(23 Nov 1997; 42 pages)
covariant derivatives and connections -- connection coefficients -- transformation properties -- the Christoffel connection -- structures on manifolds -- parallel transport -- the parallel
propagator -- geodesics -- affine parameters -- the exponential map -- the Riemann curvature tensor -- symmetries of the Riemann tensor -- the Bianchi identity -- Ricci and Einstein tensors --
Weyl tensor -- simple examples -- geodesic deviation -- tetrads and non-coordinate bases -- the spin connection -- Maurer-Cartan structure equations -- fiber bundles and gauge transformations
• 4. Gravitation
(25 Nov 1997; 32 pages)
the Principle of Equivalence -- gravitational redshift -- gravitation as spacetime curvature -- the Newtonian limit -- physics in curved spacetime -- Einstein's equations -- the Hilbert action --
the energy-momentum tensor again -- the Weak Energy Condition -- alternative theories -- the initial value problem -- gauge invariance and harmonic gauge -- domains of dependence -- causality
• 5. More Geometry
(26 Nov 1997; 13 pages)
pullbacks and pushforwards -- diffeomorphisms -- integral curves -- Lie derivatives -- the energy-momentum tensor one more time -- isometries and Killing vectors
• 6. Weak Fields and Gravitational Radiation
(26 Nov 1997; 22 pages)
the weak-field limit defined -- gauge transformations -- linearized Einstein equations -- gravitational plane waves -- transverse traceless gauge -- polarizations -- gravitational radiation by
sources -- energy loss
• 7. The Schwarzschild Solution and Black Holes
(29 Nov 1997; 53 pages)
spherical symmetry -- the Schwarzschild metric -- Birkhoff's theorem -- geodesics of Schwarzschild -- Newtonian vs. relativistic orbits -- perihelion precession -- the event horizon -- black
holes -- Kruskal coordinates -- formation of black holes -- Penrose diagrams -- conformal infinity -- no hair -- charged black holes -- cosmic censorship -- extremal black holes -- rotating black
holes -- Killing tensors -- the Penrose process -- irreducible mass -- black hole thermodynamics
• 8. Cosmology
(1 Dec 1997; 15 pages)
homogeneity and isotropy -- the Robertson-Walker metric -- forms of energy-momentum -- Friedmann equations -- cosmological parameters -- evolution of the scale factor -- redshift -- Hubble's law
Related sets of notes or tutorials
Links to GR resources
Literature needed on recursive (union) types
nico@dutiag.twi.tudelft.nl (Nico Plat)
Tue, 15 Sep 1992 15:02:35 GMT
Newsgroups: comp.compilers
From: nico@dutiag.twi.tudelft.nl (Nico Plat)
Organization: Compilers Central
Date: Tue, 15 Sep 1992 15:02:35 GMT
Keywords: types, design, question
Dear all,
I have the following problem:
Suppose I have a language with union types. A union type is a type
denoting those values that are a member of at least one of its
component types. So, suppose I have a type definition T = nat | bool
(The '|' symbol is the type constructor for union types), then 1, 2, 3,
4, etc. would be values of T, but <false> and <true> as well.
The problem arises when trying to determine subtype relations between
recursive union types. I want to define a relationship
`IsSubType (T1, T2)', which yields true if (all the components of) T1
are subtypes of (at least one of) the components of T2.
Consider the two type definitions
T1 = nat | nat X T1 { 'X' denotes a product type }
T2 = real | real X T2
IsSubType (nat, real) is defined to be true, and therefore it is easy
to see that IsSubType (T1, T2) should be true as well. The obvious
implementation does not work, however, because a recursive call to
IsSubType (T1, T2) is made when examing the second component, and then
we have infinite recursion. For this particular class of example a fix
could be found, but they can be made much more complex.
Does anyone know of literature dealing with this kind of problem?
Thanks for your time,
- Nico Plat -
- Delft University of Technology -
- Fac. of Techn. Math. and Informatics -
- P.O. Box 356, NL-2600 AJ Delft, The Netherlands -
- Phone +31-15784433 Fax +31-15787141 E-mail nico@dutiaa.twi.tudelft.nl -
Is Michelle Obama younger than Barack Obama?
You asked:
Is Michelle Obama younger than Barack Obama?
Barack Obama
Barack Hussein Obama II (born August 4, 1961), the 44th and current President of the United States
Michelle Obama
Michelle LaVaughn Robinson Obama (born January 17, 1964), the wife of the forty-fourth President of the United States, Barack Obama, and the first First Lady of the United States of African-American heritage
the encyclopedic entry of Planar_algebra
Planar algebras first appeared in the work of Vaughan Jones on the standard invariant of a II$_1$ subfactor. They also provide an appropriate algebraic framework for many knot invariants (in particular the Jones polynomial), and have been used in describing the properties of Khovanov homology with respect to tangle composition.
Given a label set $I$ with an involution, and a fixed set of words $W$ in the elements of the label set, a planar algebra consists of a collection of modules $V_\omega$, one for each element $\omega$ in $W$, together with an action of the operad of tangles labelled by $I$.
In more detail, given a list of words $w_1, w_2, \ldots, w_k$, and a single word $w_0$, we define a tangle from $(w_1, w_2, \dots, w_k)$ to $w_0$ to be a disk $D$ in the plane, with points around its circumference labelled in order by the letters of $w_0$, with $k$ internal disks removed, indexed $1$ through $k$, with the $i$-th internal disk having points around its circumference labelled in order by the letters of $w_i$, and finally, with a collection of oriented non-intersecting curves lying in the remaining portion of the disk, with each component being labelled by an element of the label set, such that the set of end points of these curves coincides exactly with the labelled points on the internal and external circumferences, and at the initial points of the curves, the label on the curve coincides with the label on the circumference, while at the final points, the label on the curve coincides with the involute of the label on the circumference.
Such tangles can be composed. With this notion of composition, the collection of tangles with labels in $I$ and boundaries labelled by $W$ forms an operad.
This operad acts on the modules $V_\omega$ as follows. For each tangle $T$ from $(w_1, w_2, \ldots, w_k)$ to $w_0$, we need a module homomorphism $Z_T : V_{w_1} \otimes V_{w_2} \otimes \cdots \otimes V_{w_k} \longrightarrow V_{w_0}$. Further, for a composition of tangles, we must get the corresponding composition of module homomorphisms.
Temperley-Lieb algebras can be retrofitted as a planar algebra.
Fix an element $\delta \in R$ in the ground ring. Take a one element label set, and allow words of even length. (Thus the words correspond exactly to nonnegative even integers.) For each even integer $2n$, let $V_{2n}$ be the free module generated by (isotopy classes of) diagrams consisting of $n$ non-intersecting arcs drawn in a disk, with the endpoints of the arcs lying on the boundary of the disk. The action of tangles is simply by gluing the appropriate disks into the tangle, removing any closed arcs and replacing each with a factor of $\delta$.
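As a rough illustration of "gluing diagrams and replacing closed arcs by a factor of $\delta$", here is a small Python sketch (not from the article; the encoding is my own): a Temperley-Lieb diagram on $n$ strands is stored as a fixed-point-free involution on its $2n$ boundary points, with points $0..n-1$ on the top edge and $n..2n-1$ on the bottom.

```python
def compose(a, b, n):
    """Glue Temperley-Lieb diagram `a` on top of diagram `b` (n strands).
    A diagram is a dict pairing boundary points: 0..n-1 is the top row,
    n..2n-1 the bottom row.  a's bottom row is identified with b's top
    row; returns (new_diagram, loops), where `loops` counts the closed
    arcs produced -- each would contribute a factor of delta."""
    result = {}
    seen = set()                      # middle points already on some strand

    def trace(start, from_top):
        # walk from an outer boundary point to the other end of its strand
        if from_top:
            p = a[start]
            while p >= n:             # hit a's bottom row: cross the middle
                seen.add(p - n)
                q = b[p - n]
                if q >= n:            # emerged on b's bottom row
                    return q
                seen.add(q)
                p = a[n + q]          # back up into a
            return p                  # emerged on a's top row
        q = b[start]
        while q < n:                  # hit b's top row: cross the middle
            seen.add(q)
            p = a[n + q]
            if p < n:                 # emerged on a's top row
                return p
            seen.add(p - n)
            q = b[p - n]              # back down into b
        return q                      # emerged on b's bottom row

    for t in range(2 * n):
        if t not in result:
            other = trace(t, t < n)
            result[t], result[other] = other, t

    loops = 0                         # cycles trapped in the middle row
    for m in range(n):
        if m not in seen:
            loops += 1
            cur = m
            while True:
                seen.add(cur)
                cur = a[n + cur] - n  # arc along a's bottom row
                seen.add(cur)
                cur = b[cur]          # arc along b's top row
                if cur == m:
                    break
    return result, loops

# e is the cup-cap generator of TL_2
e = {0: 1, 1: 0, 2: 3, 3: 2}
print(compose(e, e, 2))               # ({0: 1, 1: 0, 2: 3, 3: 2}, 1)
```

Composing the cup-cap generator with itself reproduces the familiar relation $e \cdot e = \delta e$: the diagram comes back unchanged and one closed loop is reported.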
We can generalise this to allow more complicated label and word sets (including, for example, the planar algebra version of the Fuss-Catalan algebras). For each label $i \in I$, fix $\delta_i \in R$ in the ground ring. For a word $\omega \in W$, the module $V_\omega$ is generated by (again, isotopy classes of) diagrams consisting of non-intersecting arcs drawn in a disk, labelled by elements of $I$, with endpoints on the boundary of the disk, such that the induced labels on these points, when read in order, give $\omega$. The action of tangles is defined as before, with closed arcs labelled by $i$ being replaced by a factor of $\delta_i$.
The (oriented) tangle planar algebra is a planar algebra with a two element label set, the nontrivial involution on it, and balanced even-length words. It is generated, as a planar algebra, by the diagrams of the positive and negative crossings in knot theory, living in ... Knot polynomials satisfying skein relations can be succinctly described as quotient maps from this planar algebra, which are rank 1 on ...
The planar algebras used in describing $\mathrm{II}_1$ subfactors have a two element label set, with the nontrivial involution, and the allowed words are the finite-length alternating words in these two elements. The labels on the tangles are typically illustrated by shading alternately the regions between the strands; the two types of strands are then distinguished by having a shaded region either on their right or on their left.
A Dozen, A Gross, And A Score,
Plus Three Times The Square Root Of Four,
Divided By Seven,
Plus Five Times Eleven,
Equals Nine Squared Plus Zero, No More.
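The limerick encodes an arithmetic identity; a quick check in Python (not part of the original page):

```python
import math

# (12 + 144 + 20 + 3*sqrt(4)) / 7 + 5*11  should equal  9^2 + 0
lhs = (12 + 144 + 20 + 3 * math.sqrt(4)) / 7 + 5 * 11
rhs = 9 ** 2 + 0
print(lhs, rhs)  # 81.0 81
```

The numerator is 182, which divides by 7 to give 26; adding 55 yields 81 on both sides.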
• (12 + 144 + 20 + 3·√4) / 7 + 5(11) = 9² + 0. A dozen, a gross and a score, Plus three times the square root of four, Divided by seven, Plus five times eleven, Equals nine squared plus zero, no more!...
• (12 + 144 + 20 + 3 * 4^(1/2)) / 7 + (5 * 11) = 9^2 + 0. A dozen, a gross and a score, plus three times the square root of four, divided by seven, plus five times eleven, equals nine squared and not a bit more....
• Twice five syllables Plus seven can't say much but That's Haiku for you.
• Peter's Theorem Incompetence plus incompetence equals incompetence.
• When speculation has done its worst, two plus two still equals four. -- Samuel Johnson (1709-1784)
• times-or-divided-by quant. [by analogy with `plus-or-minus'] Term occasionally used when describing the uncertainty associated with a scheduling estimate, for either humorous or brutally honest effect. For a software project, the scheduling uncertainty factor is usually at least 2....
• Those who think that two plus two equals four have not recently been to a real lumberyard and tried to purchase a two by four. | {"url":"http://www.anvari.org/fortune/Miscellaneous_Collections/350541_a-dozen-a-gross-and-a-score-plus-three-times-the-square-root-of-four-divided-by-seven-plus-five-times-eleven-equals-nine-squared-plus-zero-no-more.html","timestamp":"2014-04-20T18:32:07Z","content_type":null,"content_length":"13492","record_id":"<urn:uuid:1db6e32f-bbef-4c28-9737-1a6acf9415df>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00492-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fort Pierce Math Tutor
Find a Fort Pierce Math Tutor
...I am currently employed as a high school English teacher. You will find me to be a creative, organized, analytical educator, trainer, and curriculum developer with a master's in educational administration. I am eager to contribute a proven ability to raise student achievement to your student and...
51 Subjects: including SAT math, English, ACT Math, algebra 1
...I'm not going to lie to you. It still won't be easy. When you learn a foreign language, you have to practice it to become fluent in it.
8 Subjects: including algebra 1, prealgebra, writing, vocabulary
...I passed the Subject Area Examination SAE for Social Sciences grades 6-12 in 2008 after five days of study. I passed the SAE for Math grades 9-12 in 2012 after three days of study. I tutor
full-time; one-on-one tutoring and facilitating workshops are my niche.
32 Subjects: including algebra 2, ACT Math, algebra 1, geometry
My name is Sean and I have been a teacher for nine years here, in the state of Florida. I have taught 2nd, 3rd, and 5th grade. I earned a Bachelor’s degree in Elementary Education with a math
extension from Buffalo State College in 2005.
13 Subjects: including prealgebra, reading, algebra 1, algebra 2
My name is Sean, I am 18 years old, and an Eagle Scout! I believe learning should be an enjoyable process. I have held to that concept for about 2 years now while tutoring with great success.
29 Subjects: including calculus, statistics, discrete math, differential equations | {"url":"http://www.purplemath.com/fort_pierce_fl_math_tutors.php","timestamp":"2014-04-17T19:53:37Z","content_type":null,"content_length":"23571","record_id":"<urn:uuid:3fe133bc-ae9b-435f-bd09-cadce3515989>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00160-ip-10-147-4-33.ec2.internal.warc.gz"} |
Iterated Block-Pulse Method for Solving Volterra Integral Equations
[1] Jiang, ZH., and Schaufelberger, W., 1992, Block Pulse Functions and Their Applications in Control Systems, Berlin, Springer-Verlag.
[2] Beauchamp, KG., 1984, Applications of Walsh and Related Functions with an Introduction to Sequency theory, Academic Press, London.
[3] Deb, A., Sarkar, G., and Sen, SK., 1994, Block pulse functions, the most fundamental of all piecewise constant basis functions, Int J Syst Sci., 25(2), 351-363.
[4] Rao, GP., 1983, Piecewise constant orthogonal functions and their application to systems and control, Springer-Verlag, New York.
[5] Babolian, E., and Masouri, Z., 2008, Direct method to solve Volterra integral equation of the first kind using operational matrix with block-pulse functions, J. Comput. Appl. Math., 220, 51-57.
[6] Maleknejad, K., and Mahmoudi, Y., 2004, Numerical solution of linear Fredholm integral equation by using hybrid Taylor and Block-Pulse functions, Appl. Math. Comput., 149, 799-806.
[7] Maleknejad, K., and Mahdiani, K., 2011, Solving nonlinear mixed Volterra-Fredholm integral equations with two-dimensional block-pulse functions using direct method, Commun Nonlinear Sci Numer Simulat, 16, 3512-3519.
[8] Maleknejad, K., Sohrabi, S., and Baranji, B., 2010, Application of 2D-BPFs to nonlinear integral equations, Commun Nonlinear Sci Numer Simulat, 15(3), 528-535.
[9] Maleknejad, K., Shahrezaee, M., and Khatami, H., 2005, Numerical solution of integral equations system of the second kind by Block-Pulse functions, Appl. Math. Comp., 166, 15-24.
[10] Maleknejad, K., and Tavassoli Kajani, M., 2003, Solving second kind integral equations by Galerkin methods with hybrid Legendre and Block-Pulse functions, Appl. Math. Comp., 145, 623-629.
[11] Atkinson, K., 1997, The Numerical Solution of Integral Equations of the Second Kind, Cambridge University Press.
[12] Brunner, H., 2004, Collocation Methods for Volterra Integral and Related Functional Equations, Cambridge University Press.
[13] Sloan, I., 1976, Improvement by iteration for compact operator equations, Math Comp., 30, 758-764.
[14] Blyth, W. F., May, R. L., and Widyaningsih, P., 2004, Volterra integral equations solved in Fredholm form using Walsh functions, Anziam J., 45(E), C269-C282.
[15] Krasnov, M., Kiselev, A., and Makarenko, G., 1971, Problems And Exercises in Integral Equations, Mir Publishers, Moscow. | {"url":"http://article.sapub.org/10.5923.j.am.20120201.03.html","timestamp":"2014-04-18T15:39:07Z","content_type":null,"content_length":"33763","record_id":"<urn:uuid:5a64ea05-cee8-4ad8-bb61-a61359cbd425>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
Polynomial Function
Definition of a Polynomial Function
Back Polynomial Functions Function Institute Math Contents Index Home
Here a few examples of polynomial functions:
f(x) = 4x^3 + 8x^2 + 2x + 3
g(x) = 2.4x^5 + 3.2x^2 + 7
h(x) = 3x^2
i(x) = 22.6
Polynomial functions are functions that have this form:
f(x) = a[n]x^n + a[n-1]x^(n-1) + ... + a[1]x + a[0]
The value of n must be a non-negative integer. That is, it must be a whole number; it is equal to zero or a positive integer.
The coefficients, as they are called, are a[n], a[n-1], ..., a[1], a[0]. These are real numbers.
The degree of the polynomial function is the highest value for n where a[n] is not equal to 0.
So, the degree of
g(x) = 2.4x^5 + 3.2x^2 + 7
is 5.
Notice that the second to the last term in this form actually has x raised to an exponent of 1, as in:
f(x) = a[n]x^n + a[n-1]x^(n-1) + ... + a[1]x^1 + a[0]
Of course, usually we do not show exponents of 1. So, we write a simple x instead of x^1.
Notice that the last term in this form actually has x raised to an exponent of 0, as in:
f(x) = a[n]x^n + a[n-1]x^(n-1) + ... + a[1]x + a[0]x^0
Of course, x raised to a power of 0 is equal to 1, and we usually do not show multiplications by 1. So, the variable x does not appear in the last term.
So, in its most formal presentation, one could show the form of a polynomial function as:
f(x) = a[n]x^n + a[n-1]x^(n-1) + ... + a[1]x^1 + a[0]x^0
Here are some polynomial functions; notice that the coefficients can be positive or negative real numbers.
f(x) = 2.4x^5 + 1.7x^2 - 5.6x + 8.1
f(x) = 4x^3 + 5.6x
f(x) = 3.7x^3 - 9.2x^2 + 0.1x - 5.2
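To evaluate such a function at a point without computing each power separately, Horner's rule rewrites a[n]x^n + ... + a[0] as nested multiply-adds. A small illustrative Python helper (not part of the original lesson):

```python
def poly_eval(coeffs, x):
    """Evaluate a_n*x^n + ... + a_1*x + a_0 with Horner's rule.
    coeffs lists the coefficients from a_n down to a_0."""
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

# f(x) = 4x^3 + 8x^2 + 2x + 3, evaluated at x = 2
print(poly_eval([4, 8, 2, 3], 2))      # 71
# i(x) = 22.6 is the constant polynomial
print(poly_eval([22.6], 5))            # 22.6
```

Note that a constant function like i(x) = 22.6 is just the degree-0 case with a single coefficient.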
DERIVATIVES OF TRIGO IDENTITIES no.1)Y=cos4 t- sin4 t=-2sin 2t INEED SOLUTION PLS....T_T NID BADLY!!!!
It makes no sense as it it presented. Trigonometric Identities are statements and the term "derivative" has no meaning for such a statement. Can you provide the exact wording of the problem?
differentiate Y=cos4 t -sin4 t
Is this cos(4t) or \[\cos^4t\]
We're looking for \(\dfrac{dY}{dt}\)? Can you find \(\dfrac{d}{dt}\cos^{4}(t)\)
What do you get for that?
ist assingment dude
Conversation Overlap Misunderstanding... What do you get for \(\dfrac{d}{dt}\cos^{4}(t)\)
its our assignment in differential calculus...
Jay you didn't answer the question kirby asked. Is it suppose to be \(\cos(4t)\) or \(\cos^4(t)\)?
the second one
i dont know how to type that
Just think of it as the chain rule. You can move the exponent if it confuses you: \[\frac{d}{dt}\cos^4t=\frac{d}{dt}(\cos t)^4 = 4(\cos t)^3(-\sin t)\]
-sin t comes from the fact that it is the derivative of cos t by using the chain rule.
the book has an answer of -2sin 2 t
Just use the same logic on \[\frac{d}{dt}\sin^4t=4\sin^3t(\cos t)\]
cos4t-sin4t= kirb can u give me the whole solution soo i can studied it... please
so now: \[-4\cos^3t(\sin t) - 4\sin^3t(\cos t) = -4\cos t\sin t(\cos^2t+\sin^2t)=-4\cos t\sin t(1) \]
Now use the double-angle formula
\[Since : \sin(2t)=2\sin t \cos t\]
\[-4\cos t \sin t = -2(2\cos t \sin t) = -2\sin(2t)\]
THANK YOU KIRB!!!
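The closed form -2 sin(2t) can be sanity-checked numerically; a short sketch (not part of the original thread) comparing a central-difference derivative of Y = cos^4 t - sin^4 t against it:

```python
import math

def Y(t):
    return math.cos(t) ** 4 - math.sin(t) ** 4

def dY(t, h=1e-6):
    # central-difference approximation of Y'(t)
    return (Y(t + h) - Y(t - h)) / (2 * h)

for t in (0.0, 0.5, 1.3, 2.7):
    print(f"t={t}: numeric {dY(t):+.6f}  closed form {-2 * math.sin(2 * t):+.6f}")
```

The two columns agree to well within the finite-difference error at every sample point.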
Since we're just doing your homework, I'd do it this way. \(\cos^{4}(t) - \sin^{4}(t) = [\cos^{2}(t) - \sin^{2}(t)][\cos^{2}(t) + \sin^{2}(t)] = \cos(2x)\) It's a lot easier after that.
Sorry, not sure why I wrote 2x on the end, there. Should be 2t. If you are going to make me do ALL the work, you'll have to show me how that last step happened. There's a whole lot of stuff in
there that magically turned into cos(2t).
The method by tkhunny is also excellent :) It is shorter but I usually just do it "straight-forward" unless I'm stuck and tkhunny's method is a good trick to make the derivative a lot easier
I'm usually the brute force guy. Once in a while I see one!
Hehe good one ;)
how about differentiate: y=sec²x-tan²x
Go right ahead. Let's see your first attempt. Hint: It is amazingly trivial! Remember your trigonometry. This is not much of a calculus problem.
Grass: the "I'll just plant a little grass here" language (ちょっと草植えときますね型言語)
_, ._
( ・ω・) んも〜
|:::::::::\, ', ´
、、、、し 、、、(((.@)wvwwWWwvwwWwwvwwwwWWWwwWw
Development Environment
• grass.el for Emacsen, including an interpreter written in Emacs Lisp, by irie
If you write another implementation of Grass, please let me know.
Grass is a functional grass-planting programming language. Syntax and semantics of Grass are defined based on (A-normalized, lambda lifted, and De Bruijn indexed) untyped lambda calculus and SECD
machine [Landin 1964] respectively so that Grass is Turing-complete.
Grass was first proposed by Katsuhiro Ueno at the 39th IPSJ Jouho-Kagaku Wakate no Kai (symposium for young researchers of information science) in 2006 to introduce how to define a programming language in a formal way.
Grass is a kind of esoteric programming language typified by BrainF*ck, but Grass has unique characteristics which any other esoteric ones don't have. For example:
1. Grass is based on lambda calculus, not Turing machine.
2. Grass is slightly easier to read than functional programming languages designed based on combinatory logic.
3. Grass has formal definition.
4. Grass is easy to slip into ASCII art.
Sample programs
Print "w" to standard output.
Calculate 1 + 1 and print the result as that many "w"s. This example consists of 3 functions. The first is 1 represented as a lambda term (a Church integer). The second is the addition operator for Church integers. The last is the main function. This program is equivalent to (λi.(λmnfx.m f (n f x)) i i Out w) (λfx.f x).
wwwwwwwwwwwwwwwww wwwwwwwwWWwwwwwww
wwwwwwwwwwwwwwwww は wwwwwWWWWWWWWWW
WWWWWwWwwwWWWW わ い WWWWWWWwwwWwwWW
WWWWWWWWWWwwww ろ は wWwwwwwwwWWWWWWW
WWWWWWWWWWwwww す い wwwWwwWWWWWWWWW
WWWWWWWWWwwwww わ wwwwWwwWWWWWWWW
WWWWWWWWWWWWW ろ WWWWWWWWWwwwwww
wwwwwWwwWWWWWWW す WWWWWWWWWwwwwww
wwwwwwwWwwwwwwwww wwwwwwWWWWWWWWW
Intuitive explanation
A Grass program is written by using only "W", "w" and "v". Any other characters are ignored as comment. "W" and "v" appearing before first "w" are also ignored.
A Grass program consists of a list of top-level instructions separated by "v". The top-level instruction is either a function definition or a function application list. Every function definition
starts with "w", and every function application list starts with "W".
Program :
wwwwwWW ... wwWw v wwWWwW ... wWw v WWWwwwWw ... wwWWw v wW ...
<---Function---> <--Function--> <--Applications-->
A function application list is a list of pairs of a "W" sequence and a "w" sequence. Every such pair denotes one function application, where the number of "W"s is the index of the function and the number of "w"s is the index of the argument.
Applications :
WWWWWwwwwww WWWWWWwwwwwww ... WWWWwwww WW ...
|<-5-><-6-->|<-6--><--7-->| | 4 4 |
| | | | |
| (5, 6) | (6, 7) | | (4, 4) |
| apply | apply | | apply |
A function definition consists of a pair of an arity and a function application list. The length of the first "w" sequence of a function is the function's arity. What follows the arity is the function application list, that is, the function body. Note that only applications may appear in a function body; nested functions are not allowed.
Function :
wwwwww WWWWWwwwwww WWWWWWwwwwwww ... WWWWwwww v
<-6-->|<--- Applications (function body) --->|
| | | | |
6 args| (5, 6) | (6, 7) | | (4, 4) |end of
| apply | apply | | apply |function
The Grass interpreter is a kind of stack-based machine (similar to the SECD machine). It maintains a stack of values (called an environment in the formal definition) during evaluation. Any values calculated by a Grass program are pushed onto this stack. Unlike other popular stack-based machines, in Grass, once a value is pushed onto the stack it is never popped; the stack of the Grass interpreter only grows. Every value in the stack is indexed by a sequentially ascending positive integer from the top of the stack to the bottom. In other words, index N indicates the N-th value from the top of the stack. Each index of a function application pair refers to this index.
Value Stack :
1: value1 top of stack
2: value2 (1, 2) apply
3: value3 ^ ^
... | argument is 2nd value from top.
N: valueN |
-------------- bottom of stack function is 1st value from top.
Evaluation of a Grass program is performed as follows:
1. Initialize the stack with system primitives, such as constant character and output function. See Primitives section for detail.
2. Evaluation starts from the beginning of the program and goes left-to-right.
3. If the interpreter meets a function definition, the interpreter creates a closure of the function together with the whole of the current stack, and pushes it onto the stack.
4. If the interpreter meets a function application, the interpreter takes the function closure and the argument indicated by the application from the stack, and calls the closure with the argument. The function body is evaluated immediately (Grass adopts eager evaluation) and the return value of the function is pushed onto the stack. The evaluation of the function body is performed with the stack saved in the closure; the argument is pushed onto that saved stack before the evaluation.
5. If the interpreter meets the end of a function, the interpreter takes the top of the stack as the return value, and resumes the evaluation of the caller.
6. If the interpreter meets the end of the program, the interpreter takes the top of the stack as a function, and calls it with itself as the argument. The return value of this function is the return value of the entire program.
Formal definition
Only "W", "w", and "v" are used in a Grass program. Fullwidth versions of these characters ("Ｗ" (U+FF37), "ｗ" (U+FF57), and "ｖ" (U+FF56)) are also accepted and are treated as identical to their non-fullwidth versions. Any other characters may appear in a Grass program, but they are ignored.
First character of a Grass program must be "w". Both "W" and "v" appearing before first "w" are ignored like any other characters than "W", "w", and "v".
The syntax of Grass is defined by the following BNF notation. X^+ means X repeated one or more times, and X^* means X repeated zero or more times.
• app ::= W^+ w^+
• abs ::= w^+ app^*
• prog ::= abs | prog v abs | prog v app^*
app denotes function application, and abs denotes function abstraction. Valid Grass program, ranged over by prog, is a list of app and abs separated by "v".
Operational Semantics
To make the definition accurate, first we define abstract syntax of Grass as follows:
• I ::= App(n, n) | Abs(n, C)
• C ::= ε | I :: C
where n is a positive integer, and ε and :: are the usual list constructors denoting nil and cons, respectively. Intuitively, I ranges over the set of instructions and C ranges over the set of instruction lists.
Correspondence between concrete syntax defined in previous section and the abstract syntax is trivially defined as follows:
• app is corresponding to App(m, n), where m is the number of "W" and n is the number of "w".
• abs is corresponding to Abs(n, C), where n is the number of "w" and C is an list of App corresponded to app^*.
• Thus prog is to be in C.
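This concrete-to-abstract correspondence is mechanical enough to sketch in code. A small Python parser (illustrative only; the tuple encoding ('app', m, n) / ('abs', n, body) is my own, not part of the specification):

```python
import re

def parse(src):
    """Parse Grass source into abstract syntax:
    ('app', m, n) for W^m w^n, and ('abs', n, body) for w^n app^*.
    Validation of malformed programs is omitted in this sketch."""
    # fullwidth W/w/v are identified with their ASCII versions
    src = src.replace('Ｗ', 'W').replace('ｗ', 'w').replace('ｖ', 'v')
    src = re.sub(r'[^Wwv]', '', src)      # every other character is comment
    i = src.find('w')                     # 'W' and 'v' before the first 'w'
    src = src[i:] if i >= 0 else ''       # are ignored
    prog = []
    for chunk in filter(None, src.split('v')):
        apps = [('app', len(Ws), len(ws))
                for Ws, ws in re.findall(r'(W+)(w+)', chunk)]
        if chunk[0] == 'w':               # function definition
            arity = len(chunk) - len(chunk.lstrip('w'))
            prog.append(('abs', arity, apps))
        else:                             # top-level application list
            prog.extend(apps)
    return prog

print(parse('wWWw'))     # [('abs', 1, [('app', 2, 1)])]
print(parse('wwvWww'))   # [('abs', 2, []), ('app', 1, 2)]
```

Each v-separated chunk becomes either one Abs (when it starts with "w", with the leading run of "w"s giving the arity) or a run of top-level Apps (when it starts with "W").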
Additionally we define semantic objects (ranged over by f), environments (ranged over by E), and suspended computations (ranged over by D). D plays the same role as the dump of the SECD machine in Landin's paper, so in what follows we call them dumps.
• f ::= (C, E)
• E ::= ε | f :: E
• D ::= ε | (C, E) :: D
The operational semantics of Grass is defined through a set of rules to transform a machine configuration. A machine configuration is a triple (C, E, D) consisting of a code block C, an environment
E, and a dump D. We write
if (C, E, D) is transformed to (C', E', D'). The reflexive transitive closure of → is denoted by →^*.
Here is the set of transformation rules.
• (App(m, n) :: C, E, D) → (C_m, (C_n, E_n) :: E_m, (C, E) :: D), where (C_i, E_i) denotes the i-th entry of E = (C_1, E_1) :: (C_2, E_2) :: … (for i = m, n)
• (Abs(n, C') :: C, E, D) → (C, (C', E) :: E, D) if n = 1
• (Abs(n, C') :: C, E, D) → (C, (Abs(n - 1, C')::ε, E) :: E, D) if n > 1
• (ε, f :: E, (C', E') :: D) → (C', f :: E', D)
The top-level evaluation relation is defined as follows.
• (C[0], E[0], D[0]) →^* (ε, f :: ε, ε)
where E[0] is initial environment defined in Primitives section, C[0] is Grass program intended to be evaluated, and D[0] is the initial dump such that
• D[0] = (App(1, 1)::ε, ε) :: (ε, ε) :: ε
If (C[0], E[0], D[0]) ↛^* (ε, f :: ε, ε), then evaluation is stuck or never terminated.
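The four transformation rules translate almost line-for-line into code. A Python sketch (illustrative; instructions are encoded as tuples ('app', m, n) and ('abs', n, body), a convention of my own, closures are (code, environment) pairs, and list index 0 is the list head):

```python
def step(C, E, D):
    """One transition of the machine configuration (C, E, D)."""
    if C:
        head, rest = C[0], C[1:]
        if head[0] == 'app':                        # rule 1
            _, m, n = head
            Cm, Em = E[m - 1]                       # m-th entry of E
            Cn, En = E[n - 1]                       # n-th entry of E
            return Cm, [(Cn, En)] + Em, [(rest, E)] + D
        _, n, body = head
        if n == 1:                                  # rule 2
            return rest, [(body, E)] + E, D
        return rest, [([('abs', n - 1, body)], E)] + E, D   # rule 3
    f, (C2, E2) = E[0], D[0]                        # rule 4
    return C2, [f] + E2, D[1:]

def run(prog, E0):
    """Iterate `step` from the initial dump D0 until the final
    configuration (epsilon, f :: epsilon, epsilon) is reached."""
    C, E, D = prog, E0, [([('app', 1, 1)], []), ([], [])]
    while C or D:
        C, E, D = step(C, E, D)
    return E[0]

# a single one-argument function with empty body, run with an empty
# initial environment, evaluates to the closure ([], [])
print(run([('abs', 1, [])], []))   # ([], [])
```

Note how the top-level relation's initial dump D0 = (App(1,1)::ε, ε) :: (ε, ε) :: ε is what makes the last-pushed value get applied to itself at the end of the program.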
We define the initial environment E[0] for the current version of Grass as follows. In future versions, more primitives may be defined in the initial environment.
• E[0] = Out :: Succ :: w :: In :: ε
where Out, Succ, w, and In are primitives.
Primitives may have special behaviour and some side-effects. Although they cannot be described in pure lambda-calculus, we assume that every primitive is somehow encoded in the same manner as
ordinary semantic object and behaves like ordinary function in the operational semantics. How to implement primitives are out of this document.
w: A value denoting the "w" character (code 119). Usually, a character is used as an argument for the Out and Succ primitives, or is a return value of the In primitive.
A character also performs as a function which tests equality of 2 characters: it takes an argument, and returns true as a Church Boolean (λx.λy.x) if the argument is a character equal to the applied character, and otherwise returns false as a Church Boolean (λx.λy.y).
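The claim that a character doubles as an equality test can be illustrated directly with Church booleans (a Python sketch; `char` is a modelling helper of my own, not a Grass primitive):

```python
# Church booleans: true selects its first argument, false its second
true = lambda x: lambda y: x
false = lambda x: lambda y: y

def char(c):
    """Model a Grass character value: applied to another character,
    it returns Church-true on equality and Church-false otherwise."""
    return lambda d: true if d == c else false

w = char('w')
print(w('w')('equal')('different'))   # equal
print(w('v')('equal')('different'))   # different
```

Since the result is itself a function, branching in Grass is just applying the comparison result to the two branch values.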
Out: Take a character as argument, print it to standard output, and return the given character. If the argument is not a character, evaluation will be aborted.
In: Take an arbitrary value as argument, read a character from standard input, and return it. If input reaches end of file, return the argument.
Succ: Take a character (code n) as argument and return its next character (code n+1) if n < 255. If n = 255, return a null character (code 0). If the argument is not a character, evaluation will be aborted.
From λ-Calculus to Grass
Start from untyped lambda calculus.
Perform CPS Transformation.
• t ::= x | λx.r
• c ::= k | μx.e
• e ::= r c | x x c | c t
• r ::= δk.e
Perform Inverse CPS Transformation.
• e ::= x | λx.r | x x
• r ::= e | let x = e in r
Perform Lambda Lifting so that all functions are to be at top-level.
• e ::= x | x x
• m ::= e | let x = e in m
• f ::= λx.f | λx.m
• r ::= e | let x = f in r | let x = e in r
Translate every variable name into de Bruijn Index.
• n ::= • | n ↑
• e ::= n | n n
• m ::= e | let e in m
• f ::= λf | λm
• r ::= e | let f in r | let e in r
Plant grasses over this calculus. That's all.
Web pages related to Grass.
Other functional esoteric programming languages.
• 2007-10-04: Japanese version is available.
• 2007-10-02: Extended the syntax so that function applications may appear at top-level. Refined the operational semantics to make it simpler and clearer. Implementations were also updated along
with the changes of formal definition.
• 2007-09-24: This webpage was founded.
• 2006-09-17: Proposed at 39th IPSJ Jouho-Kagaku Wakate no Kai.
© 2006, 2007 UENO Katsuhiro.
Lyons, IL Geometry Tutor
Find a Lyons, IL Geometry Tutor
...I also specialize in preparing students for ICTS(TAP) basic skills exam. Having earned M.S. degree in chemistry and winning a medal at university level, I am very knowledgeable in all aspects
of chemistry. I have been tutoring honors chemistry, AP chemistry, Intro. and regular chemistry and col...
23 Subjects: including geometry, chemistry, biology, ASVAB
...I can teach mechanical engineering and basic electrical engineering subjects as well. I work as a full-time employee at CNH, Inc. Currently I have free time on weekday evenings (after 6 PM) & weekends.
16 Subjects: including geometry, chemistry, physics, calculus
...I am a recent alum of the prestigious Teach for America program, where I worked in a Baltimore City School and was responsible for seeing math test results more than double from their previous
years. While in Baltimore I began working with MERIT to help tutor some of the city's most promising yo...
20 Subjects: including geometry, physics, algebra 1, algebra 2
...The test covers basic educational skills in math, grammar, and reading. I understand all aspects of elementary math, including addition, subtraction, multiplication and division. I understand the concept of turning word problems into numerical problems and the importance of connecting elementary math to algebra.
10 Subjects: including geometry, reading, GRE, algebra 2
...I use online tools to provide practice and assessment as students progress through the course. I taught calculus at the university as an undergraduate for 4 years. I have also taught AP
calculus as well as fun exploratory classes in calculus.
24 Subjects: including geometry, calculus, algebra 1, GRE
Related Lyons, IL Tutors
Lyons, IL Accounting Tutors
Lyons, IL ACT Tutors
Lyons, IL Algebra Tutors
Lyons, IL Algebra 2 Tutors
Lyons, IL Calculus Tutors
Lyons, IL Geometry Tutors
Lyons, IL Math Tutors
Lyons, IL Prealgebra Tutors
Lyons, IL Precalculus Tutors
Lyons, IL SAT Tutors
Lyons, IL SAT Math Tutors
Lyons, IL Science Tutors
Lyons, IL Statistics Tutors
Lyons, IL Trigonometry Tutors
Nearby Cities With geometry Tutor
Argo, IL geometry Tutors
Berwyn, IL geometry Tutors
Broadview, IL geometry Tutors
Brookfield, IL geometry Tutors
Countryside, IL geometry Tutors
Forest View, IL geometry Tutors
La Grange Park geometry Tutors
Mc Cook, IL geometry Tutors
Mccook, IL geometry Tutors
North Riverside, IL geometry Tutors
Riverside, IL geometry Tutors
Stickney, IL geometry Tutors
Summit Argo geometry Tutors
Summit, IL geometry Tutors
Western Springs geometry Tutors | {"url":"http://www.purplemath.com/Lyons_IL_Geometry_tutors.php","timestamp":"2014-04-16T04:48:49Z","content_type":null,"content_length":"23927","record_id":"<urn:uuid:c13847b5-cbe4-4961-b406-14aa71a4f97d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00544-ip-10-147-4-33.ec2.internal.warc.gz"} |
Maths Test Revision Ks3
KS3 REVISION BOOKLET Year 8 Trinity Catholic High School 2013. ... Maths Revision English Revision Spanish Revision Sunday French Science ... revision resources and past test papers that your
teachers have created for you!!!!
KS3 REVISION BOOKLET Year 7 Trinity Catholic High School 2013 ... Saturday RE Maths Revision English Spanish Revision French Revision Science Revision ICT Revision History Revision. Revision poster
test yourself and se created for you!!!! Write the title of the section in the middle of your ...
Sumbooks Key Stage 3 7 ... In September, the maths department bought 5 reams of lined paper and 2 reams of graph paper for £25. ... They do an experiment to test this statement on two sets of plants.
One set they give the fertiliser to, ...
Mathematics “Improving your Key Stage 3 test result” A booklet of helpful hints, tips and a revision checklist Contents Page Why are KS3 tests important?
KS3 Maths Revision Arithmetic: -Decimals without calculators -Addition - Put the values into a long addition format - Line the decimals up
The Key Stage 3 Mathematics series covers the new National Curriculum for Mathematics ... • Examination revision ... KS3 Maths Level by Level Pack A: Level 4 Author: Stafford Burndred Subject:
Science: Revision Guide - Letts Key Stage 3 Success ... test preparation easy with manageable content and reliable revision methods *Provides plenty ... I am 13 and I find that the key stage 3
revision is too easy and the GCSE revision is too ... Top quality KS3 Maths Workbook. Advantages ...
SAT ATTACK Badger KS3 Maths Test Guides Supplies of ammunition to help make the Maths Exams Mission Possible for your students Teacher Book with Copymasters
Key Stage 3 Maths teaches Mental Arithmetic, Algebra, Fractions, Decimals, Percentages, Volume and Measurement. Students will learn to solve problems quickly and improve ... Related searches; year 7
science revision test; year 7 science revision sheets Math mock exam.
The Key Stage 3 Mathematicsseries covers the new National Curriculum for Mathematics ... KS3 Mathematics C: Level 6 Test 1 Stockwell Park High School Television set normally £400. 4 Karen ... KS3
Maths Level by Level Pack C: Level 6 Author: Stafford Burndred Subject:
Key Stage 3 Revision Tips Booklet . How to Revise… Hodgson Academy 2011 WELCOME ... Vary the subjects – don’t do all your Maths revision on day one! ... Visualising yourself passing the test
KEY STAGE 3–5 TIER Ma QCA/04/1196 For marker’s use only Total marks Borderline check. KS3/04/Ma/Tier 3–5/P2 2 Instructions Answers This means write down your answer or show your working ... KS3
Mathematics test - tier 3-5 paper 2 Author: QCA Created Date:
CGP Ks3 revision Guides and work book available from most good book shops ... Every half term a topic based or optional test is taken. End of every term the year group will complete a maths project
that will give the opportunity to apply the maths that they have learnt during that term ...
KS3 MATHEMATICS 10 4 10 Level 6 Questions Day 1 . Toys ... Tariq won one hundred pounds in a maths competition. He gave two-fifths of his prize money to charity. How much of his prize money, in
pounds, did he have left? 2.
KEY STAGE 3 Ma 2009 Mathematics test TOTAL MARKS. KS3/09/Ma/Tier 5–7/P2 2 Instructions Answers Calculators This means write down your answer or show your working and write down your answer. You may
use a calculator to answer any question in this test.
Revision for Mock SATs Summer Term : Construction and Loci ... Oct half term test and December end of term test Spring Term: Mock SATs ... Year 9 will complete an additional End of Key Stage 3
Assessment in the Summer ...
Week 4 All Levels: Test Revision and Mock GCSE Test (Foundation Level) Week 5 ,6 Higher: Measure ; Foundations: ... Gabriel Gitonga, Third in Maths, KS3 M aths Co-ordinator, Maths Teacher: [email
protected] Zsuz sanna Thomas, ...
Pass GCSE Maths - How to Pass your Maths GCSE in 4 Weeks revision system. However, when studying for my Key Stage 3 Maths, I used the same approach.
Test Papers – Reading, Writing and Maths Review Collins Revision – KS3 Maths L5-8: Revision Guide + Workbook ... Maths Worksheets KS1 | Maths Worksheets KS1, Learning maths ... Maths Worksheets KS1,
Learning maths is fab fun with our ks2 numeracy activities and
Mathematics Key Stage 3 Scheme of Work The Scheme of Work provides full coverage of the National Curriculum and has been developed to create a Pathway through the levels of attainment
Revision for year 7 optional test (exam) End of year exam ... Inter Form Maths Quiz Consolidations of Year 7 topics Introduction to Yr 8 topics Assessments Autumn Term: Oct half term test and
December end of term test ... Key Stage 3 Co-ordinator : Mr Warren Bayliss . 17
KS3 Exam Guidance for Parents and Students Mrs P. Clarke, Assistant Headteacher, ... Year 8 will sit exams in the core subjects of Maths, English, IT, PE and RE. Year 7, ... questions as part of this
so that they can test their revision processes as well as just re-learning and
Don't pay for past SATs test papers! Download the official Key Stage 2 maths papers from 2003 to 2011, plus other revision resources. KS2 Science SATs Practice Papers - Levels 3-5 (Book) by CGP ...
we use the same rules for Key Stage 3 students so by the time you take your public exams ... The exam will cover the whole Maths curriculum so anything you have done this year ... A KS3 revision
guide can also be purchased to support with revision.
Following my previous letter about end of year examinations for students at Key Stage 3, I have made ... Revision lists for the tests can be found on FROG (goto Maths, KS3, ... (goto Maths, KS3, Year
9). There is a test for students working at the 3-5 level, the 4-
Key Stage 3 neW Online Try it FREE for one month! Visit ... National Test-style questions for pupils in Years 3 and 4. They cover all the areas of learning that children must master ... 978 1 84680
775 6 Achieve Maths Revision Level 4 £6.00
Year 8 Maths 2013-2014 Dates Below is a basic break down of your child’s year 8 Maths Schedule (Subject to Change) Autumn Term 1st Half Week 1,2,3 Topic 10: Level 3 -5: Algebra3; Level 4 -6:
Algebra3; Level 5 -7: Algebra4; Level 6 -8: Algebra1/2
Year 9 sets 1 and 2 Maths Homework Week Homework number Topic 1 1.1 Expressions 2 1.2 Indices and standard index form 3 1.3 ... 14 Revision for end of KS3 test 15 Review test (Which topics did you
not do well in the test?) Revisit them, ...
http://www.bgfl.org/bgfl/custom/resources_ftp/client_ftp/ks3/maths/coordinate_ga me ... http://www.bbc.co.uk/education/mathsfile/gameswheel.html Easy Counting Games http://www.abc.net.au/countusin/
BBC Math Revision http://www.bbc.co.uk/schools ... Basic Facts Test http://www.pcs.school.nz ...
hintsa6b for KS1 Mathematics SATS Practice Papers Where can you download free KS1 test papers - The Q&A wiki Is there a ks1 science sats test? There isn't a published test paper like Maths and
Maths Key Stage 3 Programme of Study . Year 7 Introductory project: ... Frogs End of year test & Final Project Students will look at length, ... Powers of x Statistics 3: Data investigation Solving
Problems and revision ‘Play to win’ Students will study the mathematics of strategy
KS3 Mathematics Course Outline: The KS3 Mathematics course is designed to bridge the gap between primary school and GCSE, and aims to support students in developing the key skills that they will
require when they begin their GCSE program in year 9.
Maths SATs test. The tests have ... Key Stage 3 Expectations REMEMBER – ALL CHILDREN ARE DIFFERENT . ... • English and Maths revision activities and games. USEFUL WEBSITES . ANY QUESTIONS? Title:
PowerPoint Presentation Author: Sigma Teaching Resources
CGP KS2 KS3 Revision Guides GCSE ... Search by subject to find the best revision and test practice resources for home and ... GCSE Success Edexcel Maths Foundation Revision Guide (GCSE Success
Revision Guides and Workbooks) 1843159767: £1.18: ...
maths post March 4, 2014 ... homework and revision purposes. ... We monitor students in terms of their average points scores at KS2 and KS3, alongside their score in a test of ability (CATs). These
are compared with results from end of term tests to check for
Revision days and sessions/ exam support period/revision breakfasts to continue (all) ... Use external test in Maths /Eng to assess KS3 progress (£8,000 Pupil Premium) More focused analysis of g and
t A/A* progress. (none—covered in staffing)
KS3 Bitesize covers the KS3 English, Maths and Science curricula, supporting each topic with media-rich Revise sections, together with ... interactive Revision and Test Bites to support each exam or
coursework topic, backed up by downloadable audio files, ...
Subject Revision Guidance English Reading skills will be assessed. The Reading exam will require students to show understanding of the text “Strange Lands” in a variety of
Mental Maths Test (Level 3-5) Maths: Written Paper B – Calculator (Level 3-5) ... revision activities that are highly recommended. However, ... The Key Stage 3 Bitesize is very helpful towards
preparing for the Level 6 tests.
KS3 DISCO DANCE MATHS ... LEEK REVISION YEAR 9 GWYLIAU HOMEWORK YEAR 8 TEACH THE TEACHERS! YEAR 6 SCIENCE LESSONS ... the Bunsen burners for the first time with Miss King to test various chemical
reactions to the flame in
Collins Revision KS3 Maths Levels 3-6 is an all-in-one revision guide and exam practice workbook for Key Stage 3. Written by experienced test markers, it shows how each student ... it may even be
helpful all the way through Key Stage 3. According to one 11
Key Stage 3 Age 11-14 (School Years 7-9) ... Maths Science Success Revision Guides & Workbooks Why? They are widely used in Primary schools and are matched to the National Curriculum. They provide
... Success KS2 SATs Revision & Test Practice Why?
Faculty Area Mathematics & Numeracy Course Title KS3 Year 9 Mathematics Year Group Year 9 ... Term 4 Spring Geometry and measures 3, Algebra 5, Solving problems and Revision Term 5 Summer Statistics
3 and revision, ... test 25 minutes. June, during maths lessons. Main progress review
New Maths Frameworking 3-year scheme of work ... Test 2 Chapters 4, 5 and 6 Algebra 1 Test Term 2 Ch 7. ... End of Key Stage 3 Assesment Tests Ch 14 Geometry and Measures 3 Ch 15 Solving problems Ch
16 Consolidation of KS3 work and
Students can attend Key Stage 3 Maths Club on Tuesday lunchtimes in D12 or the afterschool ... English Revision: Key Stage 3 – Letts KS3 English – HarperCollins Publishers ... topic and the summary
sheets to help them revise before each end of unit test.
KS3 SATs were abolished ... with the science revision in the purple books. Revisewise ... Mental Maths Test This is a taped test with 5 seconds, 10 seconds and 15 seconds available for different
sections. A possible 20 marks available for this test.
Year 8 Key Stage 3 Science Revision Guide Use this guide to check that you cover all the topics required for your year 8 end of year test. The test will
Further information can be found in the 'KS3 Maths information' document. How will they be assessed? ... Year Test, the Spring Cross Year Test and the End of Year Exam (which takes place in the Summer ... Your child will be given a revision checklist and
This is a logic based Maths test, ... In the last two years we have run numerous revision sessions during school holidays, offered over 30 ... Key Stage 3 (Years 7 and 8) During Key Stage 3 we teach
by level and not by Year group.
Equipment you can purchase from the Maths Faculty: Revision Guides Work Books Calculators ... It provides the opportunity for students to test, ... Once into the site you can view the courses offered
by clicking on KS3/GCSE and your | {"url":"http://ebookily.org/pdf/maths-test-revision-ks3","timestamp":"2014-04-23T15:19:15Z","content_type":null,"content_length":"42444","record_id":"<urn:uuid:8f91f240-4be0-44fe-a426-035e07e6653c>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00389-ip-10-147-4-33.ec2.internal.warc.gz"} |
Personal Finance
The economics of finance can be used to analyze some common personal financial issues. Most of these issues will not affect you until you are older, at which point it will help to be able to remember
the economic perspective.
Compound Interest
Suppose that you are 30 years old, with a family income of $50,000 a year. If you save $3000 a year, how much will you have after 35 years? The principal alone, without earning interest, will be 35
times $3000, or just $105,000. However, if you invest the money and earn a real annual return of 4 percent, at the end of 35 years you will have $221,000. The difference between $221,000 and $105,000
is the power of compound interest.
Below is a table that shows part of this calculation
│Year │savings│interest on last year's balance │this year's balance│
│1 │$3000 │-- │$3000 │
│2 │$3000 │$120 │$6120 │
│3 │$3000 │$245 │$9365 │
│4 │$3000 │$374 │$12,739 │
│... │... │... │... │
│33 │$3000 │$7524 │$198,629 │
│34 │$3000 │$7945 │$209,574 │
│35 │$3000 │$8383 │$220,957 │
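The year-by-year recurrence behind this table (new balance = last year's balance plus 4 percent interest plus the new $3000 deposit) can be checked with a short script. This sketch, including the function name, is illustrative and not part of the original article:

```python
def final_balance(annual_saving, rate, years):
    """Accumulate yearly deposits with compound interest.

    Each year the previous balance earns `rate` in interest,
    then a new deposit of `annual_saving` is added, exactly as
    in the table above.
    """
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + rate) + annual_saving
    return balance

# $3000/year at a 4% real return for 35 years
print(round(final_balance(3000, 0.04, 35)))  # -> 220957
```

The same loop at a 7 percent nominal return gives roughly $415,000, matching the article's "over $400,000" figure.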
In the real world, inflation tends to distort this sort of calculation. Because of inflation, you might be able to earn a return of 7 percent, ending up with a balance of over $400,000. Moreover, if
inflation is, say, 3 percent per year, then you should be able to save more than $3000 a year in later years. In fact, your savings should go up by 3 percent per year, which will further increase
your final balance. However, if inflation is 3 percent per year, that means that the cost of living after 35 years will be almost triple what it is today. Overall, a world in which the nominal
interest rate is 7 percent and inflation is 3 percent is like one in which the nominal interest rate is 4 percent and inflation is zero.
The point to appreciate about compound interest is that over a long period of time a relatively small annual rate of savings can add up. On the other hand, living beyond your means and going into
debt means that compound interest works against you. Adding a little more debt each year can lead you into a very deep hole. The compounding effect is even stronger, because interest rates for
consumers tend to be high (over 10 percent on many credit cards). If you do not pay your full credit card balance every month, you end up fighting a very strong current of compound interest flowing
against you.
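The same compounding works in reverse for debt. A small sketch (the balance and rate are my own illustrative numbers, not the article's):

```python
def debt_after(balance, annual_rate, years):
    """An unpaid balance compounding against the borrower."""
    for _ in range(years):
        balance *= 1 + annual_rate
    return balance

# A $5,000 credit-card balance at 18% with no payments
# more than doubles in five years
print(round(debt_after(5_000, 0.18, 5), 2))  # -> 11438.79
```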
Buying a Home
Our general formula for the profitability of buying a home is based on a comparison of the purchase price to the rent on an equivalent home. The formula is
profitability = rental rate + appreciation - interest rate
For example, if a house costs $150,000 and it could be rented for $6,000 a year, then the rental rate is $6,000/$150,000 = .04 or 4 percent. If it appreciates at a rate of 4 percent per year and the
interest rate is 7 percent, then the profitability is 4+4-7=1 percent, which means that it is profitable to buy the house rather than rent.
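As a sketch of the formula above (the function name and argument order are my own, not the article's):

```python
def buy_profitability(price, annual_rent, appreciation, interest_rate):
    """profitability = rental rate + appreciation - interest rate.

    All rates and the result are decimal fractions per year
    (0.04 means 4 percent).
    """
    rental_rate = annual_rent / price
    return rental_rate + appreciation - interest_rate

# The article's example: $150,000 house, $6,000/yr rent,
# 4% appreciation, 7% interest rate -> about +1% per year
print(buy_profitability(150_000, 6_000, 0.04, 0.07))
```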
A major complicating factor with buying a house is that it costs a lot to buy and sell. Real estate sales commissions are around 6 percent. There are also a number of fees charged by lenders, title
insurance companies, and other service providers. Finally, the local government often collects transfer taxes and fees for recording the transaction.
The costs of buying and selling a home affect the home-buying decision in many ways. Basically, the sooner you have to sell a house, the less likely it is to be profitable to buy rather than to rent.
If you are likely to be moving soon because of a job change or a change in family status, it can be unwise to buy a house. When you are starting a family, it may be better to buy a house with an
extra bedroom now, rather than buy one house this year and another in two years when you have more children.
Because profitability depends so much on home price appreciation, one does not want to buy a house when prices are too high. In an efficient market, there should be no way of telling when prices are
out of line. However, house prices sometimes seem to reach irrational levels for short periods of time in specific markets. A sign that prices are too high is when the ratio of annual rent to
purchase price is unusually low, say less than 2 percent.
The riskiest properties to own are condominiums. Condos tend to be the shock absorbers of the housing market. When demand is high, condo prices soar. When demand falls off, condo prices drop the most.

Most young people do not have enough cash to buy a home. Therefore, you typically have to borrow most of the money to buy a house. The money that you borrow is called a mortgage loan. If you default
on a mortgage loan, the lender can take possession of your house. We say that the house is collateral for the mortgage loan. The collateral reduces the lender's risk, so that a mortgage loan costs
you less than any other loan that you might obtain.
A typical mortgage loan has a 30-year term, with payments made monthly. The monthly payment is designed to gradually reduce the mortgage balance to zero, just as an annuity payment is designed to
gradually exhaust savings. In fact, the formula for calculating a mortgage payment is pretty much the same as the formula for an annuity. The main difference is that the mortgage payment is monthly,
so that the interest rate has to be converted to a monthly rate.
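To make the annuity analogy concrete, here is a minimal sketch of the standard level-payment formula; the function name and the sample loan are illustrative assumptions, not from the article:

```python
def monthly_payment(principal, annual_rate, years):
    """Level monthly payment that amortizes `principal` to zero.

    Uses the ordinary-annuity formula with the annual rate
    converted to a monthly rate, as described above.
    Assumes annual_rate > 0.
    """
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# A $150,000 loan at 7% for 30 years costs a bit under $1000/month
payment = monthly_payment(150_000, 0.07, 30)
print(round(payment, 2))
```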
Most mortgages are paid off before the 30 year term expires.
• Often, people move and sell their homes, at which point the proceeds from the sale are used to pay the mortgage.
• Sometimes, people refinance their mortgages. If you took out a mortgage loan at 8 percent, and rates happen to drop to 6.5 percent, you will take out a new loan at 6.5 percent to pay off the old
loan. Even if rates do not fall, some people refinance in order to take out a larger loan.
• As a family's financial position improves, they find it advantageous to pay off the mortgage loan early
The reason that the thirty-year term is popular is that by stretching out the payments over that period the monthly payments are kept low. However, if you are likely to pay off a mortgage loan in ten
years or less, it makes sense to take an adjustable-rate mortgage, where the interest rate can change after 3 years or 5 years. These loans carry lower interest rates than the standard thirty-year
fixed rate, but the rate can increase. If you were keeping the loan for ten years or more, the rate increase could be a big issue. However, few people keep their mortgage loans that long.
When you have a mortgage loan on your residence, you can deduct the interest expense from your income. You will hear it said that a mortgage loan is a great tax deduction, and some financial advice
gurus even recommend taking out the largest mortgage loan that you can. This is flawed advice, for several reasons.
1. The deductibility of home mortgage interest does not mean that taking out a mortgage loan puts money in your pocket. At best, it reduces the cost of the loan. If your mortgage rate is 7 percent,
then on an after-tax basis it might be closer to 5 percent.
2. The tax deduction has many limitations and restrictions. For many people, a mortgage ends up making only a small difference in tax liability.
3. If you take out a larger mortgage than you need, then that gives you money to invest. When you invest that money, you earn taxable income. The taxes on that income tend to cancel out the tax
savings from the larger mortgage.
Taxes, IRA's, and 401(K) plans
Income taxes do affect personal financial decisions. Other things equal, an investment with tax-exempt income is better than an investment where the income may be taxed. When the income is tax
exempt, your savings accumulate more effectively.
The best investment vehicles go even further to save on taxes, because the money you put into the accounts is tax deductible. If you earn $50,000 and put $2000 into an Individual Retirement Account (IRA), you can deduct the $2,000 from your taxable income as well as accumulate investment earnings tax-free until you retire. Thus, it pays to put money in an IRA.
Similarly, there are employer-sponsored retirement savings plans, called 401(K) plans, because the provision in the tax code is called 401(K). Like IRA's, they allow you to take an income tax
deduction for your savings. In addition, many companies have matching programs, where they will kick in additional money in proportion to what you save.
There is almost no valid reason not to take maximum advantage of 401(k) plans and IRA's. Because of the tax advantages, these are the best savings vehicles.
People who want to get the best returns over a long period should put some of their investment portfolio in the stock market. Your stock market portfolio should be in mutual funds that replicate the
performance of a major stock index. This approach is known as indexing.
Inside a tax-exempt account, such as an IRA, regular mutual funds, called index funds, provide excellent diversification at low cost. However, regular mutual funds are required to distribute income
each year, which has tax consequences. A newer instrument, known as the exchange-traded fund, allows you to defer taxes until you sell your shares in the fund. An exchange-traded index fund is the
best investment vehicle when you are putting savings into a taxable account. | {"url":"http://arnoldkling.com/econ/saving/persfin.html","timestamp":"2014-04-18T08:18:05Z","content_type":null,"content_length":"10593","record_id":"<urn:uuid:44ae34e1-f62a-4d5b-b433-3df787f3e99b>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00205-ip-10-147-4-33.ec2.internal.warc.gz"} |
04. Iloczyn wektorowy - ang. (Vector product)
Vector product
In space, in addition to the scalar product of two vectors, which is a scalar quantity, there is another kind of vector multiplication. This useful product of two vectors defines a new vector.
Vector product of vectors $\vec{a}$ and $\vec{b}$ is defined as
$\vec{a}\times \vec{b}=\vec{n}|a||b|\sin \alpha$
• $|a|$, $|b|$ are the lengths of vectors $\vec{a}$, $\vec{b}$;
• $\alpha$ is the angle (smaller than $180^\circ$) between $\vec{a}$ and $\vec{b}$ attached at one point;
• vector $\vec{n}$ is a unit vector perpendicular to the plane defined by $\vec{a}$ and $\vec{b}$ (see Figure 1).
Figure 1: Illustration of the vector product of vectors $\vec {a}$ and $\vec {b}$ by the right hand rule.
The right hand rule:

In the vector product $\vec{c}=\vec{a}\times\vec{b}$ the vectors have the following orientation:

• vector $\vec{a}$ points along the index finger
• vector $\vec{b}$ points along the middle finger
• the product $\vec{c}$ (and the unit vector $\vec{n}$) points along the thumb
It can also be interpreted as follows:

• The direction of $\vec{n}$ is the direction in which a right-handed screw advances when we turn it from the vector $\vec{a}$ to the vector $\vec{b}$ through the smaller angle between them.
• Winding the small hand of a clock onto the big one (a turn against the clockwise direction), the vector product is perpendicular to the clock face and points off the wall, toward us.
• Winding (unlike before) the big hand onto the small one, the vector product is perpendicular to the clock face and points into the wall.
The vector product can be illustrated interactively: [http://demonstrations.wolfram.com/CrossProductOfVectors/ - a simulation in CDF format]
• Since $\sin 0 = 0$ when $\alpha = 0$, we have that:

$\vec{a} \times \vec{a} = 0$
We can even say more:
Vector product of parallel vectors is zero.
• When we change the order of the vectors, the thumb must point the opposite way, that is:
$\vec {b} \times \vec {a} = - \vec {a} \times \vec {b}.$
• Vector products of the axis versors $\vec{i}$, $\vec{j}$, $\vec{k}$ are convenient to remember; they can be stored in a small multiplication table, read from left factor to right factor:

$\vec{i}\times\vec{j}=\vec{k}$, $\vec{j}\times\vec{k}=\vec{i}$, $\vec{k}\times\vec{i}=\vec{j}$,

and reversing the order changes the sign, for example $\vec{i}\times\vec{k}=-\vec{j}.$
• The vector product is distributive over addition:
$\vec {a} \times \left (\vec {b} +\vec {c} \right) = \vec {a} \times \vec {b}+\vec {a} \times \vec {c}$
Note: Try to draw it.
• The formula for the coordinates of the vector product, when we know the coordinates of the vectors $\vec{a}=[a_x,a_y,a_z]$, $\vec{b}=[b_x,b_y,b_z]$, is:
$\vec{a}\times\vec{b}=\vec{i}\left|\begin{array}{cc} a_y & a_z \\ b_y & b_z \end{array}\right| -\vec{j}\left|\begin{array}{cc} a_x & a_z \\ b_x & b_z\end{array}\right| +\vec{k}\left|\begin{array}{cc}
a_x & a_y \\ b_x & b_y\end{array}\right|,$
where the determinants of a second degree we count as follows:
$\left|\begin{array}{cc}a & b\\c & d\end{array}\right| = ad - bc$
We count $\vec {u} \times \vec {v}$ for $\vec {u} = [3, -1,0]$, $\vec {v} = [0, 2, 3].$
With this formula we get:
$\vec{u} \times \vec{v} = \vec{i}\left|\begin{array}{rr} -1 & 0 \\ 2 & 3 \end{array}\right| - \vec{j}\left|\begin{array}{rr} 3 & 0 \\ 0 & 3 \end{array}\right| + \vec{k}\left|\begin{array}{rr} 3 & -1 \\ 0 & 2 \end{array}\right| = -3\vec{i} - 9\vec{j} + 6\vec{k} = [-3, -9, 6].$
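The determinant formula can be checked with a few lines of code; this sketch (the function name is mine, not the article's) computes the same example:

```python
def cross(a, b):
    """Vector product of two 3D vectors given as [x, y, z] lists."""
    return [a[1] * b[2] - a[2] * b[1],   # i-component
            a[2] * b[0] - a[0] * b[2],   # j-component
            a[0] * b[1] - a[1] * b[0]]   # k-component

print(cross([3, -1, 0], [0, 2, 3]))  # -> [-3, -9, 6]
```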
Vector product has a lot of useful interpretation both physical and geometric. We are mentioning some of them below:
• the area of the parallelogram spanned by the vectors (as in Figure 1) is equal to the length of the vector product: $P = |\vec{a} \times \vec{b}|.$ Likewise, the area of a triangle built on the vectors $\vec{a}$, $\vec{b}$ can be calculated as $P_{\Delta} = \frac{1}{2}|\vec{a} \times \vec{b}|.$
• Using the concept of the vector product we can define various physical quantities, such as angular momentum and torque, and also write down a number of laws of mechanics and electrodynamics.
Calculate the area of a parallelogram whose three vertices are the points $O = (0,0,0)$, $A = (1,1,1)$, $B = (2,3,5)$.
The parallelogram is spanned by the vectors: $\vec{OA} = [1,1,1]$, $\vec{OB} = [2,3,5]$. Then
$\vec{OA}\times \vec{OB}=\vec{i}\left|\begin{array}{rr} 1 & 1 \\ 3 & 5 \end{array}\right| -\vec{j}\left|\begin{array}{rr} 1 & 1 \\ 2 & 5 \end{array}\right|+\vec{k}\left|\begin{array}{rr} 1 & 1 \\ 2 &
3 \end{array}\right|=2\vec{i}-3\vec{j}+\vec{k}=[2,-3,1]$
$P=\left| \vec{OA}\times \vec{OB}\right|=\sqrt{2^2+(-3)^2+1^2}=\sqrt{14}.$
Figure 2: Illustration of the area of the parallelogram whose three vertices are the points
$O = (0, 0, 0)$, $A = (1, 1, 1)$, $B = (2, 3, 5)$
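The worked example can also be verified numerically. This self-contained sketch (names are illustrative, not from the article) computes the area as the length of the vector product:

```python
import math

def parallelogram_area(a, b):
    """Area of the parallelogram spanned by 3D vectors a and b,
    computed as the length of their vector product."""
    cx = a[1] * b[2] - a[2] * b[1]
    cy = a[2] * b[0] - a[0] * b[2]
    cz = a[0] * b[1] - a[1] * b[0]
    return math.sqrt(cx * cx + cy * cy + cz * cz)

# Vectors OA and OB from the worked example above
print(parallelogram_area([1, 1, 1], [2, 3, 5]))  # -> sqrt(14) ≈ 3.7416...
```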
Interesting information and notes
• Another interpretation: if the three vectors are attached at the same point, then an observer located in the plane spanned by the vectors $\vec{a}$ and $\vec{b}$, looking in the direction of the vector $\vec{c}$, can pass along the shortest path from the direction of $\vec{a}$ to the direction of $\vec{b}$ by turning counterclockwise.
• Note: this rule applies to the right-handed orientation. If a left-handed orientation is adopted instead, the turn goes clockwise and we use the left hand rule.
• The right hand (or left hand) rule is sometimes called Fleming's rule, in honor of the British physicist John Ambrose Fleming, who applied it in the study of electromagnetism in the late nineteenth century.
• The vector product has many applications in technology. It is used, for example, in determining the Lorentz force: the force acting on an electric charge in an electromagnetic field, which is a combination of two fields, electric and magnetic. Since this force always acts transversely to the moving particle's velocity vector and to the magnetic induction vector, it can be written as a vector product. The vector product also appears in the equations describing the transformation of electric and magnetic fields in the transition to a moving reference frame.
• To calculate the magnetic induction produced at some point by an arbitrary distribution of currents, we divide each current into infinitesimal elements, calculate the contribution from each element, and then add them up to get the resultant magnetic induction vector. The contribution of each current element is given by the Biot-Savart law, which is used in electromagnetism and fluid dynamics. The vector product appears in the formula known today as the Biot-Savart-Laplace law.
• This article may be used during mathematics and physics lessons in high schools when discussing vector calculus and its applications.
• A unit vector (versor) $\vec{e}$ is a vector whose length is 1. Each vector $\vec{a}$ can be represented as the product of a unit vector and its length.
• When we divide a (non-zero) vector by its length, we get a unit vector:

$\vec{e} = \frac{\vec{a}}{|a|}.$
• To describe the coordinate axes in three-dimensional space we often use the axis versors: $\vec{i} = [1,0,0]$, $\vec{j} = [0,1,0]$, $\vec{k} = [0,0,1]$.
• Each vector $\vec{a} = [a_x, a_y, a_z]$ can be decomposed into a sum of the three versors multiplied by the vector's coordinates:

$\vec{a} = a_x \vec{i} + a_y \vec{j} + a_z \vec{k}.$

For example, $[1,2,3] = \vec{i} + 2\vec{j} + 3\vec{k}$.
• The length of the vector $\vec {a} = [x, y, z]$ is calculated using the formula:
$| \vec {a} | = \sqrt {x ^ 2 + y ^ 2 + z ^ 2}$.
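These last two definitions translate directly into code. A small sketch (the function names are mine, not from the article):

```python
import math

def length(v):
    """Length of a vector [x, y, z]: sqrt(x^2 + y^2 + z^2)."""
    return math.sqrt(sum(c * c for c in v))

def versor(v):
    """Unit vector: divide a non-zero vector by its length."""
    l = length(v)
    return [c / l for c in v]

print(length([1, 2, 3]))          # -> sqrt(14) ≈ 3.7416...
print(length(versor([1, 2, 3])))  # -> 1.0 up to rounding
```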
Underline the correct answer.
1.Vector product is:
• the number
• vector
• unit vector.
2. In a right-handed coordinate system, the vector product is determined according to the rule of:
• right hand
• left hand
• the right foot.
3. Vector product $\left(\vec{k} + 2\vec{i}\right) \times \vec{i}$ is
• $- \vec {i}$
• $0$
• $\vec {j}$.
4. Vector product $\vec{i} \times \left(\vec{k} + \vec{j}\right)$ is
• $- \vec {i}$
• $\vec {k} - \vec {j}$
• $\vec {j} - \vec {k}$
Note to the student:
1. If you do not remember the previous topic read the reminder of the needed information.
2. Look carefully at Figure 1, analyze the positions of the vectors, and practice on your own hand.
3. Then open the applets at the attached links.
4. Practice calculating the vector product using the formula before proceeding.
Teacher's notes:
1. Tell students to read the required reminder messages.
2. Discuss Figure 1 with the students: analyze the positions of the vectors and work out
the rule on their own hands.
3. Then open the applets at the accompanying links: http://www.phy.syr.edu/courses/java
suite/crosspro.html http://demonstrations.wolfram.com/CrossProductOfVectors/
4. Practice with the students calculating the vector product using the formula before proceeding.
5. Practice with the students how to calculate the volume of solids entering the coordinate system. | {"url":"http://innowacyjnenauczanie.netstrefa.pl/index.php/materialy-szkoleniowe/matematyka/geometria-analityczna-na-plaszczyznie-i-w-przestrzeniu/171-04-iloczyn-wektorowy-ang-vector-product","timestamp":"2014-04-19T05:37:12Z","content_type":null,"content_length":"63704","record_id":"<urn:uuid:02fde08a-f94e-4765-97dd-d0262253e1e7>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00185-ip-10-147-4-33.ec2.internal.warc.gz"} |
What happens if the limit is being taken from the left?
| {"url":"http://openstudy.com/updates/5054f6e7e4b02986d370a29d","timestamp":"2014-04-16T22:59:12Z","content_type":null,"content_length":"59746","record_id":"<urn:uuid:85ea54ee-abca-4c1f-8840-a92a96a0e078>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] About Paradox Theory
T.Forster at dpmms.cam.ac.uk T.Forster at dpmms.cam.ac.uk
Mon Sep 19 17:47:15 EDT 2011
While we are on the subject of wellfoundedness and paradox, perhaps i might
mention an open problem that has been bothering me for some time. It is
easy to prove by $\in$-induction that every set has nonempty complement.
The proof is even constructive. (I know of no constructive proof by
$\in$-induction that every set has inhabited complement). The assertion
that $x$ has nonempty complement is parameter-free, and is stratified in
Quine's sense, and we can prove by $\in$-induction that every set has this
property. My question is this: is there any other formula $\phi(x)$ -
stratified and without parameters - for which we can prove $\forall x\,\phi(x)$ by $\in$-induction? Put it another way: is there any parameter-free stratified $\phi$ s.t. we have an elementary proof that $(\forall x)[(\forall y)(y \in x \to \phi(y)) \to \phi(x)]$?
My expectation is that the answer is `no', but i can't prove it - nor
can i find a counterexample!
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2011-September/015783.html","timestamp":"2014-04-20T23:28:38Z","content_type":null,"content_length":"3476","record_id":"<urn:uuid:b71107c8-850b-4c9c-8e37-02617226a726>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00456-ip-10-147-4-33.ec2.internal.warc.gz"} |
up+down scalar meson
Why is there no meson made up of only up and down quarks that is even under parity? Is there something that forbids its existence?
The pions are all axial (pseudoscalar) mesons. As we go higher in energy, there are such "flavour-pure" mesons. Is this a consequence of the almost-unbroken isospin SU(3)? If this is so, why can't I
find such "flavour-pure" mesons at low energies?
Half Lives
We use integrated rate laws and rate constants to relate concentrations and time. The rate law to use depends on the overall order of the reaction.
For a zero-order reaction A → products:
t½ = [A]₀ / (2k)
For a first-order reaction A → products:
t½ = 0.693 / k
For a second-order reaction 2A → products:
t½ = 1 / (k[A]₀)
To determine a half-life, t½, the time required for the initial concentration of a reactant to be reduced to one-half its initial value, we need to know:
• The order of the reaction or enough information to determine it.
• The rate constant, k, for the reaction or enough information to determine it.
• In some cases, we need to know the initial concentration, [A]₀
Substitute this information into the equation for the half-life of a reaction with this order and solve for t½. The equations are given above.
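The three expressions above can be collected into one small helper. This is an illustrative sketch, not part of the original page; the rate constant and concentrations in the examples are made up.

```python
import math

def half_life(order, k, A0=None):
    """Half-life of reactant A for a zero-, first-, or second-order reaction.

    order 0: t1/2 = [A]0 / (2k)   (depends on the initial concentration)
    order 1: t1/2 = ln 2 / k      (independent of concentration)
    order 2: t1/2 = 1 / (k [A]0)  (depends on the initial concentration)
    """
    if order == 0:
        return A0 / (2.0 * k)
    if order == 1:
        return math.log(2) / k      # ln 2 = 0.693...
    if order == 2:
        return 1.0 / (k * A0)
    raise ValueError("order must be 0, 1, or 2")

# First-order example: k = 0.0231 s^-1 gives a half-life of about 30 s
t_half = half_life(1, 0.0231)
```

Note that only the zero- and second-order forms need [A]₀, which is why a concentration-independent half-life is such a convenient diagnostic for first-order kinetics.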
Converting a Half Life to a Rate Constant
To convert a half life to a rate constant we need to know:
• The half-life of the reaction, t½.
• The order of the reaction or enough information to determine it.
• In some cases, we need to know the initial concentration, [A]₀
Substitute this information into the equation for the half life of a reaction with this order and solve for k. The equations are given above.
Graphical Relations and Half Lives
If we plot the concentration of a reactant versus time, we can see the differences in half-lives for reactions of different orders in the graphs. We can identify a zero-, first-, or second-order reaction
from a plot of [A] versus t by the variation in the time it takes the concentration of a reactant to change by half.
Using simultaneous equations to find probability of...
October 1st 2009, 07:04 PM
Using simultaneous equations to find probability of...
Hi all,
I can work the following with a tree diagram but not with simultaneous equations:
75% of patients who have cancer and 10% of normal patients (who do not have cancer) are diagnosed with cancer by an MRI. If 15% of a population have cancer, what is the probability that patients
who are diagnosed with cancer by an MRI really do have cancer?
Using a tree diagram, I get the probability that patients diagnosed with cancer actually have cancer
= (0.15 * 0.75) / [(0.15 * 0.75) + (0.85 * 0.1)]
= 0.57
Thanks in advance,
October 4th 2009, 04:54 AM
mr fantastic
Hi all,
I can work the following with a tree diagram but not with simultaneous equations:
75% of patients who have cancer and 10% of normal patients (who do not have cancer) are diagnosed with cancer by an MRI. If 15% of a population have cancer, what is the probability that patients
who are diagnosed with cancer by an MRI really do have cancer?
Using a tree diagram, I get the probability that patients diagnosed with cancer actually have cancer
= (0.15 * 0.75) / [(0.15 * 0.75) + (0.85 * 0.1)]
= 0.57
Thanks in advance,
Your answer looks OK. | {"url":"http://mathhelpforum.com/advanced-statistics/105553-using-simultaneous-equations-find-probability-print.html","timestamp":"2014-04-18T13:01:08Z","content_type":null,"content_length":"5041","record_id":"<urn:uuid:0455c3b6-de94-4c9e-a69a-9a95e42134ae>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
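The tree-diagram arithmetic in this thread is an application of Bayes' theorem. A short sketch (illustrative; the numbers are taken from the question) confirms the result:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(cancer | positive MRI) by Bayes' theorem."""
    true_pos = prior * sensitivity                  # 0.15 * 0.75
    false_pos = (1 - prior) * false_positive_rate   # 0.85 * 0.10
    return true_pos / (true_pos + false_pos)

p = posterior(prior=0.15, sensitivity=0.75, false_positive_rate=0.10)
print(round(p, 2))  # 0.57
```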
Installing MATLAB and its launcher icon
Possible Duplicate:
Add Matlab to main menu
I am trying to install MATLAB. I got it installed, but I am having a hard time getting the launcher icon to work using the instructions from here. I get an icon in my Applications > Programming
taskbar, but when I click the MATLAB launcher icon, it gives me an error saying:
Could not launch 'MATLAB R2010a'
Failed to execute child process
"matlab" (No such file or directory)
How can I fix this? As an alternative route, is there another way to launch matlab without the MATLAB launcher icon?
Thanks! :)
installation launcher matlab
for launch, try typing matlab in terminal. use tab to see if there is any autocomplete. – sazary Apr 2 '11 at 23:40
nope, does not work... – O_O Apr 3 '11 at 0:10
marked as duplicate by Marco Ceppi♦ Jun 5 '11 at 19:43
2 Answers
As you said, typing matlab doesn't work. I use it this way:
1. cd Documents/MATLAB
to change to the MATLAB current folder.
2. Then I use
sh /home/my_name/Applications/R2010a/bin/matlab
where I have my MATLAB installed.
Ah, I see the matlab file. This works executing the Matlab program. Now if only I can get that launcher icon to work... Don't know if you have any ideas to get that to work, do you?
:) Thanks, nevertheless. – O_O Apr 3 '11 at 1:25
Unfortunately, even I don't know how to get the launcher to work. I tried using 'Edit Menus' and adding 'New Item' to the menu, but even with the same command as given above it didn't
work for me. I tried removing sh but no luck! So I use the method shown above. – Chethan S. Apr 3 '11 at 1:35
Try to put this in the launcher:
/usr/local/MATLAB/R2010a/bin/matlab -desktop
(or just change the path to your installation folder and append /matlab -desktop)
you can read more information here:
Add Matlab to main menu
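For the launcher icon itself, the usual fix on Ubuntu is a desktop entry whose Exec line carries the full path to the matlab script. The file below is only an example (the install path is an assumption; point Exec at your own bin/matlab), saved as ~/.local/share/applications/matlab.desktop:

```ini
[Desktop Entry]
Type=Application
Name=MATLAB R2010a
Comment=Start MATLAB
# Example path -- edit Exec to point at your own bin/matlab
Exec=/usr/local/MATLAB/R2010a/bin/matlab -desktop
Terminal=false
Categories=Development;
```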
Lakeside, CA Statistics Tutor
Find a Lakeside, CA Statistics Tutor
...I know how to break down any and all the material covered on the exam to help any student achieve a high score. I have been drawing and painting for approximately 9 years. I was taught in a
classical manner of drawing that focuses on light, shade, and composition.
19 Subjects: including statistics, chemistry, calculus, physics
...If I have a chance to get familiar with the assignment and code, it means that I use less of our time together (for which I bill you) figuring out what's going on. I generally ask for at least
a day's advance notice of: 1) the assignment, 2) any code you have written, 3) any specifics about what...
22 Subjects: including statistics, English, grammar, Java
Hello, My name is Sarmad, and I am a math tutor. I have a Bachelor degree in mathematics and teaching credentials from San Diego State University. Also, I have worked for a long period of time as
a math tutor at various places.
8 Subjects: including statistics, calculus, geometry, algebra 2
Hello! My name is Eric, and I hold a Bachelor's degree in Mathematics and Cognitive Science from the University of California - San Diego. I began tutoring math in high school, volunteering to
assist an Algebra 1 class for 4 hours per week.
13 Subjects: including statistics, calculus, geometry, algebra 1
...I understand that many students suffer from 'math anxiety' and I would love to help alleviate that stress! If you are having trouble with algebra, I can help! I am most proficient in math, and
I am planning to go to graduate school to become a high school math teacher.
32 Subjects: including statistics, reading, chemistry, geometry
Wolfram Demonstrations Project
Keplerian Orbital Elements
This Demonstration visualizes the influence of the Keplerian elements of a celestial body (e.g., a planet or asteroid orbiting around the Sun) on its orbit in 3-space.
Keplerian or osculating orbital elements are the natural set of variables to describe the motion of a celestial body (planet, asteroid, satellite) in 3-space: while in the two-body problem the full
set of Cartesian coordinates changes with time, the corresponding Keplerian elements are all constant except for the mean anomaly M. The semi-major axis a and the eccentricity e define the form of the
ellipse; the inclination i, periapsis ω, and node Ω define the orientation of the ellipse in 3-space. The only variable in the system is the mean anomaly M, defining the position of the planet in its
orbit.
Snapshot 1: form of the ellipse (change a, e)
Snapshot 2: orientation of the ellipse in 3-space (change i, ω, Ω)
Snapshot 3: position of the body in the ellipse (change M)
Many more general n-body systems (solar system, lunar, or artificial satellite motion) can be modelled as perturbed two-body problems, where the Keplerian elements may oscillate around their mean
values.
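The role of each element can be made concrete in code. The sketch below is not part of the Demonstration; the function name and rotation convention are my own. The elements a and e fix the ellipse, i, ω, Ω orient it, and M locates the body along it.

```python
import math

def kepler_to_cartesian(a, e, i, omega, Omega, M):
    """Cartesian position from Keplerian elements (angles in radians)."""
    # Solve Kepler's equation M = E - e sin E for the eccentric anomaly E
    E = M
    for _ in range(50):                  # Newton's method
        E -= (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
    # Position in the orbital plane (perifocal frame): a and e act here
    x_p = a * (math.cos(E) - e)
    y_p = a * math.sqrt(1.0 - e * e) * math.sin(E)
    # Orientation in 3-space: rotate by omega, i, and Omega
    co, so = math.cos(omega), math.sin(omega)
    ci, si = math.cos(i), math.sin(i)
    cO, sO = math.cos(Omega), math.sin(Omega)
    x = (cO * co - sO * so * ci) * x_p + (-cO * so - sO * co * ci) * y_p
    y = (sO * co + cO * so * ci) * x_p + (-sO * so + cO * co * ci) * y_p
    z = (si * so) * x_p + (si * co) * y_p
    return x, y, z

# Circular orbit in the reference plane, a quarter period after periapsis
x, y, z = kepler_to_cartesian(1.0, 0.0, 0.0, 0.0, 0.0, math.pi / 2)
```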
Exact Sampling with Coupled Markov Chains
and Applications to Statistical Mechanics
For many applications it is useful to sample from a finite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible aperiodic) Markov chain
whose stationary distribution is the desired distribution on this set; after the Markov chain has run for M steps, with M sufficiently large, the distribution governing the state of the chain
approximates the desired distribution. Unfortunately it can be difficult to determine how large M needs to be. We describe a simple variant of this method that determines on its own when to stop, and
that outputs samples in exact accordance with the desired distribution. The method uses couplings, which have also played a role in other sampling schemes; however, rather than running the coupled
chains from the present into the future, one runs from a distant point in the past up until the present, where the distance into the past that one needs to go is determined during the running of the
algorithm itself. If the state space has a partial order that is preserved under the moves of the Markov chain, then the coupling is often particularly efficient. Using our approach one can sample
from the Gibbs distributions associated with various statistical mechanics models (including Ising, random-cluster, ice, and dimer) or choose uniformly at random from the elements of a finite
distributive lattice.
Random Structures and Algorithms, volume 9 number 1&2, pp. 223--252, 1996. Copyright © 1996 by John Wiley & Sons.
Entropy Confusion
You should have, because you are completely wrong. Find me *any* discussion of statistical mechanics in this:
In what way?
True chemical engineers also need/use mechanical engineering theory of pipes pumps and fittings, structural engineering theory of pressure vessels and structures, and so on and so forth. So there is
much of this in chem eng literature.
This just proves my point about how much overlap there is between disciplines.
But all this would be at nought without the theory of the chemicals and their reactions that go into these plants.
It is often said in textbooks on physical chemistry that thermodynamics (read classical here) defines what reactions are possible, but tells us nothing about the rates of these reactions. The reaction
may be thermodynamically feasible, but so slow as to be unusable.
For instance glass is soluble in pure water.
The catch is that the rate of solution is measured on the geological timescale.
The mathematics of these rates is definitely the province of statistical mechanics. I am sure you will find lots of reaction-rate information in the references you mention, amongst others.
Physical Chemists also use a slightly different notation when they discuss classical thermodynamics - it has much to commend it.
This is simply labelling some of the variables with subscripts to indicate the conditions, so for instance rather than using
[tex]\Delta Q\quad or\quad q[/tex]
[tex]\Delta {Q_v}\quad or\quad {q_v}[/tex] or [tex]\Delta {Q_p}\quad or\quad {q_p}[/tex]
are used to indicate conditions of constant volume or pressure.
This helps ensure the appropriate equations are used in calculating quantities such as enthalpy, entropy, free energy etc.
There is another entropy thread concurrent with this one where we are working through this rather better, without all this squabbling.
I had thought my "attitude" was one of evenhandedness to both CT and SM, as both have their place; both supply answers unavailable to the other, and both concur where they overlap.
Discovering representative models in large time series databases
, 2003
"... Several important time series data mining problems reduce to the core task of finding approximately repeated subsequences in a longer time series. In an earlier work, we formalized the idea of
approximately repeated subsequences by introducing the notion of time series motifs. Two limitations of thi ..."
Cited by 119 (21 self)
Several important time series data mining problems reduce to the core task of finding approximately repeated subsequences in a longer time series. In an earlier work, we formalized the idea of
approximately repeated subsequences by introducing the notion of time series motifs. Two limitations of this work were the poor scalability of the motif discovery algorithm, and the inability to
discover motifs in the presence of noise. Here we address these limitations by introducing a novel algorithm inspired by recent advances in the problem of pattern discovery in biosequences. Our
algorithm is probabilistic in nature, but as we show empirically and theoretically, it can find time series motifs with very high probability even in the presence of noise or “don’t care ” symbols.
Not only is the algorithm fast, but it is an anytime algorithm, producing likely candidate motifs almost immediately, and gradually improving the quality of results over time.
"... Time series motifs are approximately repeated patterns found within the data. Such motifs have utility for many data mining algorithms, including rule-discovery, novelty-detection, summarization
and clustering. Since the formalization of the problem and the introduction of efficient linear time algo ..."
Cited by 8 (1 self)
Time series motifs are approximately repeated patterns found within the data. Such motifs have utility for many data mining algorithms, including rule-discovery, novelty-detection, summarization and
clustering. Since the formalization of the problem and the introduction of efficient linear time algorithms, motif discovery has been successfully applied to many domains, including medicine, motion
capture, robotics and meteorology. In this work we show that most previous applications of time series motifs have been severely limited by the definition’s brittleness to even slight changes of
uniform scaling, the speed at which the patterns develop. We introduce a new algorithm that allows discovery of time series motifs with invariance to uniform scaling, and show that it produces
objectively superior results in several important domains. Apart from being more general than all other motif discovery algorithms, a further contribution of our work is that it is simpler than
previous approaches, in particular we have drastically reduced the number of parameters that need to be specified.
"... In this work, we introduce the new problem of finding time series discords. Time series discords are subsequences of a longer time series that are maximally different to all the rest of the time
series subsequences. They thus capture the sense of the most unusual subsequence within a time series. Ti ..."
Cited by 7 (0 self)
In this work, we introduce the new problem of finding time series discords. Time series discords are subsequences of a longer time series that are maximally different to all the rest of the time
series subsequences. They thus capture the sense of the most unusual subsequence within a time series. Time series discords have many uses for data mining, including improving the quality of
clustering, data cleaning, summarization, and anomaly detection. As we will show, discords are particularly attractive as anomaly detectors because they only require one intuitive parameter (the
length of the subsequence) unlike most anomaly detection algorithms that typically require many parameters. While the brute force algorithm to discover time series discords is quadratic in the length
of the time series, we show a simple algorithm that is 3 to 4 orders of magnitude faster than brute force, while guaranteed to produce identical results. We evaluate our work with a comprehensive set
of experiments. In particular, we demonstrate the utility of discords with objective experiments on domains as diverse as Space Shuttle telemetry monitoring, medicine, surveillance, and industry, and
we demonstrate the effectiveness of our discord discovery algorithm with more than one million experiments, on 82 different datasets from diverse domains.
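The brute-force algorithm the abstract refers to is short to write down. This sketch is illustrative, not the authors' code; it uses z-normalized Euclidean distance and skips trivial self-matches (overlapping windows), which is the usual convention for discords.

```python
import math

def brute_force_discord(series, m):
    """Start index and distance of the top discord: the length-m
    subsequence whose nearest non-overlapping match is farthest away."""
    def znorm(seq):
        mu = sum(seq) / len(seq)
        sd = math.sqrt(sum((v - mu) ** 2 for v in seq) / len(seq)) or 1.0
        return [(v - mu) / sd for v in seq]

    subs = [znorm(series[i:i + m]) for i in range(len(series) - m + 1)]
    best_idx, best_dist = -1, -1.0
    for i, a in enumerate(subs):
        nearest = math.inf
        for j, b in enumerate(subs):
            if abs(i - j) < m:           # exclude trivial self-matches
                continue
            nearest = min(nearest, math.dist(a, b))
        if nearest > best_dist:
            best_idx, best_dist = i, nearest
    return best_idx, best_dist

# A sine wave with an injected glitch: the discord should cover the glitch
ts = [math.sin(x / 5.0) for x in range(120)]
ts[60:64] = [3.0, -3.0, 3.0, -3.0]
idx, _ = brute_force_discord(ts, 8)
```

This is the quadratic baseline; the point of the paper's heap-and-ordering tricks is to prune most of the inner loop while returning the identical answer.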
- In Proc. of SIAM International Conference on Data Mining (SDM'07), 2007
"... The problem of efficiently finding images that are similar to a target image has attracted much attention in the image processing community and is rightly considered an information retrieval
task. However, the problem of finding structure and regularities in large image datasets is an area in which ..."
Cited by 5 (2 self)
The problem of efficiently finding images that are similar to a target image has attracted much attention in the image processing community and is rightly considered an information retrieval task.
However, the problem of finding structure and regularities in large image datasets is an area in which data mining is beginning to make fundamental contributions. In this work, we consider the new
problem of discovering shape motifs, which are approximately repeated shapes within (or between) image collections. As we shall show, shape motifs can have applications in tasks as diverse as
anthropology, law enforcement, and historical manuscript mining. Brute force discovery of shape motifs could be untenably slow, especially as many domains may require an expensive rotation invariant
distance measure. We introduce an algorithm that is two to three orders of magnitude faster than brute force search, and demonstrate the utility of our approach with several real world datasets from
diverse domains.
- IEEE Trans. on Information Technology
"... Abstract — In this work we introduce the new problem of finding time series discords. Time series discords are subsequences of longer time series that are maximally different to all the rest of
the time series subsequences. They thus capture the sense of the most unusual subsequence within a time se ..."
Cited by 2 (0 self)
Abstract — In this work we introduce the new problem of finding time series discords. Time series discords are subsequences of longer time series that are maximally different to all the rest of the
time series subsequences. They thus capture the sense of the most unusual subsequence within a time series. While discords have many uses for data mining, they are particularly attractive as anomaly
detectors because they only require one intuitive parameter (the length of the subsequence) unlike most anomaly detection algorithms that typically require many parameters. While the brute force
algorithm to discover time series discords is quadratic in the length of the time series, we show a simple algorithm that is 3 to 4 orders of magnitude faster than brute force, while guaranteed to
produce identical results. We evaluate our work with a comprehensive set of experiments on electrocardiograms and other medical datasets.
- In Proceedings of the Tenth International Workshop on Multimedia Data Mining, 2010
"... The problem of identifying frequently occurring patterns, or motifs, in time series data has received a lot of attention in the past few years. Most existing work on finding time series motifs
require that the length of the patterns be known in advance. However, such information is not always availa ..."
Cited by 2 (0 self)
The problem of identifying frequently occurring patterns, or motifs, in time series data has received a lot of attention in the past few years. Most existing work on finding time series motifs
require that the length of the patterns be known in advance. However, such information is not always available. In addition, motifs of different lengths may co-exist in a time series dataset. In this
work, we propose a novel approach, based on grammar induction, for approximate variable-length time series motif discovery. Our algorithm offers the advantage of discovering hierarchical structure,
regularity and grammar from the data. The preliminary results are promising. They show that the grammar-based approach is able to find some important motifs, and suggest that the new direction of
using grammar-based algorithms for time series pattern discovery might be worth exploring. human life. Some examples of such data include speech, electrocardiogram (ECG) signals, radar signals,
seismic activities, etc. In addition to the conventional definition of time series, i.e., measurements taken over time, recently, it has been shown that certain other multimedia data, e.g., images
and shapes [48, 49], and XML [19], can be converted to time series and mined with promising results. Figure 1 shows an example of how shapes can be converted to time series.
"... Time series motifs are pairs of individual time series, or subsequences of a longer time series, which are very similar to each other. As with their discrete analogues in computational biology,
this similarity hints at structure which has been conserved for some reason and may therefore be of intere ..."
Time series motifs are pairs of individual time series, or subsequences of a longer time series, which are very similar to each other. As with their discrete analogues in computational biology, this
similarity hints at structure which has been conserved for some reason and may therefore be of interest. Since the formalism of time series motifs in 2002, dozens of researchers have used them for
diverse applications in many different domains. Because the obvious algorithm for computing motifs is quadratic in the number of items, more than a dozen approximate algorithms to discover motifs
have been proposed in the literature. In this work, for the first time, we show a tractable exact algorithm to find time series motifs. As we shall show through extensive experiments, our algorithm
is up to three orders of magnitude faster than brute-force search in large datasets. We further show that our algorithm is fast enough to be used as a subroutine in higher level data mining
algorithms for anytime classification, near-duplicate detection and summarization, and we consider detailed case studies in domains as diverse as electroencephalograph interpretation and
entomological telemetry data mining.
, 2006
"... Finding the most unusual time series subsequence: algorithms and applications ..."
"... Time series data is ubiquitous and plays an important role in virtually every domain. For example, in medicine, the advancement of computer technology has enabled more sophisticated patients
monitoring, either on-site or remotely. Such monitoring produces massive amount of time series data, which co ..."
Time series data is ubiquitous and plays an important role in virtually every domain. For example, in medicine, the advancement of computer technology has enabled more sophisticated patients
monitoring, either on-site or remotely. Such monitoring produces massive amount of time series data, which contain valuable information for pattern learning and knowledge discovery. In this paper, we
explore the problem of identifying frequently occurring patterns, or motifs, in streaming medical data. The problem of frequent patterns mining has many potential applications, including compression,
summarization, and event prediction. We propose a novel approach based on grammar induction that allows the discovery of approximate, variable-length motifs in streaming data. The preliminary results
show that the grammar-based approach is able to find some important motifs in some medical data, and suggest that using grammar-based algorithms for time series pattern discovery might be worth
exploring. attack prediction [38]. In bioinformatics, it is well understood that overrepresented DNA sequences often have biological significance [9, 11, 12, 28, 32]. A substantial body of literature
has been devoted to techniques to discover such patterns [2, 3]. In a previous work, we defined the related concept of “time series motif ” [18], which are frequently occurring patterns in time
series data. Since then, a great deal of work has been proposed for the discovery of time series motifs | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=547490","timestamp":"2014-04-21T01:36:00Z","content_type":null,"content_length":"38449","record_id":"<urn:uuid:4d661818-e4b8-4f52-b8e7-25964b667485>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00106-ip-10-147-4-33.ec2.internal.warc.gz"} |
Emeryville Algebra 2 Tutor
...My GPA was 3.35. In other words, I can help with study skills. I tutored algebra 1 for Diablo Valley College for three years. I privately tutored six different students in the subject during
that time as well (ranging in age from 16 to 40). I have been tutoring all levels of math for the last seven years.
15 Subjects: including algebra 2, reading, calculus, writing
...Each challenge is a puzzle, and asking the right questions is our best tool. I prefer to guide a student to an answer, rather than me simply supplying it. For editing/proofreading, it is
important to me that the student understands the principles behind changes I suggest, and I am not comfortable handing back a document I've been through without discussing my corrections.
34 Subjects: including algebra 2, Spanish, English, reading
...I understand where the problems are, and how best to get past them and onto a confident path to math success. My undergraduate degree is in mathematics, and I have worked as a computer
professional, as well as a math tutor. My doctoral degree is in psychology.
20 Subjects: including algebra 2, calculus, geometry, biology
I hold a doctorate degree in Clinical Psychology and currently am completing postdoctoral hours for licensure. I am available to assist clients in Psychology and Statistics. My rates are
competitive and I'm available in the Oakland/Berkeley areas.
12 Subjects: including algebra 2, reading, algebra 1, English
...The students find that their arithmetic prowess gives them the confidence to master subjects such as algebra, geometry, and trigonometry. I teach Statistics at a university to psychology and
criminology majors currently. I completed the examinations to become an Associate of the Society of Actuari...
10 Subjects: including algebra 2, calculus, statistics, geometry | {"url":"http://www.purplemath.com/emeryville_ca_algebra_2_tutors.php","timestamp":"2014-04-17T16:00:22Z","content_type":null,"content_length":"23947","record_id":"<urn:uuid:b44d41bb-1b1f-45ea-88a7-909c376aeb94>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00363-ip-10-147-4-33.ec2.internal.warc.gz"} |
Capacitor calculator fields:
• Capacitor rated frequency (Hz)
• Capacitor rated voltage (kV)
• Calculated rated capacitance (uF)
• Capacitor reactance (Ohms)
• Capacitor kVar
• Capacitor current rating (Amps)
For application of the above rated capacitors on systems that deviate from their nameplate values, use the calculators below. Calculator 3 uses Calculator 1's input values. Calculator 4 uses
Calculator 2's input values.
Known variables: capacitor voltage and capacitor frequency
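The relationships behind calculators of this kind are Xc = 1/(2πfC), Q = V²/Xc, and I = V/Xc. A sketch of the conversion (the example ratings are made up, not taken from the page):

```python
import math

def capacitor_ratings(f_hz, v_kv, c_uf):
    """Reactance (ohms), reactive power (kvar), and current (A) of a
    capacitor from its rated frequency, voltage, and capacitance."""
    c = c_uf * 1e-6                       # microfarads -> farads
    v = v_kv * 1e3                        # kilovolts  -> volts
    xc = 1.0 / (2 * math.pi * f_hz * c)   # Xc = 1 / (2 pi f C)
    kvar = v ** 2 / xc / 1e3              # Q = V^2 / Xc, expressed in kvar
    amps = v / xc                         # I = V / Xc
    return xc, kvar, amps

# Example (made-up ratings): 100 uF capacitor rated 0.48 kV at 60 Hz
xc, kvar, amps = capacitor_ratings(60, 0.48, 100)
```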
The Output Port block produces the baseband-equivalent time-domain response of an input signal traveling through a series of RF physical components. The Output Port block
1. Extracts the complex impulse response of the linear subsystem for baseband-equivalent modeling of the RF linear system.
The Output Port block also serves as a connecting port from an RF physical part of the model to the Simulink®, or mathematical, part of the model. For more information about how the Output Port
block converts the physical modeling environment signals to mathematical Simulink signals, see Convert to and from Simulink Signals.
Note: Some RF blocks require the sample time to perform baseband modeling calculations. To ensure the accuracy of these calculations, the Input Port block, as well as the mathematical RF
blocks, compare the input sample time to the sample time you provide in the mask. If they do not match, or if the input sample time is missing because the blocks are not connected, an error
message appears.
Linear Subsystem
For the linear subsystem, the Output Port block uses the Input Port block parameters and the interpolated S-parameters calculated by each of the cascaded physical blocks to calculate the
baseband-equivalent impulse response. Specifically, it
1. Determines the modeling frequencies f as an N-element vector. The modeling frequencies are a function of the center frequency f[c], the sample time t[s], and the finite impulse response filter
length N, all of which you specify in the Input Port block dialog box.
The nth element of f, f[n], is given by
2. Calculates the passband transfer function for the frequency range as the ratio of load voltage to source voltage, H(f) = V[L](f) / V[S](f), where V[S] and V[L] are the source and load voltages, and f represents the modeling frequencies. More specifically,
● Z[S] is the source impedance.
● Z[L] is the load impedance.
● S[ij] are the S-parameters of a two-port network.
The blockset derives the passband transfer function from the Input Port block parameters as shown in the following figure:
3. Translates the passband transfer function to baseband as H(f – f[c]), where f[c] is the specified center frequency.
The baseband transfer function is shown in the following figure.
4. Obtains the baseband-equivalent impulse response by calculating the inverse FFT of the baseband transfer function. For faster simulation, the block calculates the IFFT using the next power of 2
greater than the specified finite impulse response filter length. Then, it truncates the impulse response to a length equal to the filter length specified.
For the linear subsystem, the Output Port block uses the calculated impulse response as input to the DSP System Toolbox™ Digital Filter block to determine the output.
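Steps 3 and 4 can be sketched in Python with NumPy. The transfer function used here is a stand-in (an ideal flat passband), since the real block derives H from the cascaded S-parameters:

```python
import numpy as np

def baseband_impulse_response(H_baseband, n_taps):
    """Steps 3 and 4: given the baseband transfer function sampled at the
    N modeling frequencies, take the IFFT on the next power of 2 greater
    than the filter length, then truncate to n_taps coefficients."""
    nfft = 1 << int(n_taps).bit_length()   # next power of 2 > n_taps
    h = np.fft.ifft(H_baseband, n=nfft)    # zero-pads H_baseband to nfft
    return h[:n_taps]

# Stand-in transfer function: a flat passband over N modeling frequencies.
N = 64
H = np.ones(N, dtype=complex)
h = baseband_impulse_response(H, n_taps=N)
```

For the flat passband, the energy of the truncated impulse response concentrates in the first tap, as expected for an all-pass response.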
Nonlinear Subsystem
The nonlinear subsystem is implemented by AM/AM and AM/PM nonlinear models, as shown in the following figure.
The nonlinearities of AM/AM and AM/PM conversions are extracted from the power data of an amplifier or mixer by the equations
where AM[in] is the AM of the input voltage, AM[out] and PM[out] are the AM and PM of the output voltage, R[s] is the source resistance (50 ohms), R[l] is the load resistance (50 ohms), P[in] is the input power, P[out] is the output power, and ϕ is the phase shift between the input and output voltage.
Note: You can provide power data via a .amp file. See AMP File Format in the RF Toolbox™ documentation for information about this format.
The following figure shows the original power data of an amplifier.
This figure shows the extracted AM/AM nonlinear conversion.
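Since the equation images themselves did not survive extraction, here is a hedged sketch of the power-to-voltage-amplitude conversion behind the AM/AM and AM/PM curves. The peak-amplitude convention P = V²/(2R) and the 50-ohm terminations are assumptions for illustration; the toolbox's exact definitions may differ by a constant factor:

```python
import math

R_S = 50.0   # source resistance, ohms (assumed)
R_L = 50.0   # load resistance, ohms (assumed)

def am_am_am_pm(p_in_w, p_out_w, phase_deg):
    """Convert one (Pin, Pout, phase) power-data point into AM/AM and
    AM/PM form.  Assumes the peak-amplitude convention P = V^2 / (2R)."""
    am_in = math.sqrt(2.0 * R_S * p_in_w)    # input voltage amplitude
    am_out = math.sqrt(2.0 * R_L * p_out_w)  # output voltage amplitude
    pm_out = phase_deg                        # output phase shift
    return am_in, am_out, pm_out

# 1 mW in, 100 mW out (20 dB power gain), no phase shift:
am_in, am_out, pm = am_am_am_pm(1e-3, 100e-3, 0.0)
```

A 20 dB power gain corresponds to a factor-of-10 voltage gain under this convention, which is what the AM/AM point reflects.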
Dialog Box
Main Tab
Load impedance of the RF network described in the physical model to which it connects.
Visualization Tab
This tab shows parameters for creating plots if you display the Output Port mask after you perform one or more of the following actions:
● Run a model with two or more blocks between the Input Port block and the Output Port block.
● Click the Update Diagram button to initialize a model with two or more blocks between the Input Port block and the Output Port block.
For information about plotting, see Create Plots. | {"url":"http://www.mathworks.se/help/simrf/ref/outputport.html?nocookie=true","timestamp":"2014-04-23T06:42:00Z","content_type":null,"content_length":"45791","record_id":"<urn:uuid:ba07276a-5b17-4f34-8e15-e8de69ac75d7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00345-ip-10-147-4-33.ec2.internal.warc.gz"} |
UWEE Tech Report Series
Maya R. Gupta
measure theory, Borel, sigma-algebra, probability measure
This tutorial is an informal introduction to measure theory for people who are interested in reading papers that use measure theory. The tutorial assumes one has had at least a year of college-level
calculus, some graduate level exposure to random processes, and familiarity with terms like ``closed'' and ``open.'' The focus is on the terms and ideas relevant to applied probability and
information theory. There are no proofs and no exercises.
Download the PDF version
Download the Gzipped Postscript version | {"url":"https://www.ee.washington.edu/techsite/papers/refer/UWEETR-2006-0008.html","timestamp":"2014-04-19T06:53:26Z","content_type":null,"content_length":"2711","record_id":"<urn:uuid:ef66bfde-4ea7-438f-a9c1-35b339586776>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00608-ip-10-147-4-33.ec2.internal.warc.gz"} |
A formula under the hood of a columnar transposition cipher
July 30, 2010, 8:00 am
The return of Friday Random 10
Friday Random 10 has slipped out of the rotation lately, so let’s fix that. Hitting the random shuffle button on the iPhone, we have…
1. Delia’s Gone (Johnny Cash, American Recordings)
2. Guide Vocal (Genesis, Duke)
3. All Your Love (Otis Rush, Essential Chicago Blues)
4. Why Should I Feel Lonely (Robert Randolph & the Family Band, Unclassified)
5. Catch Me If I Try (David Wilcox, East Asheville Hardware)
6. House of Tom Bombadil (Nickel Creek, Nickel Creek)
7. Digital Man (Rush, Signals)
8. Fei Hua Dian Cui (Lui Pui-Yuen, China: Music of the Pipa)
9. Turn the Page (Rush, Hold Your Fire)
10. A Little Bluer Than That (Alan Jackson, Drive)
Here’s the video for “Delia’s Gone” (#1 on the list). The song is a classic “death ballad”, one of the standard idioms of country and folk music. And yet, when it came out in 1994, none of the
country music stations on radio or TV wanted to play it because…
October 15, 2010, 9:00 am
Friday random 10
Here’s some music for the end of the week, straight off the iPhone set to random shuffle:
1. Daughters (John Mayer, Heavier Things)
2. Custard Pie (Led Zeppelin, Physical Graffiti)
3. Far East Medley (Bela Fleck and the Flecktones, Live Art)
4. Heartbreak Hotel (Elvis Presley, Elvis 30 #1 Hits)
5. On Your Shore (Enya, Watermark)
6. Treasure of the Broken Land (Mark Heard, High Noon)
7. When It’s Good (Ben Harper, Diamonds on the Inside)
8. You Send Me (Steve Miller Band, Fly Like an Eagle)
9. When Love Comes Around (Alan Jackson, Drive)
10. Big Things Too (Veggie Tales, Veggie Tunes 2)
I have to focus this time on the first one in the list, John Mayer’s “Daughters”. People have many different opinions about John Mayer, not all of them good, but I’m a big fan — and mainly because of
this song. Mayer has a sort of reputation as a womanizer but his insights on girls and parenting in this song…
November 12, 2010, 4:04 pm
This week in screencasting: Optimization-palooza
My calculus class hit optimization problems this week — or it might be better to say the class got hit by optimization problems. These are tough problems because of all their many moving parts,
especially the fact that one of those parts is to build the model you plan to optimize. Most of my students have had calculus in high school, but too many calculus courses in high school as well as
college focus almost exclusively on algorithms for computation and spend little to no time on how to create a model in the first place. Classes that are so structured are doing massive harm to
students in a number of ways, but that’s for another post or two.
Careful study of worked-out examples is an essential part of understanding optimization problems (though not the only part, and this alone isn’t sufficient). The textbook has a few of these. The
professor can provide more, but class time really …
August 26, 2011, 9:00 am
Friday Random 10, 8/26/11
Don’t look now, but it’s the return of the Friday Random 10. Ten songs selected at random from my family’s, um, eclectic iTunes library. Notice how I say “my family’s” library, so as to deflect
questions about why there are so many kids’ songs or Glee stuff coming up.
1. Candles (Glee cast version); Glee: The Music Presents the Warblers
2. Spiritual; Johnny Cash, Unchained
3. BWV Praeludium et Fuga in A; James Kibbie, Bach Organ Works: Preludes and Fugues
4. Sara; Fleetwood Mac, Greatest Hits
5. It Doesn’t Matter; Alison Krauss & Union Station, So Long So Wrong
6. Celebration; Kool & the Gang, Gold
7. Amazed; Lonestar, Lonely Grill
8. All the Way My Savior Leads Me; Rich Mullins, The World as Best I Remember It, vol. 2
9. Soul Refreshing; Robert Randolph & The Family Band, Unclassified
10. Jealous Hearted Man; Muddy Waters, Hard Again
Let’s focus this week on James Kibbie, a master organist a…
September 21, 2011, 9:00 am
Midweek recap, 9.21.2011
Interesting stuff from elsewhere on the web this week:
September 28, 2011, 4:00 am
Midweek recap, 09.28.2011
Good stuff from the internet this past week:
October 17, 2011, 7:30 am
Math Monday: TV Lawyers Solve NP-Complete Problem (part 1)
For the next couple of weeks, Math Monday here at the blog will feature a guest blogger. Ed Aboufadel is Professor of Mathematics and chair of the Mathematics Department at Grand Valley State
University, where I work. He’ll be writing a two-part series on a neat appearance of an NP-complete problem on network TV, adding yet another data point that mathematics is indeed everywhere. Thanks
in advance, Ed!
On the new USA-network TV series Suits [1], Harvey Specter is a senior partner at the law firm of Pearson Hardman, and Mike Ross is his new associate. Mike never went to law school, but he combines
a photographic, elephantine memory with near-genius intelligence to fake it well. Harvey is in on the deception, but none of the other partners know. During the eighth episode of the first season of
Suits (broadcast August 11, 2011), Harvey and Mike, working with Louis Litt, a…
October 24, 2011, 7:30 am
Math Monday: TV Lawyers Solve NP-Complete Problem (part 2)
This is the second installment of a two-part article from guest blogger Ed Aboufadel. Thanks again, Ed, for contributing.
In Part I, we learned of an instance of the NP-complete problem subset-sum [1] that was solved by three lawyers on an episode of the USA Network show Suits [2]. The problem was to go through a set
of deposits made to five banks in Liechtenstein and find a subset of deposits, where the total of the deposits was $152,375,242.18. Described as “simple mathematics” by one of the lawyers, the team
solved the problem in a relatively short length of time. They couldn’t use a quick approximation algorithm for subset-sum, since they needed the sum to be exactly equal to their target amount. So,
were they just lucky, smarter than the rest of us, or did they do something practically impossible?
Consider the following “back of the envelope” calculations. First,…
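The problem the lawyers faced is subset-sum over deposit amounts. A brute-force take can be sketched as follows, in integer cents to avoid floating-point trouble (the deposit values below are made up for illustration, not from the episode):

```python
from itertools import combinations

def find_subset_with_sum(amounts_cents, target_cents):
    """Exhaustively search for a subset of deposits hitting the target
    exactly.  Exponential in len(amounts) -- fine for a toy example,
    hopeless at the scale the episode implies."""
    for r in range(1, len(amounts_cents) + 1):
        for combo in combinations(amounts_cents, r):
            if sum(combo) == target_cents:
                return combo
    return None

deposits = [12550, 9999, 30000, 45001, 77777, 102375]   # illustrative values
hit = find_subset_with_sum(deposits, 12550 + 45001 + 102375)
```

The exponential blow-up of this search is precisely why solving the on-screen instance "in a relatively short length of time" strains credulity.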
November 7, 2011, 7:45 am
Math Monday: Columnar transposition ciphers and permutations, oh my
I hope you enjoyed Ed’s guest posts on NP-complete problems on TV the last couple of Mondays. It’s always great to hear from others on math that they are thinking about. This week it’s me again, and
we’re going to get back to the notion of columnar transposition ciphers. In the first post about CTCs, we discussed what they are and in particular the rail fence cipher which is a CTC with two
columns. This post is going to get into the math behind CTCs, and in doing so we’ll be able to work with CTCs on several different levels.
A CTC is just one of many transposition ciphers, which is one of the basic cryptographic primitives. Transposition ciphers work by shuffling the characters in the message according to some predefined
rule. The way these ciphers work is easy to understand if we put a little structure on the situation.
First, label all the positions in the message from \(0\) to …
November 21, 2011, 7:45 am
A formula under the hood of a columnar transposition cipher
It’s been a couple of Math Mondays since we last looked at columnar transposition ciphers, so let’s jump back in. In the last post, we learned that CTC’s are really just permutations on the set of
character positions in a message. That is, a CTC is a bijective function \( \{0, 1, 2, \dots, L-1\} \rightarrow \{0, 1, 2, \dots, L-1\}\) where \(L\) is the length of the message. One of the big
questions we left hanging was whether there was a systematic way of specifying that function — for example, with a formula. The answer is YES, and in this post we’re going to develop that formula.
Before we start, let me just mention again that all of the following ideas are from my paper “The cycle structure and order of the rail fence cipher”, which was published in the journal Cryptologia.
However, the formula you’re about to see here is a newer (and I think improved) version of the one in the…
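Before getting to any closed-form formula, the permutation a CTC induces is easy to compute directly: write the message row by row into n columns, read it off column by column. A sketch (the column count and message are illustrative; two columns gives the rail fence cipher discussed in the earlier post):

```python
def ctc_permutation(length, n_cols):
    """Columnar transposition as a permutation of character positions.
    Returns perm with perm[i] = new position of the character that was
    originally at position i.  Handles a ragged last row correctly."""
    order = [i for c in range(n_cols) for i in range(c, length, n_cols)]
    # order[j] = original index of the j-th ciphertext character
    perm = [0] * length
    for new_pos, old_pos in enumerate(order):
        perm[old_pos] = new_pos
    return perm

def encrypt(msg, n_cols):
    perm = ctc_permutation(len(msg), n_cols)
    out = [''] * len(msg)
    for i, ch in enumerate(msg):
        out[perm[i]] = ch
    return ''.join(out)

cipher = encrypt("WEAREDISCOVERED", 3)
```

Because `perm` is a bijection on {0, 1, ..., L-1}, composing it with itself eventually returns to the identity, which is where the cycle-structure and order questions in the post come from.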
The Chronicle Blog Network, a digital salon sponsored by The Chronicle of Higher Education, features leading bloggers from all corners of academe. Content is not edited, solicited, or necessarily endorsed by The Chronicle. More on the Network...
Casting Out Nines through your favorite RSS reader: SUBSCRIBE | {"url":"http://chronicle.com/blognetwork/castingoutnines/category/weekly-features/page/2/","timestamp":"2014-04-19T15:06:54Z","content_type":null,"content_length":"96756","record_id":"<urn:uuid:b49e5ec1-7c15-43da-88d6-81021108c8a2>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00048-ip-10-147-4-33.ec2.internal.warc.gz"} |
Signal recovery by proximal forward-backward splitting
Results 1 - 10 of 264
, 2009
"... We consider the class of Iterative Shrinkage-Thresholding Algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods is attractive due to its
simplicity, however, they are also known to converge quite slowly. In this paper we present a Fast Iterat ..."
Cited by 365 (4 self)
We consider the class of Iterative Shrinkage-Thresholding Algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods is attractive due to its
simplicity, however, they are also known to converge quite slowly. In this paper we present a Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) which preserves the computational simplicity of
ISTA, but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring
demonstrate the capabilities of FISTA.
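The shared primitive of ISTA and FISTA is a gradient step followed by soft-thresholding; FISTA adds a momentum term. A toy sketch of the constant-step scheme for min ½‖Ax−b‖² + λ‖x‖₁ (a re-implementation for illustration, not the authors' code; backtracking variants are omitted):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min 0.5*||Ax-b||^2 + lam*||x||_1 with constant step 1/L,
    L being the largest eigenvalue of A^T A."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# Toy check: with A = I the problem reduces to plain soft-thresholding.
x = fista(np.eye(3), np.array([3.0, 0.5, -2.0]), lam=1.0)
```

Dropping the momentum update (always taking y = x) recovers plain ISTA, whose slower O(1/k) rate is what the paper improves to O(1/k²).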
- IEEE Journal of Selected Topics in Signal Processing , 2007
"... Abstract—Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach
consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined wi ..."
Cited by 291 (15 self)
Abstract—Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach
consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a sparseness-inducing (ℓ1) regularization term. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution, and compressed sensing are a few well-known examples of this approach. This paper proposes gradient projection (GP) algorithms for
the bound-constrained quadratic programming (BCQP) formulation of these problems. We test variants of this approach that select the line search parameters in different ways, including techniques
based on the Barzilai-Borwein method. Computational experiments show that these GP approaches perform well in a wide range of applications, often being significantly faster (in terms of computation
time) than competing methods. Although the performance of GP methods tends to degrade as the regularization term is de-emphasized, we show how they can be embedded in a continuation scheme to recover
their efficient practical performance. A. Background I.
, 2002
"... This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with low-complexity, expressed in terms of the wavelet coefficients, taking a ..."
Cited by 233 (21 self)
This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by
promoting a reconstruction with low-complexity, expressed in terms of the wavelet coefficients, taking advantage of the well known sparsity of wavelet representations. Previous works have investigated
wavelet-based restoration but, except for certain special cases, the resulting criteria are solved approximately or require very demanding optimization methods. The EM algorithm herein proposed
combines the efficient image representation offered by the discrete wavelet transform (DWT) with the diagonalization of the convolution operator obtained in the Fourier domain. The algorithm alternates
between an E-step based on the fast Fourier transform (FFT) and a DWT-based M-step, resulting in an efficient iterative process requiring O(N log N) operations per iteration. Thus, it is the first image
restoration algorithm that optimizes a wavelet-based penalized likelihood criterion and has computational complexity comparable to that of standard wavelet denoising or frequency domain deconvolution
methods. The convergence behavior of the algorithm is investigated, and it is shown that under mild conditions the algorithm converges to a globally optimal restoration. Moreover, our new approach
outperforms several of the best existing methods in benchmark tests, and in some cases is also much less computationally demanding.
, 2008
"... This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex
relaxation of a rank minimization problem, and arises in many important applications as in the task of reco ..."
Cited by 192 (12 self)
This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex
relaxation of a rank minimization problem, and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem).
Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and
easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative and produces a sequence of matrices {X k, Y k}
and at each step, mainly performs a soft-thresholding operation on the singular values of the matrix Y k. There are two remarkable features making this attractive for low-rank matrix completion
problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X k} is empirically nondecreasing. Both these facts allow the
algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On
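The soft-thresholding of singular values the abstract describes can be sketched as follows; this is just the shrinkage operator D_tau for illustration, not the full matrix-completion iteration:

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: shrink each singular value of Y by
    tau, dropping those that fall below tau.  This is the proximal
    operator of tau * (nuclear norm), so the result has reduced rank."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

Y = np.diag([5.0, 2.0, 0.5])
X = svt(Y, tau=1.0)
```

Because singular values below tau are zeroed outright, the iterates' rank stays low, which is the property the paper exploits for storage and speed.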
, 2008
"... Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute
shrinkage and selection operator (LASSO), waveletbased deconvolution and reconstruction, and compressed sensing ( ..."
Cited by 168 (27 self)
Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage
and selection operator (LASSO), waveletbased deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is
to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularization term. We present an algorithmic framework for the more general
problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization
subproblem involving a quadratic term with diagonal Hessian (which is therefore separable in the unknowns) plus the original sparsity-inducing regularizer. Our approach is suitable for cases in which
this subproblem can be solved much more rapidly than the original problem. In addition to solving the standard ℓ2 − ℓ1 case, our framework yields an efficient solution technique for other
regularizers, such as an ℓ∞-norm regularizer and groupseparable (GS) regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS
problems show that our approach is competitive with the fastest known methods for the standard ℓ2 − ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
- IEEE TRANSACTIONS ON IMAGE PROCESSING , 2007
"... Iterative shrinkage/thresholding (IST) algorithms have been recently proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear
inverse problems. This class of problems results from combining a linear observation model with a nonquadratic ..."
Cited by 96 (19 self)
Iterative shrinkage/thresholding (IST) algorithms have been recently proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse
problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation or wavelet-based regularization). It happens that the
convergence rate of these IST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce
two-step IST (TwIST) algorithms, exhibiting much faster convergence rate than IST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (ℓp norms, some Besov norms, and
total variation), we show that TwIST converges to a minimizer of the objective function, for a given range of values of its parameters. For noninvertible observation operators, we introduce a
monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness
of the new methods are experimentally confirmed on problems of image deconvolution and of restoration with missing samples.
, 2009
"... Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed
sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order ..."
Cited by 71 (1 self)
Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed
sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order methods in convex optimization, most notably Nesterov’s smoothing technique, this paper
introduces a fast and accurate algorithm for solving common recovery problems in signal processing. In the spirit of Nesterov’s work, one of the key ideas of this algorithm is a subtle averaging of
sequences of iterates, which has been shown to improve the convergence properties of standard gradient-descent algorithms. This paper demonstrates that this approach is ideally suited for solving
large-scale compressed sensing reconstruction problems as 1) it is computationally efficient, 2) it is accurate and returns solutions with several correct digits, 3) it is flexible and amenable to
many kinds of reconstruction problems, and 4) it is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters.
Comprehensive numerical experiments on realistic signals exhibiting a large dynamic range show that this algorithm compares favorably with recently proposed state-of-the-art methods. We also apply
the algorithm to solve other problems for which there are fewer alternatives, such as total-variation minimization, and
- IEEE Transaction on Image Processing , 2009
"... This paper studies gradient-based schemes for image denoising and deblurring problems based on the discretized total variation (TV) minimization model with constraints. We derive a fast
algorithm for the constrained TV-based image deburring problem. To achieve this task we combine an acceleration of ..."
Cited by 67 (1 self)
This paper studies gradient-based schemes for image denoising and deblurring problems based on the discretized total variation (TV) minimization model with constraints. We derive a fast algorithm for
the constrained TV-based image deblurring problem. To achieve this task we combine an acceleration of the well-known dual approach to the denoising problem with a novel monotone version of a fast
iterative shrinkage/thresholding algorithm (FISTA) we have recently introduced. The resulting gradient-based algorithm shares a remarkable simplicity together with a proven global rate of convergence
which is significantly better than currently known gradient projections-based methods. Our results are applicable to both the anisotropic and isotropic discretized TV functionals. Initial numerical
results demonstrate the viability and efficiency of the proposed algorithms on image deblurring problems with box constraints.
, 2009
"... The goal of sparse approximation problems is to represent a target signal approximately as a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the
major practical algorithms for sparse approximation. Specific attention is paid to computational issues, ..."
Cited by 60 (0 self)
The goal of sparse approximation problems is to represent a target signal approximately as a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major
practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical
guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and
relevant to a wealth of applications.
- THE JOURNAL OF FOURIER ANALYSIS AND APPLICATIONS , 2004
"... Regularization of ill-posed linear inverse problems via ℓ1 penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an
ℓ1 penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implem ..."
Cited by 58 (10 self)
Regularization of ill-posed linear inverse problems via ℓ1 penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an ℓ1
penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implementation to ℓ1-constraints, using a gradient method, with projection on ℓ1-balls. The
corresponding algorithm uses again iterative soft-thresholding, now with a variable thresholding parameter. We also propose accelerated versions of this iterative method, using ingredients of the
(linear) steepest descent method. We prove convergence in norm for one of these projected gradient methods, without and with acceleration. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=312889","timestamp":"2014-04-17T19:46:01Z","content_type":null,"content_length":"41836","record_id":"<urn:uuid:7e1856e7-8571-4518-8188-4c37e00a2389>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00444-ip-10-147-4-33.ec2.internal.warc.gz"} |
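The Euclidean projection onto an ℓ1 ball that this approach relies on has a well-known O(n log n) sort-based form. A sketch of that standard algorithm (illustrative, independent of the paper's own implementation):

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto {x : ||x||_1 <= radius} via the
    standard sort-and-threshold algorithm.  The result is v soft-
    thresholded by the unique theta that makes the l1 norm hit radius."""
    if np.abs(v).sum() <= radius:
        return v.copy()                       # already inside the ball
    u = np.sort(np.abs(v))[::-1]              # magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.max(np.where(u - (css - radius) / ks > 0)[0]) + 1
    theta = (css[rho - 1] - radius) / rho     # shrinkage amount
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

w = project_l1_ball(np.array([3.0, -1.0, 0.5]), radius=2.0)
```

Note the projection is itself a soft-thresholding, but with a data-dependent threshold, which is exactly the "variable thresholding parameter" the abstract mentions.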
A Vibration Method for Discovering Density Varied Clusters
ISRN Artificial Intelligence
Volume 2012 (2012), Article ID 723516, 8 pages
Research Article
Department of Computer Engineering, Islamic University of Gaza, Palestine
Received 4 August 2011; Accepted 28 August 2011
Academic Editors: Z. He and J. A. Hernandez
Copyright © 2012 Mohammad T. Elbatta et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
DBSCAN is a base algorithm for density-based clustering. It can find clusters of different shapes and sizes in large amounts of data containing noise and outliers. However, it fails to handle the local density variation that exists within a cluster. A good clustering method should allow significant density variation within a cluster because, if we insist on homogeneous clustering, a large number of small, unimportant clusters may be generated. In this paper, an enhancement of the DBSCAN algorithm is proposed that detects clusters of different shapes and sizes whose local densities differ. Our proposed method, VMDBSCAN, first finds the "core" of each cluster generated by applying DBSCAN. Then it "vibrates" points toward the cluster that has the maximum influence on them. Therefore, our proposed method can find the correct number of clusters.
1. Introduction
Unsupervised clustering is an important data analysis task that tries to organize a data set into separate groups with respect to a distance or, equivalently, a similarity measure [1]. Clustering has been applied in many areas, including pattern recognition [2], image processing [3], machine learning [4], and bioinformatics [5].
Clustering methods can be categorized into two main types: fuzzy clustering and hard clustering. In fuzzy clustering, data points can belong to more than one cluster, each with a membership probability [6]. In hard clustering, data points are divided into distinct clusters, where each data point belongs to one and only one cluster. Data points can be grouped with many different techniques, such as partitioning, hierarchical, density-based, grid-based, and model-based methods.
Partitioning algorithms minimize a given clustering criterion by iteratively relocating data points between clusters until a (locally) optimal partition is attained. The most popular partition-based clustering algorithms are k-means [7] and k-medoid [8]. The advantage of partition-based algorithms is their iterative way of creating clusters, but their limitation is that the number of clusters has to be determined by the user and only spherical shapes can be detected as clusters.
Hierarchical algorithms provide a hierarchical grouping of the objects. These algorithms can be divided into two approaches: the bottom-up, or agglomerative, and the top-down, or divisive, approach. In the agglomerative approach, each object starts as its own cluster and, at the end, all objects belong to the same cluster. In the divisive approach, all objects initially belong to one cluster, which is split repeatedly until each object constitutes its own cluster. Hierarchical algorithms create nested relationships among clusters, which can be represented as a tree structure called a dendrogram [9]. The resulting clusters are determined by cutting the dendrogram at a certain level. Hierarchical algorithms use distance measurements between the objects and between the clusters. Many definitions can be used to measure distance between objects, for example, Euclidean, City-block (Manhattan), and Minkowski.
Between the clusters, one can determine the distance as the distance of the two nearest objects in the two clusters (single linkage clustering) [10], or as the two furthest (complete linkage
clustering) [11], or as the distance between the mediods of the clusters. The disadvantage of the hierarchical algorithm is that after an object is assigned to a given cluster, it cannot be modified
later. Also only spherical clusters can be obtained. The advantage of the hierarchical algorithms is that the validation indices (correlation and inconsistency measure), which can be defined on the
clusters, can be used for determining the number of the clusters. The popular hierarchical clustering methods are CHAMELEON [12], BIRCH [13], and CURE [14].
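The agglomerative, single-linkage variant described above can be sketched in a few lines: start with one cluster per object and repeatedly merge the closest pair of clusters. The toy data and the stopping criterion (a target number of clusters, standing in for a dendrogram cut) are illustrative assumptions:

```python
def single_linkage(points, num_clusters):
    """Bottom-up agglomerative clustering with single linkage: the
    inter-cluster distance is that of the two nearest members.
    Squared Euclidean distance is used, which preserves the ordering."""
    clusters = [[p] for p in points]  # each object starts as its own cluster

    def linkage(a, b):
        return min((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2
                   for x in a for y in b)

    while len(clusters) > num_clusters:
        # Find and merge the closest pair of clusters.
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters
```

This also illustrates the drawback noted above: once two objects are merged into the same cluster, the decision is never revisited.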
Density-based algorithms like DBSCAN [15] and OPTICS [16] first find the core objects and then grow clusters from these cores by searching for objects that lie in a neighborhood within a radius epsilon of a given object. The advantage of these types of algorithms is that they can detect clusters of arbitrary shape and can filter out noise.
Grid-based algorithms quantize the object space into a finite number of cells (hyper-rectangles) and then perform the required operations on the quantized space. The advantage of this approach is the
fast processing time, which is in general independent of the number of data objects. Popular grid-based algorithms are STING [17], CLIQUE [18], and WaveCluster [19].
Model-based algorithms find good approximations of model parameters that best fit the data. They can be either partitional or hierarchical, depending on the structure or model they hypothesize about
the data set and the way they refine this model to identify partitionings. They are closer to density-based algorithms in that they grow particular clusters so that the preconceived model is
improved. However, they sometimes start with a fixed number of clusters and they do not use the same concept of density. The most popular model-based clustering method is EM [20].
Fuzzy algorithms suppose that no hard clusters exist on the set of objects, but one object can be assigned to more than one cluster. The best-known fuzzy clustering algorithm is FCM (Fuzzy C-Means) [21].
Categorical data algorithms are specifically developed for data where Euclidean, or other numerical-oriented, distance measures cannot be applied.
The rest of the paper is organized as follows. Section 2 provides related work on density-based clustering. Section 3 presents the DBSCAN clustering algorithm. Section 4 describes the proposed algorithm. In Section 5, simulation and results are presented and discussed. Finally, Section 6 presents conclusions and future work.
2. Related Work
The DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm [15] is a pioneer algorithm of density-based clustering. It requires two user-predefined input parameters: a radius Eps and a minimum number of objects MinPts within that radius. The density of an object is the number of objects in the Eps-neighborhood of that object. DBSCAN does not specify an upper limit for a core object, that is, how many objects may be present in its neighborhood. Because of this, the output clusters can have wide variation in local density, so that a large number of small, unimportant clusters may be generated.
The OPTICS [16] algorithm is an improvement of DBSCAN that deals with clusters of varying density. OPTICS does not assign cluster memberships; instead, it computes an ordering of the objects based on their reachability distance, representing the intrinsic hierarchical clustering structure. Pei et al. [22] proposed a nearest-neighbor cluster method, in which the threshold of density (equivalent to Eps of DBSCAN) is computed via the expectation-maximization (EM) [20] algorithm, and the optimum value of the neighborhood size (equivalent to MinPts of DBSCAN) can be decided by its lifetime. As a result, the clustered points and the noise are separated according to the threshold of density and this optimum value.
In order to adapt DBSCAN to data consisting of multiple processes, an improvement should be made to find the difference in the mth nearest distances of the processes. Roy and Bhattacharyya [23] developed a new DBSCAN algorithm, which may help to find clusters of different density that overlap. However, the parameters in this method are still defined by users. Lin et al. [24] introduced a new approach called GADAC, which may produce more precise classification results than DBSCAN does. Nevertheless, in GADAC, the estimation of the radius depends on the density threshold, which can only be determined in an interactive way.
Pascual et al. [25] developed a density-based clustering method to deal with clusters of different sizes, shapes, and densities. However, the neighborhood radius used to estimate the density of each point has to be defined using prior knowledge, and the method finds Gaussian-shaped clusters, so it is not always suitable for clusters with arbitrary shapes.
Another enhancement of the DBSCAN algorithm is DENCLUE, based on an influence function that describes the impact of an object upon its neighborhood. The density function yields the local density maxima, and this local density value is used to form the clusters. It produces good clustering results even when a large amount of noise is present.
EDBSCAN (an Enhanced Density-Based Spatial Clustering of Applications with Noise) [26] is another extension of DBSCAN; it keeps track of the density variation that exists within a cluster. It calculates the density variance of a core object with respect to its Eps-neighborhood. If the density variance of a core object is less than or equal to a threshold value, and the core object also satisfies a homogeneity index with respect to its neighborhood, then the core object is allowed to expand. However, it calculates the density variance and homogeneity index only locally, in the Eps-neighborhood of a core object.
DD_DBSCAN [27] is another enhancement of DBSCAN, which finds clusters of different shapes and sizes that differ in local density; however, the algorithm is unable to handle density variation within a cluster. DDSC [28] (a Density-Differentiated Spatial Clustering technique) is a further extension of the DBSCAN algorithm. It detects clusters that have non-overlapping spatial regions with reasonably homogeneous density variations within them.
In VDBSCAN [29] (Varied Density-Based Spatial Clustering of Applications with Noise), the authors also tried to improve the results of the DBSCAN algorithm. The method computes the k-distance for each object, sorts the values in ascending order, and plots the sorted values. A sharp change in the k-distance plot corresponds to a suitable value of Eps.
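The k-distance heuristic just described can be sketched as follows: compute each object's distance to its k-th nearest neighbor, sort the values, and take the value just before the sharpest jump as a candidate Eps. The function names, toy data, and the simple "largest jump" elbow rule are illustrative assumptions:

```python
def k_distances(points, k):
    """Distance from each 2-D point to its k-th nearest neighbour,
    returned sorted in ascending order (the k-distance curve)."""
    out = []
    for p in points:
        d = sorted(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
                   for q in points if q is not p)
        out.append(d[k - 1])
    return sorted(out)

def elbow_eps(kdist):
    """Pick the value just before the largest jump in the sorted
    k-distance curve as a candidate Eps."""
    jumps = [kdist[i + 1] - kdist[i] for i in range(len(kdist) - 1)]
    i = max(range(len(jumps)), key=jumps.__getitem__)
    return kdist[i]
```

On data with one dense region and a distant outlier, the curve stays flat over the dense points and then jumps, so the elbow value separates dense points from noise.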
CHAMELEON [12] finds the clusters in a data set with a two-phase algorithm. In the first phase, it generates a k-nearest-neighbor graph. In the second phase, it uses an agglomerative hierarchical clustering algorithm to find the clusters by combining subclusters.
Most clustering algorithms are not robust to noise and outliers, so density-based algorithms are especially important in this case. However, most density-based clustering algorithms are not able to handle local density variation. DBSCAN [15] is one of the most popular algorithms due to the high quality of its noise-free output clusters; however, it also fails to detect clusters of varied density, and many enhancements of DBSCAN exist for handling density variation within a cluster.
3. DBSCAN Algorithm
DBSCAN [15] performs density-based cluster formation. Its advantage is that it can discover clusters with arbitrary shapes and sizes. The algorithm typically regards clusters as dense regions of objects in the data space that are separated by regions of low-density objects. The algorithm has two input parameters, the radius Eps and MinPts. To understand the process of the algorithm, some concepts and definitions have to be introduced. The definition of dense objects is as follows.
Definition 1. The neighborhood within a radius Eps of a given object is called the Eps-neighborhood of the object.
Definition 2. If the Eps-neighborhood of an object contains at least a minimum number MinPts of objects, then the object is called a core object.
Definition 3. Given a set of data objects, D, we say that an object p is directly density-reachable from object q if p is within the Eps-neighborhood of q and q is a core object.
Definition 4. An object p is density-reachable from object q with respect to Eps and MinPts in a given set of data objects, D, if there is a chain of objects p1, ..., pn, with p1 = q and pn = p, such that p(i+1) is directly density-reachable from p(i) with respect to Eps and MinPts, for 1 <= i <= n - 1.
Definition 5. An object p is density-connected to object q with respect to Eps and MinPts in a given set of data objects, D, if there is an object o such that both p and q are density-reachable from o with respect to Eps and MinPts.
According to the above definitions, it only needs to find all the maximal density-connected spaces to cluster the data objects in an attribute space, and these density-connected spaces are the clusters. Every object not contained in any cluster is considered noise and can be ignored.
Explanation of DBSCAN Steps
(i) DBSCAN [31] requires two parameters: a radius epsilon (Eps) and a minimum number of points (MinPts). It starts with an arbitrary starting point that has not been visited. It then finds all the neighbor points within distance Eps of the starting point.
(ii) If the number of neighbors is greater than or equal to MinPts, a cluster is formed. The starting point and its neighbors are added to this cluster, and the starting point is marked as visited. The algorithm then repeats the evaluation process for all the neighbors recursively.
(iii) If the number of neighbors is less than MinPts, the point is marked as noise.
(iv) If a cluster is fully expanded (all points within reach are visited), then the algorithm proceeds to iterate through the remaining unvisited points in the data set.
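The steps above can be sketched as a minimal, self-contained implementation. This is an illustrative sketch, not the authors' code; the toy data and the parameter values Eps = 1.5 and MinPts = 3 are assumptions:

```python
def dbscan(points, eps, min_pts):
    """Label each 2-D point with a cluster id, or -1 for noise,
    following steps (i)-(iv): grow a cluster from each unvisited
    core point by expanding through its neighbors."""
    def neighbors(i):
        # Indices within distance eps of point i (includes i itself).
        return [j for j in range(len(points))
                if ((points[i][0] - points[j][0]) ** 2 +
                    (points[i][1] - points[j][1]) ** 2) ** 0.5 <= eps]

    labels = [None] * len(points)  # None = unvisited
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # step (iii): mark as noise
            continue
        labels[i] = cid             # step (ii): start a new cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid     # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nj = neighbors(j)
            if len(nj) >= min_pts:  # j is itself a core point: expand
                queue.extend(nj)
        cid += 1                    # step (iv): cluster fully expanded
    return labels
```

Run on two dense groups plus one isolated point, the sketch yields two cluster labels and one noise label.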
4. The Proposed Algorithm
One of the problems with DBSCAN is that it cannot handle wide density variation within a cluster.
To overcome this problem, a new algorithm, VMDBSCAN, based on the DBSCAN algorithm is proposed in this section. It first clusters the data objects using DBSCAN. Then, it finds the density functions for all data objects within each cluster. The data object that has the minimum density-function value becomes the core of that cluster. After that, it computes the density variation of a data object with respect to the density of the core object of its own cluster against the densities of the cores of all other clusters. According to the density variance, data objects are moved toward a new core. The new core is the core of another cluster which has the maximum influence on the tested data object.
We intuitively present some definitions.
Definition 6. Suppose that x and y are two data objects in a d-dimensional feature space, F^d. The influence function of data object y on x is a function f^y: F^d -> R defined in terms of a basic influence function f_B: f^y(x) = f_B(x, y).
The influence function we choose is a function that measures the distance between two data objects, namely the Euclidean distance function, so f_B(x, y) = d(x, y).
Definition 7. Given a d-dimensional feature space, F^d, the density function at a data object x is defined as the sum of all the influences on x from the rest of the data objects in the data set D: dens(x) = sum over y in D of f_B(x, y).
According to Definitions 6 and 7, we can calculate the density function for each data point in the space.
Definition 8 (Core). The core object of each cluster is the object that has the minimum density-function value according to Definition 7. That is, we calculate the density function for each object in the cluster, which is given initially by DBSCAN, and the object with the minimum total distance to all other objects becomes the core of that cluster.
Definition 9 (Total Density Function). The Total Density Function represents the difference among the data objects relative to the core; that is, the Total Density Function of a data object x is the difference between the density function of x and that of the core of its cluster.
In addition, according to our initial clusters, which are given by the density-based clustering method, we can use the influence function (Definition 6) and the density function (Definition 7) to calculate the Total Density Function of a data object by subtracting the core's density-function value from the object's:
TDF(x) = dens(x) - dens(core).
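Taking Euclidean distance as the influence function, as the paper chooses, Definitions 6–9 can be sketched directly in code. The function names and toy cluster are illustrative assumptions:

```python
def density(points, x):
    """Definition 7: density of x is the sum of the (Euclidean-distance)
    influences from the data objects; x's own term contributes zero."""
    return sum(((x[0] - p[0]) ** 2 + (x[1] - p[1]) ** 2) ** 0.5
               for p in points)

def core_object(cluster):
    """Definition 8: the object with the minimum density-function value,
    i.e., the minimum total distance to all other objects."""
    return min(cluster, key=lambda x: density(cluster, x))

def total_density(points, x, core):
    """Definition 9: difference between x's density and the core's."""
    return density(points, x) - density(points, core)
```

For the core itself the Total Density Function is zero, and it grows for objects farther from the bulk of the cluster.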
4.1. Vibration Process
Our main idea is the vibration of data objects according to the density of each data object with respect to the core (Definition 8) that represents its cluster, measured by the Total Density Function of each data object as in (5). If the Total Density Function of a data object with respect to its own core is greater than its Total Density Function with respect to some other core, we vibrate all points in that cluster toward the core object which has the maximum influence on that data object, according to
x_new = x + eta * sigma(t) * (c - x),
where x is the current tested point, c is the current tested core, eta is the learning rate, and sigma(t) controls the reduction of the movement over time t.
We use sigma in the vibration equation to control the winner of the current cluster, and we can adapt it to get the best clustering result. The time t is used in our formula to control the reduction in sigma; that is, as time increases, the movement (vibration) of the point toward the new core is reduced.
Formally, we can describe our proposed algorithm as follows.
(1) Calculate the density function for all the data objects.
(2) Cluster the data objects using the traditional DBSCAN algorithm.
(3) Calculate the density function for all the data objects again, and then find the core of each generated cluster.
(4) For each data object, if its Total Density Function with respect to its own core is greater than that with respect to some other core, then vibrate the data objects in that cluster.
The proposed algorithm is described as pseudocode in Algorithm 1.
The first step initializes the value of the learning rate, which takes small values; n denotes the number of data points in the data set. For each data point in the data set, we compute the density function of the data point according to (3) and store the results in an array list of point densities. Line 5 of the algorithm calls the DBSCAN algorithm to produce an initial clustering. In lines 6–8, we find the core object of each cluster resulting from DBSCAN. Line 10 calculates the Total Density Function of each point with respect to its core object. Line 12 calculates the Total Density Function of that point with respect to all other core objects. In lines 13 to 16, we check the effect of the core objects on the data object; if the effect of its own core object is less than that of another core object, then we vibrate all the points of the cluster to which the data object belongs toward that core.
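A rough sketch of one vibration pass follows, under two stated assumptions: the influence function is Euclidean distance (as in Definition 6), and the update simply moves a point a fixed fraction eta toward the winning core, omitting the paper's time-decaying sigma(t) factor. All names and data are illustrative:

```python
def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def density(cluster, x):
    # Definition 7 with Euclidean-distance influence.
    return sum(dist(x, p) for p in cluster)

def vibrate_once(clusters, eta=0.1):
    """One vibration pass: any point influenced more by another
    cluster's core than by its own (i.e., closer to it) is pulled a
    fraction eta toward that winning core."""
    # Definition 8: the core minimizes the density function in its cluster.
    cores = [min(c, key=lambda x: density(c, x)) for c in clusters]
    new_clusters = []
    for ci, cluster in enumerate(clusters):
        moved = []
        for x in cluster:
            # Winning core = maximum influence = minimum distance.
            w = min(range(len(cores)), key=lambda k: dist(x, cores[k]))
            if w != ci:
                x = (x[0] + eta * (cores[w][0] - x[0]),
                     x[1] + eta * (cores[w][1] - x[1]))
            moved.append(x)
        new_clusters.append(moved)
    return new_clusters
```

A point initially assigned to the wrong cluster drifts toward the correct core over repeated passes, while well-placed points stay put.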
5. Simulation and Results
We evaluated our proposed algorithm on several artificial and real data sets.
5.1. Artificial Data Sets
We use three artificial two-dimensional data sets, since the results are easily visualized. The first data set, shown in Figure 1, consists of 226 data points forming one cluster.
Figure 1(a) shows the plot of the original data set. In Figure 1(b), after applying the DBSCAN algorithm, with , , we get 2 clusters. In Figure 1(c), after applying our proposed algorithm with , we get the correct number of clusters; that is, we have only 1 cluster. We also note that the points deleted by DBSCAN, which it considered noise points, reappear after applying our proposed algorithm.
Figure 2(a) shows the plot of the original data set. Figure 2(b) shows the result of applying DBSCAN on the second data set, with , : the result is 3 clusters. However, if we apply our proposed algorithm (Figure 2(c)) with , we get the correct number of clusters, which is 2.
Figure 3(a) shows the plot of the original data set. In Figure 3(b), after applying the DBSCAN algorithm, with , , we get 4 clusters. In this data set, DBSCAN treats some points as noise and removes them. In Figure 3(c), after applying our proposed algorithm with , we get the correct number of clusters; that is, we have 5 clusters.
5.2. Real Data Sets
We use the iris data set from the UCI repository (http://archive.ics.uci.edu/ml/datasets/Iris), which contains three clusters, 150 data points with 4 dimensions. To measure the accuracy of our proposed algorithm, we use an average error index, in which we count the misclassified samples and divide by the total number of samples. We apply the DBSCAN algorithm with and , and obtain an average error index of 45.33%, while, when applying the VMDBSCAN algorithm with , we obtain an average error index of 20.00%.
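The average error index used here (misclassified samples divided by total samples) can be computed by matching each discovered cluster to its majority true class. This is an illustrative implementation of that idea, not necessarily the authors' exact procedure; how noise points are counted is an assumption stated in the comment:

```python
from collections import Counter

def average_error_index(true_labels, cluster_labels):
    """Fraction of samples not matching the majority true class of
    their cluster. Noise points (cluster label -1) are counted as
    misclassified (an assumption)."""
    clusters = set(cluster_labels) - {-1}
    correct = 0
    for c in clusters:
        members = [t for t, k in zip(true_labels, cluster_labels) if k == c]
        correct += Counter(members).most_common(1)[0][1]
    return 1 - correct / len(true_labels)
```

For example, if two of six samples end up mismatched (one in the wrong cluster and one labeled noise), the error index is 1/3.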
We apply another data set, the Haberman data set from UCI (http://archive.ics.uci.edu/ml/datasets/Haberman's+Survival), to show the efficiency of our proposed algorithm. The Haberman data set contains two clusters, 306 data points with 3 dimensions. The obtained results are shown in Table 1. We get an average error index of 33.33% when we apply the DBSCAN algorithm with and , while, when applying the VMDBSCAN algorithm with , we obtain an average error index of 27.78%.
We apply one more data set, the Glass data set from UCI (http://archive.ics.uci.edu/ml/datasets/Glass+Identification). The Glass data set contains six clusters, 214 data points with 9 dimensions. The obtained results are shown in Table 1. We get an average error index of 66.82% when we apply the DBSCAN algorithm with and , while, when applying the VMDBSCAN algorithm with , we obtain an average error index of 62.15%. We note that for this data set the error rate of both DBSCAN and VMDBSCAN is large. This is due to the fact that as the number of dimensions increases, the clustering algorithms fail to find the correct number of clusters.
6. Conclusions and Future Works
We have proposed an enhancement algorithm based on DBSCAN to cope with the problems of one of the most used clustering algorithms. Our proposed algorithm, VMDBSCAN, gives far more stable estimates of the number of clusters than the existing DBSCAN over many different types of data of different shapes and sizes. Future work will focus on determining the best value of the parameter and on improving the results for high-dimensional data sets.
1. A. K. Jain and R. C. Dubes, Algorithm for Clustering Data, Prentice Hall, Englewood Cliffs, NJ, USA, 1998.
2. B. BahmaniFirouzi, T. Niknam, and M. Nayeripour, “A new evolutionary algorithm for cluster analysis,” in Proceedings of the World Academy of Science, Engineering and Technology, vol. 36, December
3. M. E. Celebi, “Effective initialization of k-means for color quantization,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '09), pp. 1649–1652, Cairo, Egypt, November 2009.
4. M. B. Al-Zoubi, A. Hudaib, A. Huneiti, and B. Hammo, “New efficient strategy to accelerate k-means clustering algorithm,” American Journal of Applied Sciences, vol. 5, no. 9, pp. 1247–1250, 2008.
5. M. Borodovsky and J. McIninch, “Recognition of genes in DNA sequence with ambiguities,” BioSystems, vol. 30, no. 1–3, pp. 161–171, 1993.
6. J. Bezdek and N. Pal, Fuzzy models for pattern recognition, IEEE press, New York, NY, USA, 1992.
7. L. Kaufman and P. Rousseeuw, Finding Groups in Data: An Introduction to Cluster Analysis, John Wiley & Sons, New York, NY, USA, 1990.
8. L. Kaufman and P. J. Rousseeuw, Clustering by Means of Medoids. Statistical Data Analysis Based on the L1 Norm, Elsevier, 1987.
9. G. Gan, Ch. Ma, and J. Wu, Data Clustering: Theory, Algorithms, and Applications, ASA-SIAM Series on Statistics and Applied Probability, Society for Industrial and Applied Mathematics, 2007.
10. D. Defays, “An efficient algorithm for a complete link method,” The Computer Journal, vol. 20, pp. 364–366, 1977.
11. R. Sibson, “SLINK: an optimally efficient algorithm for the single link cluster method,” The Computer Journal, vol. 16, no. 1, pp. 30–34, 1973.
12. G. Karypis, E. H. Han, and V. Kumar, “Chameleon: hierarchical clustering using dynamic modeling,” Computer, vol. 32, no. 8, pp. 68–75, 1999.
13. T. Zhang, R. Ramakrishnan, and M. Livny, “BIRCH: an efficient data clustering method for very large databases,” ACM Special Interest Group on Management of Data, vol. 25, no. 2, pp. 103–114, 1996.
14. S. Guha, R. Rastogi, and K. Shim, “Cure: an efficient clustering algorithm for large databases,” in Proceedings of the ACM International Conference on Management of Data (SIGMOD '98), L. M. Haas and A. Tiwary, Eds., pp. 73–84, ACM Press, Seattle, Wash, USA, June 1998.
15. M. Ester, H.-P. Kriegel, J. Sander, and X. Xu, “A density-based algorithm for discovering clusters in large spatial databases with noise,” in Proceedings of the 2nd International Conference on
Knowledge Discovery and Data Mining (KDD '96), pp. 226–231, Portland, Ore, USA, 1996.
16. M. Ankerst, M. M. Breunig, H. P. Kriegel, and J. Sander, “OPTICS: ordering points to identify the clustering structure,” ACM Special Interest Group on Management of Data, vol. 28, no. 2, pp.
49–60, 1999.
17. W. Wang, J. Yang, and R. Muntz, “Sting: A statistical information grid approach to spatial data mining,” in Proceedings of the 23rd International Conference on Very Large Data Bases, 1997.
18. R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan, “Automatic subspace clustering of high dimensional data for data mining applications,” ACM Special Interest Group on Management of Data, vol.
27, no. 2, pp. 94–105, 1998.
19. G. Sheikholeslami, S. Chatterjee, and A. Zhang, “WaveCluster: a multi-resolution clustering approach for very large spatial databases,” in Proceedings of the 24th International Conference on Very
Large Data Bases (VLDB '98), pp. 428–439, 1998.
20. R. M. Neal and G. E. Hinton, “A new view of the EM algorithm that justifies incremental, sparse and other variants,” in Learning in Graphical Models, M. I. Jordan, Ed., pp. 355–368, Kluwer Academic, Boston, Mass, USA, 1998.
21. J. C. Bezdek, R. Ehrlich, and W. Full, “FCM: the fuzzy c-means clustering algorithm,” Computers and Geosciences, vol. 10, no. 2-3, pp. 191–203, 1984.
22. T. Pei, A. X. Zhu, C. Zhou, B. Li, and C. Qin, “A new approach to the nearest-neighbour method to discover cluster features in overlaid spatial point processes,” International Journal of
Geographical Information Science, vol. 20, no. 2, pp. 153–168, 2006.
23. S. Roy and D. K. Bhattacharyya, “An approach to find embedded clusters using density based techniques,” Lecture Notes in Computer Science, vol. 3816, pp. 523–535, 2005.
24. C. Y. Lin, C. C. Chang, and C. C. Lin, “A new density-based scheme for clustering based on genetic algorithm,” Fundamenta Informaticae, vol. 68, no. 4, pp. 315–331, 2005.
25. D. Pascual, F. Pla, and J. S. Sanchez, “Non-parametric local density-based clustering for multimodal overlapping distributions,” in Proceedings of the Intelligent Data Engineering and Automated Learning (IDEAL '06), pp. 671–678, Burgos, Spain, 2006.
26. A. Ram, A. Sharma, A. S. Jalal, R. Singh, and A. Agrawal, “An enhanced density based spatial clustering of applications with noise,” in Proceedings of the International Advance Computing
Conference (IACC '09), pp. 1475–1478, March 2009.
27. B. Borah and D. K. Bhattacharyya, “A clustering technique using density difference,” in Proceedings of the International Conference on Signal Processing, Communications and Networking, pp. 585–588, 2007.
28. B. Borah and D. K. Bhattacharyya, “DDSC: a density differentiated spatial clustering technique,” Journal of Computers, vol. 3, no. 2, pp. 72–79, 2008.
29. L. Peng, Z. Dong, and W. Naijun, “VDBSCAN: varied density based spatial clustering of applications with noise,” in Proceedings of the International Conference on Service Systems and Service
Management (ICSSSM '07), pp. 528–531, Chengdu, China, June 2007.
30. D. Hsu and S. Johnson, “A vibrating method based cluster reducing strategy,” in Proceedings of the 5th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '08), pp. 376–379,
Shandong, China, October 2008.
31. J. H. Peter and A. Antonysamy, “Heterogeneous density based spatial clustering of application with noise,” International Journal of Computer Science and Network Security, vol. 10, no. 8, pp.
210–214, 2010.
Slides from "Tapping the Data Deluge with R" lightning talk #rstats #PAWCon
Here is my presentation from last night’s Boston Predictive Analytics Meetup graciously hosted by Predictive Analytics World Boston.
The talk is meant to provide an overview of (some) of the different ways to get data into R, especially supplementary data sets to assist with your analysis.
All code and data files are available at github: http://bit.ly/pawdata (https://github.com/jeffreybreen/talk-201210-data-deluge)
The slides themselves are on slideshare: http://bit.ly/pawdatadeck (http://www.slideshare.net/jeffreybreen/tapping-the-data-deluge-with-r)
October 4, 2012 at 9:38 AM
Many thanks for the terrific presentation.
In the file ’10-WDI.R’ (handouts on the Github) the indices at the
colnames(data)[3:6] = c(‘fertility’, ‘life expectancy’, ‘population’, ‘per capita GDP’)
should probably be [4:7] …
Again, many thanks for sharing the knowledge.
[FOM] iterative conception/cumulative hierarchy
Christopher Menzel cmenzel at tamu.edu
Fri Feb 24 16:40:36 EST 2012
On Feb 24, 2012, at 9:36 PM, <kremer at uchicago.edu> wrote:
> Here's an old paper by Jim van Aken (RIP) which explains the axioms of ZFC in terms of the idea of one entity presupposing others for its existence (so doing away with the notion of "forming sets" from the get-go).
> http://www.jstor.org/stable/2273911
> Michael Kremer
Yes, good call, Michael, this is a really nice paper. Along the same "stage theoretic" lines are of course the classic papers by Boolos* and Scott** that Van Aken references as well as the excellent 2004 OUP book Set Theory and Its Philosophy by Michael Potter.
Chris Menzel
*"The Iterative Conception of Set", Journal of Philosophy 68 (1971), 215-231
**"Axiomatizing Set Theory", in T. Jech (ed) Axiomatic Set Theory II, Proc. of Symp. of Pure Math 13, AMS, 207-214.
> ---- Original message ----
>> Date: Thu, 23 Feb 2012 08:13:32 -0600 (CST)
>> From: fom-bounces at cs.nyu.edu (on behalf of Nik Weaver <nweaver at math.wustl.edu>)
>> Subject: [FOM] iterative conception/cumulative hierarchy
>> To: fom at cs.nyu.edu
>> Chris Menzel wrote:
>>> The metaphor of "forming" sets in successive stages that is often
>>> invoked in informal expositions of the cumulative hierarchy is just
>>> that, a metaphor; some people find it helpful in priming the necessary
>>> intuitions for approaching the actual mathematics. But in ZF proper, the
>>> metaphor is gone; there are indeed "stages", or "levels", but these are
>>> fixed mathematical objects of the form V_a = U{P(V_b) | b < a}. The
>>> cumulative hierarchy is indeed "there all at once", just as you desire.
>> As I understand it, the *iterative conception* is the idea that sets
>> are formed in stages, and the *cumulative hierarchy* is the structure
>> this imposes on the set theoretic universe. The iterative conception
>> is universally explained in terms of "forming" sets in "stages" (often
>> with the scare quotes included). Once the explanation is complete this
>> language is then, universally, retracted.
>> Is "Sets are formed in stages --- but not really" not a fair summary
>> of the iterative conception?
>> Without invoking the "metaphor" of formation in stages, what is the
>> explanation of why we should understand the universe of sets to be
>> layered in a cumulative hierarchy?
More information about the FOM mailing list
Discovery of Temporal Patterns – Learning Rules about the Qualitative Behaviour of Time Series
Results 1 - 10 of 34
, 2003
"... Several important time series data mining problems reduce to the core task of finding approximately repeated subsequences in a longer time series. In an earlier work, we formalized the idea of
approximately repeated subsequences by introducing the notion of time series motifs. Two limitations of thi ..."
Cited by 119 (21 self)
Several important time series data mining problems reduce to the core task of finding approximately repeated subsequences in a longer time series. In an earlier work, we formalized the idea of
approximately repeated subsequences by introducing the notion of time series motifs. Two limitations of this work were the poor scalability of the motif discovery algorithm, and the inability to
discover motifs in the presence of noise. Here we address these limitations by introducing a novel algorithm inspired by recent advances in the problem of pattern discovery in biosequences. Our
algorithm is probabilistic in nature, but as we show empirically and theoretically, it can find time series motifs with very high probability even in the presence of noise or “don’t care ” symbols.
Not only is the algorithm fast, but it is an anytime algorithm, producing likely candidate motifs almost immediately, and gradually improving the quality of results over time.
, 2002
"... The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem.
However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously ..."
Cited by 72 (15 self)
The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem.
However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns "motifs," because of their
close analogy to their discrete counterparts in computation biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time
series databases. In addition, it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification. In this work we
carefully motivate, then introduce, a non-trivial definition of time series motifs. We propose an efficient algorithm to discover them, and we demonstrate the utility and efficiency of our approach
on several real world datasets.
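For a concrete point of reference, the brute-force baseline that such an algorithm improves on can be sketched directly; this is the naive O(n^2) pair search under z-normalized Euclidean distance (a common convention assumed here), and not the paper's algorithm:

```python
import math

def znorm(s):
    # Z-normalize a subsequence so motifs are matched on shape, not scale.
    m = sum(s) / len(s)
    sd = math.sqrt(sum((x - m) ** 2 for x in s) / len(s)) or 1.0
    return [(x - m) / sd for x in s]

def naive_motif(ts, w):
    # Return (distance, i, j): the closest pair of non-overlapping
    # length-w subsequences, comparing all O(n^2) pairs.
    subs = [znorm(ts[i:i + w]) for i in range(len(ts) - w + 1)]
    best = (float("inf"), -1, -1)
    for i in range(len(subs)):
        for j in range(i + w, len(subs)):  # i+w skips trivial overlapping matches
            d = math.dist(subs[i], subs[j])
            if d < best[0]:
                best = (d, i, j)
    return best

d, i, j = naive_motif([1, 3, 1, 7, 2, 8, 1, 3, 1, 5, 0], 3)
print(i, j)  # 0 6 -- the two occurrences of the 1,3,1 bump
```

The quadratic pair comparison is what makes the naive approach unusable on massive databases, which is the scalability problem the cited work addresses.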
- In Proceedings of IEEE International Conference on Data Mining (ICDM’02 , 2002
"... The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem.
However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously ..."
Cited by 30 (0 self)
Add to MetaCart
The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem.
However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns "motifs", because of their
close analogy to their discrete counterparts in computational biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time
series databases. In addition, it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification.
- JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH , 2002
"... We develop, analyze, and evaluate a novel, supervised, specific-to-general learner for a simple temporal logic and use the resulting algorithm to learn visual event definitions from video
sequences. First, we introduce a simple, propositional, temporal, event-description language called AMA that ..."
Cited by 30 (3 self)
Add to MetaCart
We develop, analyze, and evaluate a novel, supervised, specific-to-general learner for a simple temporal logic and use the resulting algorithm to learn visual event definitions from video sequences.
First, we introduce a simple, propositional, temporal, event-description language called AMA that is sufficiently expressive to represent many events yet sufficiently restrictive to support learning.
We then give algorithms, along with lower and upper complexity bounds, for the subsumption and generalization problems for AMA formulas. We present a positive-examples-only specific-to-general
learning method based on these algorithms. We also present a polynomial-time-computable "syntactic" subsumption test that implies semantic subsumption without being equivalent to it. A
generalization algorithm based on syntactic subsumption can be used in place of semantic generalization to improve the asymptotic complexity of the resulting learning algorithm. Finally
- Intelligent Data Analysis , 2001
"... Observing a binary feature over a period of time yields a sequence of observation intervals. To ease the access to continuous features (like time series), they are often broken down into
attributed intervals, such that the attribute describes the series' behaviour within the segment (e.g. increasing ..."
Cited by 19 (2 self)
Add to MetaCart
Observing a binary feature over a period of time yields a sequence of observation intervals. To ease the access to continuous features (like time series), they are often broken down into attributed
intervals, such that the attribute describes the series' behaviour within the segment (e.g. increasing, high-value, highly convex, etc.). In both cases, we obtain a sequence of interval data, in
which temporal patterns and rules can be identified. A temporal pattern is defined as a set of labeled intervals together with their interval relationships described in terms of Allen's interval
logic. In this paper, we consider the evaluation of such rules in order to find the most informative rules. We discuss rule semantics and outline deficiencies of the previously used rule evaluation.
We apply the J-measure to rules with a modified semantics in order to better cope with different lengths of the temporal patterns. We also consider the problem of specializing temporal rules by
additional attributes of the state intervals.
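For reference, the J-measure mentioned above has a simple closed form (Smyth and Goodman's rule-information measure); a minimal sketch with made-up probabilities, not the paper's modified semantics:

```python
import math

def j_measure(p_x, p_y, p_y_given_x):
    # J(Y; X=x) = p(x) * [ p(y|x) log2(p(y|x)/p(y))
    #                      + (1-p(y|x)) log2((1-p(y|x))/(1-p(y))) ]
    # i.e. how often the rule x -> y fires, times the information it carries.
    def term(p, q):
        return p * math.log2(p / q) if p > 0 else 0.0
    return p_x * (term(p_y_given_x, p_y) + term(1 - p_y_given_x, 1 - p_y))

# A rule firing 20% of the time that lifts p(y) from 0.5 to 0.9:
print(round(j_measure(0.2, 0.5, 0.9), 4))  # 0.1062
```

The paper's contribution is adjusting the semantics fed into this measure so that temporal patterns of different lengths are compared fairly; the formula itself is standard.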
, 2003
"... A new framework for analyzing sequential or temporal data such as time series is proposed. It differs from other approaches by the special emphasis on the interpretability of the results, since
interpretability is of vital importance for knowledge discovery, that is, the development of new knowl ..."
Cited by 17 (0 self)
Add to MetaCart
A new framework for analyzing sequential or temporal data such as time series is proposed. It differs from other approaches by the special emphasis on the interpretability of the results, since
interpretability is of vital importance for knowledge discovery, that is, the development of new knowledge (in the head of a human) from a list of discovered patterns. While traditional approaches
try to model and predict all time series observations, the focus in this work is on modelling local dependencies in multivariate time series. This
"... Agents need to know the effects of their actions. Strong associations between actions and effects can be found by counting how often they co-occur. We present an algorithm that learns temporal
patterns expressed as fluents, propositions with temporal extent. The fluent-learning algorithm is hierarch ..."
Cited by 16 (5 self)
Add to MetaCart
Agents need to know the effects of their actions. Strong associations between actions and effects can be found by counting how often they co-occur. We present an algorithm that learns temporal
patterns expressed as fluents, propositions with temporal extent. The fluent-learning algorithm is hierarchical and unsupervised. It works by maintaining co-occurrence statistics on pairs of fluents.
In experiments on a mobile robot, the fluent-learning algorithm found temporal associations that correspond to effects of the robot's actions.
- In: Proc. IEEE Int. Conf. on Data Engineering (ICDE05 , 2005
"... Efficiently and accurately searching for similarities among time series and discovering interesting patterns is an important and non-trivial problem. In this paper, we introduce a new
representation of time series, the Multiresolution Vector Quantized (MVQ) approximation, along with a new distance f ..."
Cited by 14 (4 self)
Add to MetaCart
Efficiently and accurately searching for similarities among time series and discovering interesting patterns is an important and non-trivial problem. In this paper, we introduce a new representation
of time series, the Multiresolution Vector Quantized (MVQ) approximation, along with a new distance function. The novelty of MVQ is that it keeps both local and global information about the original
time series in a hierarchical mechanism, processing the original time series at multiple resolutions. Moreover, the proposed representation is symbolic employing key subsequences and potentially
allows the application of text-based retrieval techniques into the similarity analysis of time series. The proposed method is fast and scales linearly with the size of
- International Journal of Knowledge-Based & Intelligent Engineering Systems , 2005
"... The understanding of complex muscle coordination is an important goal in human movement science. There are numerous applications in medicine, sports, and robotics. The coordination process can
be studied by observing complex, often cyclic movements, which are dynamically re-peated in an almost ident ..."
Cited by 10 (5 self)
Add to MetaCart
The understanding of complex muscle coordination is an important goal in human movement science. There are numerous applications in medicine, sports, and robotics. The coordination process can be
studied by observing complex, often cyclic movements, which are dynamically repeated in an almost identical manner. The muscle activation is measured using kinesiological EMG. Mining the EMG data to
identify patterns, which explain the interplay and coordination of muscles is a very difficult Knowledge Discovery task. We present the Time Series Knowledge Mining framework to discover knowledge
in multivariate time series and show how it can be used to extract such temporal patterns.
- Proceedings of the 4th SIAM International Conference on Data Mining (SDM’04). SIAM , 2004
"... The detection of recurrent episodes in long strings of tokens has attracted some interest and a variety of useful methods have been developed. The temporal relationship between discovered
episodes may also provide useful knowledge of the phenomenon but as yet has received little investigation. This ..."
Cited by 9 (2 self)
Add to MetaCart
The detection of recurrent episodes in long strings of tokens has attracted some interest and a variety of useful methods have been developed. The temporal relationship between discovered episodes
may also provide useful knowledge of the phenomenon but as yet has received little investigation. This paper discusses an approach for finding such relationships through the proposal of a robust and
efficient search strategy and effective user interface, both of which are validated through experiment. Keywords: Temporal Sequence Mining.
1 Introduction and Related Work
While the mining of frequent episodes is an important capability, the manner in which such episodes interact can provide further useful knowledge in the search for a description of the behaviour of a phenomenon.
The Shape of Code
If I am reading through the body of a function, what is the probability of a particular variable being the next one I encounter? A good approximation can be calculated as follows: Count the number of
occurrences of all variables in the function definition up to the current point and work out the percentage occurrence for each of them, the probability of a particular variable being seen next is
approximately equal to its previously seen percentage. The following graph is the evidence I give for this approximation.
The graph shows a count of the number of C function definitions containing identifiers that are referenced a given number of times, e.g., if the identifier x is referenced five times in one function
definition and ten times in another the function definition counts for five and ten are both incremented by one. That one axis is logarithmic and the bullets and crosses form almost straight lines
hints that a Zipf-like distribution is involved.
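The approximation itself is easy to operationalize; a crude sketch (regex tokenizing stands in for a real C parser, and the keyword list is abbreviated):

```python
import re
from collections import Counter

C_KEYWORDS = {"if", "else", "for", "while", "do", "return", "switch",
              "case", "break", "continue", "int", "char", "long", "short",
              "float", "double", "void", "struct", "unsigned", "sizeof"}

def next_var_probabilities(body_so_far):
    # Count identifier occurrences up to the current point; the predicted
    # probability of each variable appearing next is its occurrence share.
    idents = [t for t in re.findall(r"[A-Za-z_]\w*", body_so_far)
              if t not in C_KEYWORDS]
    counts = Counter(idents)
    total = sum(counts.values())
    return {v: n / total for v, n in counts.items()}

print(next_var_probabilities("x = x + 1; y = x * 2;"))
# {'x': 0.75, 'y': 0.25} -- x is predicted next three times out of four
```

A serious measurement would need a symbol table to separate local variables from globals, function names and macro expansions, as discussed later.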
There are many processes that will generate a Zipf distribution, but the one that interests me here is the process where the probability of the next occurrence of an event occurring is proportional
to the probability of it having previously occurred (this includes some probability of a new event occurring; follow the link to Simon’s 1955 paper).
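That generating process is easy to simulate; a sketch (the 10% new-variable probability is an arbitrary illustrative choice, not a value fitted to the data):

```python
import random

def simulate_references(num_refs, p_new=0.1, seed=1):
    # With probability p_new introduce a new variable; otherwise pick an
    # existing one in proportion to its previous occurrences (sampling
    # uniformly from the list of past references gives exactly that weighting).
    rng = random.Random(seed)
    past, counts, next_id = [], {}, 0
    for _ in range(num_refs):
        if not past or rng.random() < p_new:
            var, next_id = next_id, next_id + 1
        else:
            var = rng.choice(past)
        past.append(var)
        counts[var] = counts.get(var, 0) + 1
    return counts

counts = simulate_references(10_000)
ranked = sorted(counts.values(), reverse=True)
# ranked falls away in the heavy-tailed, Zipf-like manner seen in the graph.
```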
One can think of the value (i.e., information) held in a variable as having a given importance and it is to be expected that more important information is more likely to be operated on than less
important information. This model appeals to me. Another process that will generate this distribution is that of monkeys typing away on keyboards, and while I think source code contains lots of random
elements I don’t think it is that random.
The important concept here is operated on. In x := x + 1; variable x is incremented and the language used requires (or allows) that the identifier x occur twice. In C this operation would only
require one occurrence of x when expressed using the common idiom x++;. The number of occurrences of a variable needed to perform an operation on it, in a given language, will influence the shape of
the graph based on an occurrence count.
One graph does not provide conclusive evidence, but other measurements also produce straightish lines. The fact that the first few entries do not form part of an upward trend is not a problem, these
variables are only accessed a few times and so might be expected to have a large deviation.
More sophisticated measurements are needed to count operations on a variable, as opposed to occurrences of it. For instance, few languages (any?) contain an indirection assignment operator (e.g.,
writing x ->= next; instead of x = x -> next;) and this would need to be adjusted for in a more sophisticated counting algorithm. It will also be necessary to separate out the effects of global
variables, function calls and the multiple components involved in a member selection, etc.
Update: A more detailed analysis is now available.
Readability, an experimental view
January 20th, 2009
Readability is an attribute that source code is often claimed to have, but what is it? While people are happy to use the term they have great difficulty in defining exactly what it is (I will
eventually get around to discussing my own views in a later post). Ray Buse took a very simple approach to answering this question: he asked lots of people (to be exact, 120 students) to rate short snippets
of code and analysed the results. Like all good researchers he made his data available to others. This posting discusses my thoughts on the expected results and some analysis of the results.
The subjects were first, second, third year undergraduates and postgraduates. I would not expect first year students to know anything and for their results to be essentially random. Over the years,
as they gain more experience, I would expect individual views on what constitutes readability to stabilize. The input from friends, teachers, books and web pages might be expected to create some
degree of agreement between different students’ views of what constitutes readability. I’m not saying that this common view is correct or bears any relationship to views held by other groups of
people, only that there might be some degree of convergence within a group of people studying together.
Readability is not something that students can be expected to have explicitly studied (I’m assuming that it plays an insignificant part in any course marks), so their knowledge of it is implicit.
Some students will enjoy writing code and spend lots of time doing it while (many) others will not.
Separating out the data by year the results for first year students look like a normal distribution with a slight bulge on one side (created using plot(density(1_to_5_rating_data)) in R).
year by year this bulge turns (second year):
into a hillock (final year):
It is tempting to interpret these results as the majority of students assigning an essentially random rating, with a slight positive bias, for the readability of each snippet, with a growing number
of more experienced students assigning less than average rating to some snippets.
Do the student’s view converge to a common opinion on readability? The answers appears to be No. An analysis of the final year data using Fleiss’s Kappa shows that there is virtually no agreement
between students ratings. In fact every Interrater Reliability and Agreement function I tried said the same thing. Some cluster analysis might enable me to locate students holding similar views.
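For readers wanting to replicate the agreement analysis, the statistic is straightforward to compute from a table of rating counts; a minimal implementation of the standard Fleiss formula (not the R functions used for the post):

```python
def fleiss_kappa(ratings):
    # ratings[i][j] = number of raters assigning category j to snippet i;
    # every snippet must be rated by the same number of raters.
    N, k = len(ratings), len(ratings[0])
    n = sum(ratings[0])
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Three raters, two categories: perfect agreement gives kappa = 1,
# an even split of opinions drives kappa below zero.
print(fleiss_kappa([[3, 0], [0, 3]]))  # 1.0
print(fleiss_kappa([[2, 1], [1, 2]]))  # about -0.333
```

Values near zero mean the raters agree no more often than chance, which is what the student data showed.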
In an email exchange with Ray Buse I learned that the postgraduate students had a relatively wide range of computing expertise, so I did not analyse their results.
I wish I had thought of this approach to measuring readability. Its simplicity makes it amenable for use in a wide range of experimental situations. The one change I would make is that I would
explicitly create the snippets to have certain properties, rather than randomly extracting them from existing source.
cluster analysis, experimental, random, readability, students
Unexpected experimental effects
January 16th, 2009
The only way to find out the factors that affect developers’ source code performance is to carry out experiments where they are the subjects. Developer performance on even simple programming tasks
can be affected by a large number of different factors. People are always surprised at the very small number of basic operations I ask developers to perform in the experiments I run. My reply is
that only by minimizing the number of factors that might affect performance can I have any degree of certainty that the results for the factors I am interested in are reliable.
Even with what appear to be trivial tasks I am constantly surprised by the factors that need to be controlled. A good example is one of the first experiments I ever ran. I thought it would be a
good idea to replicate, using a software development context, a widely studied and reliably replicated human psychological effect; when asked to learn and later recall/recognize a list of words
people make mistakes. Psychologists study this problem because it provides a window into the operation structure of the human memory subsystem over short periods of time (of the order of at most
tens of seconds). I wanted to find out what sort of mistakes developers would make when asked to remember information about a sequence of simple assignment statements (e.g., qbt = 6;).
I carefully read the appropriate experimental papers and had created lists of variables that controlled for every significant factor (e.g., number of syllables, frequency of occurrence of the words
in current English usage {performance is better for very common words}) and the list of assignment statements was sufficiently long that it would just overload the capacity of short term memory (about 2 seconds' worth of sound).
The results contained none of the expected performance effects, so I ran the experiment again looking for different effects; nothing. A chance comment by one of the subjects after taking part in the
experiment offered one reason why the expected performance effects had not been seen. By their nature developers are problem solvers and I had set them a problem that asked them to remember
information involving a list of assignment statements that appeared to be beyond their short term memory capacity. Problem solvers naturally look for patterns and common cases and the variables in
each of my carefully created list of assignment statements could all be distinguished by their first letter. Subjects did not need to remember the complete variable name, they just needed to
remember the first letter (something I had not controlled for). Asking around I found that several other subjects had spotted and used the same strategy. My simple experiment was not simple enough!
I was recently reading about an experiment that investigated the factors that motivate developers to comment code. Subjects were given some code and asked to add additional functionality to it. Some
subjects were given code containing lots of comments while others were given code containing few comments. The hypothesis was that developers were more likely to create comments in code that already
contained lots of comments, and the results seemed to bear this out. However, closer examination of the answers showed that most subjects had cut and pasted chunks (i.e., code and comments) from the
code they were given. As a result, the amount of commenting in the submitted answers simply mimicked that in the original code (in some cases subjects had complicated the situation by refactoring the code).
The sound of code
January 15th, 2009
Speech, it is claimed, is the ability that separates humans from all other animals, yet working with code is almost exclusively based on sight. There are instances of ‘accidental’ uses of sound,
e.g., listening to disc activity to monitor a program's progress or, in days of old, the chatter of other mechanical parts.
Various projects have attempted to intentionally make use of sound to provide an interface to the software development process, including:
People like to talk about what they do and perhaps this could be used to overcome developers' dislike of writing comments. Unfortunately automated processing of natural language (assuming the
speech-to-text problem is solved) has not reached the stage where it is possible to automatically detect when the topic of conversation has changed or to figure out what piece of code is being
discussed. Perhaps the reason why developers find it so hard to write good comments is because it is a skill that requires training and effort, not random thoughts that happen to come to mind.
Rather than relying on the side-effects of mechanical vibration it has been proposed that programs intentionally produce audio output that helps developers monitor their progress. Your author's
experience with interpreting mechanically generated sound is that it requires a great deal of understanding of a program’s behavior and that it is a very low bandwidth information channel.
Writing code by talking (i.e., voice input of source code) initially sounds attractive. As a form of input, speech is faster than typing; however, computer processing of speech is still painfully
slow. Another problem that needs to be handled is the large number of different ways in which the same thing can be, and is, spoken, e.g., numeric values. As a method of output, reading is 70% faster
than listening.
Unless developers have to spend lots of time commuting in person, rather than telecommuting, I don't see a future for speech input of code. Audio program execution monitoring probably has a market in
specialist niches, no more.
I do see a future for spoken mathematics, which is something that people who are not mathematicians might want to do. The necessary formatting commands are sufficiently obtuse that they require too
much effort from the casual user.
comments, mathematics, natural language, sound
Incorrect spelling
January 11th, 2009
While even a mediocre identifier name can provide useful information to a reader of the source, a poorly chosen name can create confusion and require extra effort to remember. An author’s good intent
can be spoiled by spelling mistakes, which are likely to be common if the developer is not a native speaker of English (or whatever natural language is applicable).
Identifiers have characteristics which make them difficult targets for traditional spell checking algorithms; they often contain specialized words, dictionary words may be abbreviated in some way
(making phonetic techniques impossible) and there is unlikely to be any reliable surrounding context.
Identifiers share many of the characteristics of search engine queries, they contain a small number of words that don’t fit together into a syntactically correct sentence and any surrounding context
(e.g., previous queries or other identifiers) cannot be trusted. However, search engines have their logs of millions of previous search queries to fall back on, enabling them to suggest (often
remarkably accurate) alternatives to non-dictionary words, specialist domains and recently coined terms. Because developers don’t receive any feedback on their spelling mistakes revision control
systems are unlikely to contain any relevant information that can be mined.
One solution is for source code editors to require authors to fully specify all of the words used in an identifier when it is declared; spell checking and suitable abbreviation rules being applied at
this point. Subsequent uses of the identifier can be input using the abbreviated form. This approach could considerably improve consistency of identifier usage across a project’s source code (it
could also flag attempts to use both orderings of a word pair, e.g., number count and count number). The word abbreviation mapping could be stored (perhaps in a comment at the end of the source) for
use by other tools and personalized developer preferences stored in a local configuration file. It is time for source code editors to start taking a more active role in helping developers write
readable code.
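The first step of such a scheme, splitting a declared identifier into candidate words before checking them, is simple enough to sketch (the tiny dictionary is obviously a stand-in for a real word list):

```python
import re

DICTIONARY = {"count", "number", "total", "index", "buffer", "length"}

def split_identifier(name):
    # Split snake_case and camelCase: "numCount" -> ["num", "count"].
    parts = []
    for chunk in name.split("_"):
        parts.extend(re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", chunk))
    return [p.lower() for p in parts]

def suspect_words(name):
    # Anything not in the word list is an abbreviation or a misspelling;
    # telling the two apart is the hard part discussed above.
    return [w for w in split_identifier(name) if w not in DICTIONARY]

print(suspect_words("numCount"))      # ['num']    -- abbreviation?
print(suspect_words("total_lenght"))  # ['lenght'] -- misspelling
```

An editor enforcing fully specified words at declaration time would resolve the abbreviation-versus-misspelling ambiguity at the only point where the author's intent is known.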
editing source, IDE, identifier, search engine, sounds-like, spelling
Semantic pattern matching (Coccinelle)
January 8th, 2009
I have just discovered Coccinelle, a tool that claims to fill a remarkably narrow niche (providing semantic patch functionality; I have no idea how the name is pronounced) but appears to have a lot of
other uses. The functionality required of a semantic patch is the ability to write source code patterns and a set of transformation rules that convert the input source into the desired output. What
is so interesting about Coccinelle is its pattern matching ability and the ability to output what appears to be unpreprocessed source (it has to be told the usual compile time stuff about include
directory paths and macros defined via the command line; it would be unfair of me to complain that it needs to build a symbol table).
Creating a pattern requires defining identifiers to have various properties (e.g., an expression in the following example) followed by various snippets of code that specify the pattern to match (in the
following <… …> represents a bracketed (in the C compound statement sense) don’t care sequence of code and the lines starting with +/- have the usual patch meaning (i.e., add/delete line)). The tool
builds an abstract syntax tree, so urb is treated as a complete expression that needs to be mapped over to the added line.
@@
expression lock, flags;
expression urb;
@@
spin_lock_irqsave(lock, flags);
<...
- usb_submit_urb(urb)
+ usb_submit_urb(urb, GFP_ATOMIC)
...>
spin_unlock_irqrestore(lock, flags);
Coccinelle comes with a bunch of predefined equivalence relations (they are called isomorphisms) so that constructs such as if (x), if (x != NULL) and if (NULL != x) are known to be equivalent, which
reduces the combinatorial explosion that often occurs when writing patterns that can handle real-world code.
It is written in OCaml (I guess there had to be some fly in the ointment) and so presumably borrows a lot from CIL, perhaps in this case a version number of 0.1.3 is not as bad as it might sound.
My main interest is in counting occurrences of various kinds of patterns in source code. A short-term hack is to map the sought-for pattern to some unique character sequence and pipe the output
through grep and wc. There does not seem to be any option to output a count of the matched patterns … yet
C, equivalence relation, pattern matching, preprocessing, semantic matching, source transformation, symbol table, tool
What I changed my mind about in 2008
January 4th, 2009
A few years ago The Edge asked people to write about what important issue(s) they had recently changed their mind about. This is an interesting question and something people ought to ask themselves
every now and again. So what did I change my mind about in 2008?
1. Formal verification of nontrivial C programs is a very long way off. A whole host of interesting projects (e.g., Caduceus, CompCert and Frama-C) going on in France has finally convinced me that
things are a lot closer than I once thought. This does not mean that I think developers/managers will be willing to use them, only that they exist.
2. Automatically extracting useful information from source code identifier names is still a long way off. Yes, I am a great believer in the significance of information contained in identifier names.
Perhaps because I have studied the issues in detail I know too much about the problems and have been put off attacking them. A number of researchers (e.g., Emily Hill, David Shepherd, Adrian Marcus,
Lin Tan and a previously blogged-about project) have simply gone ahead and managed to extract (with varying amounts of human intervention) surprising amounts of useful information from identifier names.
3. Theoretical analysis of non-trivial floating-point oriented programs is still a long way off. Daumas and Lester used the Doob-Kolmogorov inequality (I had to look it up) to deduce the probability
that the rounding error in some number of floating-point operations, within a program, will exceed some bound. They also integrated the ideas into NASA’s PVS system.
You can probably spot the pattern here, I thought something would not happen for years and somebody went off and did it (or at least made an impressive first step along the road). Perhaps 2008 was
not a good year for really major changes of mind, or perhaps an earlier in the year change of mind has so ingrained itself in my mind that I can no longer recall thinking otherwise.
The 30% of source that is ignored
January 3rd, 2009
Approximately 30% of source code is not checked for correct syntax (developers can make up any rules they like for its internal syntax), semantic accuracy or consistency; people are content to shrug
their shoulders at this state of affairs and are generally willing to let it pass. I am of course talking about comments; the 30% figure comes from my own measurements with other published
measurements falling within a similar ballpark.
Part of the problem is that comments often contain lots of natural language (i.e., human not computer language) and this is known to be very difficult to parse and is thought to be unusable without
all sorts of semantic knowledge that is not currently available in machine processable form.
People are good at spotting patterns in ambiguous human communication and deducing possible meanings from it, and this has helped to keep comment usage alive, along with the fact that the information
they provide is not usually available elsewhere and comments are right there in front of the person reading the code. Of course, management also loves them as a measurable attribute that is cheap to
produce and not easily checkable (and what difference does it make if they don’t stay in sync with the code?).
One study that did attempt to parse English sentences in comments found that 75% of sentence-style comments were in the past tense, with 55% being some kind of operational description (e.g., “This
routine reads the data.”) and 44% having the style of a definition (e.g., “General matrix”).
There is a growing collection of tools for processing natural language (well at least for English). However, given the traditionally poor punctuation used in comments, the use of variable names and
very domain-specific terminology, full-blown English parsing is likely to be very difficult. Some recent research has found that useful information can be extracted using something only a little more
linguistically sophisticated than word sense disambiguation.
The designers of the iComment system sensibly limited the analysis domain (to memory/file lock related activities), simplified the parsing requirements (to looking for limited forms of requirements
wording) and kept developers in the loop for some of the processing (e.g., listing lock related function names). The aim was to find inconsistencies between the requirements expressed in comments and
what the code actually did. Within the Linux/Mozilla/Wine/Apache sources they found 33 faults in the code and 27 in the comments, claiming a 38.8% false positive rate.
If these impressive figures can be replicated for other kinds of coding constructs then comment contents will start to leave the dark ages.
comments, English, faults, inconsistency, measure, parsing
Euless Algebra Tutor
Find an Euless Algebra Tutor
...But education isn't all work and no play. Many people go through school thinking of it as a chore, not realizing how fun and exciting it can be. My approach is simple: make it fun and make it
30 Subjects: including algebra 1, algebra 2, chemistry, reading
...Here is a highlight of my education accomplishments: I have worked with WyzAnt Tutoring from September 2011 to the present. While here, I 1) serve a diverse clientele of 111 customers, 2) earn
17 client recommendations and a 98% customer satisfaction rating, 3) conduct over 278 tutorials, 4) ...
40 Subjects: including algebra 1, writing, algebra 2, reading
...It has allowed me to fully understand where I started and where I finished, and am able to go back and review work that I have already done in order to gain a better understanding of where I am
going. I will be waiting, with much anticipation, for your email and the chance we will have to meet. Math came alive for me when I got to algebra.
17 Subjects: including algebra 1, algebra 2, chemistry, biology
...I have competed and won awards in several activities, such as; the CFA Research Challenge, business-case studies, and investment research projects. Along the way I have mastered several
software programs such as Microsoft Office and SQL. I generally love to compete and am encouraging of others who want to perform their best.
11 Subjects: including algebra 1, accounting, German, Microsoft Word
...Currently, I am an instructor at a world-renowned institute in TX. I have tutoring experience with students from middle school/high school/college for math and biology.I was born in and grew up
in south Korea. Before coming to the US for graduate school, I was always educated in Korean.
6 Subjects: including algebra 2, calculus, algebra 1, biology | {"url":"http://www.purplemath.com/euless_algebra_tutors.php","timestamp":"2014-04-18T08:57:38Z","content_type":null,"content_length":"23770","record_id":"<urn:uuid:813bfbf4-dd79-4d3e-9109-0792c7c56566>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00164-ip-10-147-4-33.ec2.internal.warc.gz"} |
6. Let W = {(a, b, c) : a + b + c = 0}, where (a, b, c) is written as a column vector. Is W a subspace of R^3?
Last edited by mr fantastic; March 1st 2011 at 05:33 PM. Reason: Re-titled.
the question must be, $W=\left \{ (a,b,c)\in \mathbb{R}^{3}\mid a+b+c=0 \right \}$ is it a subspace of $\mathbb{R}^{3}$ ? Now can you answer these questions ? is it closed under addition ? is it
closed under scalar multiplication ?
abc is not horizontal, it's vertical if it changes anything. And to be honest I have no idea how to do this problem. My teacher had this problem on the homework and we literally did not learn
anything yet. I would ask for the answer but it's not fair to ask that on here.
Let $X=(a,b,c)$ and $X'=(a',b',c')$ be elements of $W$. Then $X+X'= (a+a',b+b',c+c')$ is also an element of $W$. Indeed, $(a+a')+(b+b')+(c+c')=(a+b+c)+(a'+b'+c')=0$. This proves that $W$ is closed under
addition; try to prove that it is also closed under scalar multiplication.
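For completeness, here is one way to finish the exercise the previous post leaves open (a sketch in the same style):

Let $\lambda \in \mathbb{R}$ and $X=(a,b,c) \in W$. Then $\lambda X = (\lambda a, \lambda b, \lambda c)$ and $\lambda a + \lambda b + \lambda c = \lambda (a+b+c) = \lambda \cdot 0 = 0$, so $\lambda X \in W$. Since $W$ also contains $(0,0,0)$, it is non-empty, and $W$ is a subspace of $\mathbb{R}^{3}$.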
Muhlenberg Township, PA Math Tutor
Find a Muhlenberg Township, PA Math Tutor
...I always work hard to incorporate student interests into my lessons. I currently hold certification in the area of Elementary Education. Part of this process was to take several courses on
elementary level content.
15 Subjects: including geometry, English, prealgebra, reading
...Now I retired and teach in community. I also provide volunteer tutoring at Literacy council for math. My teaching style is student centered.
2 Subjects: including algebra 1, chemistry
...It turned out that a little one on one attention, as well as a lot of encouragement throughout the process gave unexpected results, and all of my students started growing before my eyes, and
became more confident. They had improved their grades considerably, and some of them even went on to atte...
7 Subjects: including geometry, prealgebra, precalculus, trigonometry
...Having a background of teaching in major universities and tutoring at community colleges, I have developed a direct method over the years. From my Masters work, Medical coursework, and PhD
coursework, I have realized in order to truly understand a subject, one must begin from the basic step and ...
38 Subjects: including SPSS, Microsoft Excel, anatomy, ESL/ESOL
...I love to learn and I love to help others learn. My areas of expertise are English, computer technology, piano, pre-algebra and algebra 1. I believe that everyone has the potential for growth
through a combination of hard work and enriching experiences.
17 Subjects: including prealgebra, algebra 1, algebra 2, geometry
Analyses in program Ground Loss
Analyses performed in the program "Ground Loss" can be divided into the following groups:
• analysis of the shape of subsidence trough above excavations
• analysis of building damage
The failure analysis of building is based on the shape of subsidence trough.
Analysis of subsidence trough
The analysis of subsidence trough consists of several sequential steps:
• determination of the maximum settlement and dimensions of subsidence trough for individual excavations
• back calculation of the shape and dimensions of the subsidence trough when it is evaluated at a given depth below the terrain surface
• determination of the overall shape of subsidence trough for more excavations
• post-processing of other variables (horizontal deformation, slope)
The analysis of maximum settlement and dimensions of subsidence trough can be carried out using either the theory of volume loss or the classical theories (Peck, Fazekas, Limanov).
Volume loss
The volume loss method is a semi-empirical method based partially on theoretical grounds. The method introduces, although indirectly, the basic parameters of excavation into the analysis (these
include mechanical parameters of a medium, technological effects of excavation, excavation lining etc) using 2 comprehensive parameters (coefficient k for determination of inflection point and a
percentage of volume loss VL). These parameters uniquely define the shape of subsidence trough and are determined empirically from years of experience.
Settlement expressed in terms of volume loss
The maximum settlement S[max], and location of inflection point L[inf] are provided by the following expressions:
A - excavation area
Z - depth of center point of excavation
k - coefficient to calculate inflection point (material constant)
VL - percentage of volume loss
The roof deformation u[a] follows from:
r - excavation radius
VL - percentage of volume loss
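The expressions referred to above were images in the source and did not survive extraction. In the standard volume-loss formulation they are usually written L_inf = k·Z, S_max = VL·A/(√(2π)·L_inf), and u_a = VL·r/2; a minimal sketch under that assumption, with illustrative numbers:

```python
import math

def volume_loss_trough(A, Z, k, VL):
    """Max surface settlement and inflection-point offset, using the usual
    Gaussian volume-loss relations (assumed here; the source equations
    were images): L_inf = k*Z, S_max = VL*A / (sqrt(2*pi)*L_inf).
    A  : excavation area [m^2]
    Z  : depth of excavation centre [m]
    k  : trough-width coefficient (e.g. 0.5 for normally consolidated clay)
    VL : volume loss as a fraction (e.g. 0.01 for 1 %)
    """
    L_inf = k * Z
    S_max = VL * A / (math.sqrt(2.0 * math.pi) * L_inf)
    return S_max, L_inf

def roof_deformation(r, VL):
    """Uniform radial convergence equivalent to the lost volume: u_a = VL*r/2."""
    return VL * r / 2.0

# 5 m diameter tunnel, axis 20 m deep, NC clay, 1 % volume loss
A = math.pi * 2.5**2
S_max, L_inf = volume_loss_trough(A, Z=20.0, k=0.5, VL=0.01)
u_a = roof_deformation(2.5, 0.01)
```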
Recommended values of parameters for volume loss analysis
Data needed for the determination of subsidence trough using the volume loss method:
Coefficient to calculate inflection point k

Soil or rock                   k
cohesionless soil              0.3
normally consolidated clay     0.5
overconsolidated clay          0.6 - 0.7
clay slate                     0.6 - 0.8
quartzite                      0.8 - 0.9

Percentage of volume loss VL

Technology                     VL [%]
TBM                            0.5 - 1
Sequential excavation method   0.8 - 1.5
Several relationships were also derived to determine the value of lost volume VL based on the stability ratio N defined by Broms and Bennermark:

σ[v] - total stress along the excavation axis
σ[t] - excavation lining resistance (if lining is installed)
S[n] - undrained shear strength of the clay

For N < 2 the soil/rock in the vicinity of the excavation is assumed elastic and stable. For 2 ≤ N < 4 local plastic zones begin to develop in the vicinity of the excavation, for 4 ≤ N < 6 a large
plastic zone develops around the excavation, and for N ≥ 6 the tunnel face loses stability. The figure shows the dependence of the lost volume VL on the stability ratio.
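The stability bands just described can be turned into a small classifier. The ratio is assumed here in its usual Broms–Bennermark form N = (σ_v − σ_t)/s_u, where s_u is the undrained shear strength (the symbol S[n] above); the stress values are illustrative:

```python
def stability_ratio(sigma_v, sigma_t, s_u):
    """Broms-Bennermark stability ratio N = (sigma_v - sigma_t) / s_u
    (assumed form; the source equation was an image).
    sigma_v : total stress along the excavation axis [kPa]
    sigma_t : lining / face support resistance [kPa], 0 if unsupported
    s_u     : undrained shear strength of the clay [kPa]
    """
    return (sigma_v - sigma_t) / s_u

def classify(N):
    # Bands as described in the text above.
    if N < 2.0:
        return "elastic, stable"
    if N < 4.0:
        return "local plastic zones"
    if N < 6.0:
        return "large plastic zone"
    return "loss of face stability"

N = stability_ratio(sigma_v=400.0, sigma_t=100.0, s_u=60.0)
```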
Classical theory
Convergence analysis of an excavation and calculation of the maximum settlement in a homogeneous body are the same for all classical theories. The subsidence trough analyses then differ depending on
the assumed theory (Peck, Fazekas, Limanov).
When calculating settlement the program first determines the radial loading of a circular excavation as:
σ[z] - geostatic stress in center of excavation
K[r] - coefficient of pressure at rest of cohesive soil
The roof u[a] and the bottom u[b] deformations of excavation follow from:
Z - depth of center point of excavation
r - excavation radius
E - modulus of elasticity of rock/soil in vicinity of excavation
ν - Poisson's number of rock/soil in vicinity of excavation
The maximum terrain settlement and the length of subsidence trough are determined as follows:
Z - depth of center point of excavation
r - excavation radius
E - modulus of elasticity of rock/soil in vicinity of excavation
ν - Poisson's number of rock/soil in vicinity of excavation
When the tunnel roof displacement is prescribed the maximum settlement is provided by the following expression:
Z - depth of center point of excavation
r - excavation radius
u[a] - tunnel roof displacement
ν - Poisson's number of rock/soil in vicinity of excavation
Analysis for layered subsoil
When determining the settlement of layered subsoil the program first calculates the settlement S[int] at the interface between the first layer above the excavation and the other layers of the
overburden, and determines the length of the subsidence trough along the layer interfaces. In this case the approach follows the one used for a homogeneous soil.
Next (as shown in Figure) the program determines the length of subsidence trough L at the terrain surface.
Analysis of settlement for layered subsoil
The next computation differs depending on the selected analysis theory:
Solution after Limanov
Limanov described the surface settlement above the excavation with the help of the lost area F:
L - length of subsidence trough
F - volume loss of soil per 1 m run determined from:
L[int] - length of subsidence trough along interfaces above excavation
S[int] - settlement of respective interface
Solution after Fazekas
Fazekas described the surface settlement above the excavation using the following expression:
L - length of subsidence trough
L[int] - length of subsidence trough along interfaces above excavation
S[int] - settlement of respective interface
Solution after Peck
Peck described the surface settlement above the excavation using the following expression:
L[int] - length of subsidence trough along interfaces above excavation
S[int] - settlement of respective interface
L[inf] - distance of inflection point of subsidence trough from excavation axis at terrain surface
Shape of subsidence trough
The program offers two particular shapes of subsidence troughs – according to Gauss or Aversin.
Curve based on Gauss
A number of studies carried out both in the USA and Great Britain proved that the transverse shape of the subsidence trough can be well approximated using the Gauss function. This assumption then allows
us to determine the settlement at a distance x from the vertical axis of symmetry as:
S[i] - settlement at point with coordinate x[i]
S[max] - maximum terrain settlement
L[inf] - distance of inflection point
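The Gaussian shape itself was an image in the source; it is usually written S(x) = S_max·exp(−x²/(2·L_inf²)). A sketch under that assumption, with illustrative inputs:

```python
import math

def gauss_trough(x, S_max, L_inf):
    """Transverse settlement at offset x from the excavation axis,
    assuming the standard Gaussian trough shape:
        S(x) = S_max * exp(-x**2 / (2 * L_inf**2))
    """
    return S_max * math.exp(-x**2 / (2.0 * L_inf**2))

S_max, L_inf = 0.010, 8.0                   # 10 mm max settlement, inflection at 8 m
s_axis = gauss_trough(0.0, S_max, L_inf)    # settlement on the axis
s_inf = gauss_trough(L_inf, S_max, L_inf)   # settlement at the inflection point
```

At the inflection point the settlement is S_max·e^(−1/2) ≈ 0.61·S_max, which is a quick sanity check on any implementation.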
Curve based on Aversin
Aversin derived, based on visual inspection and measurements of underground structures in Russia, the following expression for the shape of subsidence trough:
S[i] - settlement at point with coordinate x[i]
S[max] - maximum terrain settlement
L - reach of subsidence trough
Coefficient of calculation of inflection point
When the classical methods are used the inputted coefficient k[inf] allows the determination of the inflection point location based on L[inf] = L/k[inf]. In this case the coefficient k[inf] represents
a very important input parameter strongly influencing the shape and slope of the subsidence trough. Its value depends on the average soil or rock, respectively, in the overburden – the literature
offers values of k[inf] in the range 2.1 - 4.0.

Based on a series of FEM calculations the following values are recommended:

gravel soils G1-G3                                   k[inf] = 3.5
sand and gravel soils S1-S5, G4, G5, rocks R5-R6     k[inf] = 3.0
fine-grained soils F1-F4                             k[inf] = 2.5
fine-grained soils F5-F8                             k[inf] = 2.1
The coefficient for calculation of inflection point is inputted in the frame "Project".
Subsidence trough with several excavations
The principle of superposition is used when calculating the settlement caused by structured or multiple excavations. Based on input parameters the program first determines subsidence troughs and
horizontal displacements for individual excavations. The overall subsidence trough is determined subsequently.
Other variables, horizontal strain and gradient of subsidence trough, are post-processed from the overall subsidence trough.
Analysis of subsidence trough at a depth
A linear interpolation between the maximum settlement S[max] at the terrain surface and the displacement of the excavation roof u[a] is used to calculate the maximum settlement S at a depth h
below the terrain surface in a homogeneous body.
Analysis of subsidence trough at a depth
The width of the subsidence trough l in the overburden is provided by:
L - length of subsidence trough at terrain surface
r - excavation radius
Z - depth of center point
z - analysis depth
The values l and S are then used to determine the shape of subsidence trough in overburden above an excavation.
Calculation of other variables
A vertical settlement is accompanied by the evolution of horizontal displacements which may cause damage to nearby buildings. The horizontal displacement can be derived from the vertical settlement
provided the resulting displacement vectors are directed into the center of the excavation. In such a case the horizontal displacement of the soil is provided by the following equation:
x - distance of point x from axis of excavation
s(x) - settlement at point x
Z - depth of center point of excavation
The horizontal strains are obtained by differentiating the horizontal displacements along the x axis; in the transverse direction they can be expressed using the following equation:
x - distance of point x from axis of excavation
s(x) - settlement at point x
Z - depth of center point of excavation
L[inf] - distance of inflection point
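Taken together with the Gaussian trough, the two relations above give a compact sketch. The strain expression is the derivative of u(x) = x·s(x)/Z for the Gaussian shape, assumed here because the source equation was an image; the numbers are illustrative:

```python
import math

def settlement(x, S_max, L_inf):
    return S_max * math.exp(-x**2 / (2.0 * L_inf**2))

def horiz_displacement(x, S_max, L_inf, Z):
    # Displacement vectors assumed to point at the excavation centre:
    #   u(x) = x * s(x) / Z
    return x * settlement(x, S_max, L_inf) / Z

def horiz_strain(x, S_max, L_inf, Z):
    # du/dx for the Gaussian trough:
    #   eps(x) = s(x)/Z * (1 - x**2 / L_inf**2)
    return settlement(x, S_max, L_inf) / Z * (1.0 - x**2 / L_inf**2)

# The horizontal strain changes sign at the inflection point.
eps_inner = horiz_strain(4.0, 0.010, 8.0, 20.0)
eps_at_inf = horiz_strain(8.0, 0.010, 8.0, 20.0)
```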
Analysis of failure of buildings
The program first determines the shape and dimensions of subsidence trough and then performs analysis of their influence on buildings.
The program offers four types of analysis:
• determination of tensile cracks
• determination of gradient damage
• determination of a relative deflection of buildings (hogging, sagging)
• analysis of the inputted section of a building
Tensile cracks
One of the causes responsible for the damage of buildings is the horizontal tensile strain. The program highlights individual parts of a building with a color pattern that corresponds to a given
class of damage. The maximum value of tensile strain is provided in the text output.
The program offers predefined zones of damage for masonry buildings. These values can be modified in the frame "Settings". Considerable experience with a number of tunnels excavated below build-up
areas allowed for elaborating the relationship between the shape of subsidence trough and damage of buildings to such precision that based on this it is now possible to estimate an extent of
compensations for possible damage caused by excavation with accuracy acceptable for both preparation of contractual documents and for contractors preparing proposals for excavation of tunnels.
Recommended values for masonry buildings from one to six floors are given in the following table.
Horizontal strains (per mille)

Strain        Damage                        Description
0.2 - 0.5     Microcracks                   Microcracks
0.5 - 0.75    Little damage - superficial   Cracks in plaster
0.75 - 1.0    Little damage                 Small cracks in walls
1.0 - 1.8     Medium damage, functional     Cracks in walls, problems with windows and doors
over 1.8      Large damage                  Wide open cracks in bearing walls and beams
Gradient damage
One of the causes leading to the damage of buildings is the slope (gradient) of the subsidence trough. The program highlights individual parts of a building with a color pattern that corresponds to a
given class of damage. The maximum value of the gradient is provided in the text output.
Recommended values for masonry buildings from one to six floors are given in the following table.
Gradient             Damage                        Description
1:1200 - 1:800       Microcracks                   Microcracks
1:800 - 1:500        Little damage - superficial   Cracks in plaster
1:500 - 1:300        Little damage                 Small cracks in walls
1:300 - 1:150        Medium damage, functional     Cracks in walls, problems with windows and doors
steeper than 1:150   Large damage                  Wide open cracks in bearing walls and beams
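The band table above maps directly onto a small classifier (a sketch: band edges are treated as half-open here, and gradients flatter than 1:1200 are reported as no expected damage):

```python
def gradient_damage(slope):
    """Damage class for a trough gradient expressed as a slope
    (e.g. 1:400 -> 1/400), per the band table above for masonry
    buildings with one to six floors."""
    if slope < 1.0 / 1200.0:
        return "no damage expected"
    if slope < 1.0 / 800.0:
        return "microcracks"
    if slope < 1.0 / 500.0:
        return "little damage - superficial"
    if slope < 1.0 / 300.0:
        return "little damage"
    if slope < 1.0 / 150.0:
        return "medium damage, functional"
    return "large damage"

cls = gradient_damage(1.0 / 400.0)  # a 1:400 gradient
```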
Relative deflection
The definition of the term relative deflection is evident from the figure. The program searches for the regions of a building with the maximum relative deflection, both upwards and downwards. Clearly,
from the building-damage point of view the most critical case is the relative deflection upwards, which leads to "tensile opening" of the building.
Relative deflection
Verification of the maximum relative deflection is left to the user – the following tables list the ultimate values recommended by literature.
Ultimate relative deflection Δ/l (unreinforced bearing walls)

Type of damage                 Burland and Wroth    Meyerhof   Polshin and Tokar   ČSN 73 1001
Cracks in walls                L/H = 1: 0.0004      0.0004     0.0004              0.0015
                               L/H = 5: 0.0008
Cracks in bearing structures   L/H = 1: 0.0002      –          –                   –
                               L/H = 5: 0.0004
Failure of a section of building
In a given section the program determines the following variables:
• maximum tensile strain
• maximum gradient
• maximum relative deflection
• relative gradient between inputted points of a building
Evaluation of the analyzed section is left to the user – the following tables list the recommended ultimate values of relative rotation and deflection.
Ultimate relative gradient (frame structures and reinforced bearing walls)

Type of damage     Skempton   Meyerhof   Polshin and Tokar   Bjerrum   ČSN 73 1001
Structural         1/150      1/250      1/200               1/150     –
Cracks in walls    1/300      1/500      1/500               1/500     1/500
Ultimate relative deflection Δ/l (unreinforced bearing walls)

Type of damage                 Burland and Wroth    Meyerhof   Polshin and Tokar   ČSN 73 1001
Cracks in walls                L/H = 1: 0.0004      0.0004     0.0004              0.0015
                               L/H = 5: 0.0008
Cracks in bearing structures   L/H = 1: 0.0002      –          –                   –
                               L/H = 5: 0.0004
Math Forum Discussions
Topic: ways a curving line can cross itself
Replies: 2 Last Post: Nov 8, 2010 6:18 PM
digory ways a curving line can cross itself
Posted: Oct 31, 2010 11:47 PM
Posts: 2
Registered: 10/31/10

if you draw a line on a piece of paper
(not a perfectly straight one but a line that can curve as it likes)
and if that line bends around and crosses itself once
there are two possible configurations
either it will hide both the start and end of the line inside the loop
or it will leave them outside the loop
if it crosses itself twice there are twelve possible configurations
these twelve form three families
if A is a time that the line passes through the first crossing
and B is a time it passes through the second crossing
these three families are
AABB (which has four members)
ABAB (which has two)
ABBA (which has six)
each letter appears twice because that is what a crossing is, a time the line comes to the same place twice.
there is no such family as BAAB for example since it is just the wrong way to write ABBA
using this notation it would appear that there are 15 possible families of three-crossing figures but two are equivalent to each other (if the start and end of the line are the same)
and two have no members because they are impossible to actually draw.
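The family counts can be checked by enumeration. A "family" here is a double-occurrence word up to relabelling (renaming letters in order of first appearance), which is the same thing as a pairing of the 2n passage times, so the counts follow the double factorial (2n−1)!!: 1, 3, 15, … Note this counts words, not drawable figures — as the post says, not every word for three crossings can be realized as a plane curve. A sketch:

```python
from itertools import permutations

def crossing_families(n):
    """All double-occurrence words on n letters (each letter appearing
    exactly twice), identified up to relabelling: letters are renamed
    in order of first appearance, e.g. BAAB -> ABBA."""
    letters = [chr(ord("A") + i) for i in range(n)]
    canonical = set()
    for perm in set(permutations(letters * 2)):
        rename = {}
        for ch in perm:
            rename.setdefault(ch, chr(ord("A") + len(rename)))
        canonical.add("".join(rename[ch] for ch in perm))
    return sorted(canonical)

two = crossing_families(2)    # the three families from the post
three = crossing_families(3)  # the fifteen families from the post
```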
-these are very interesting shapes to play with i encourage you to take a stroll in this little world and draw them yourself (i'll post a picture if i can get my scanner working)
-has this been studied? if so do you know what this set of figures is called? or is it equivalent to some other system that has been studied?
-the closest thing i could find was knot theory but that is quite different, as far as i can tell it has only a cosmetic similarity.
Position as a function of time.
How do we find out how the position of an object varies with time, when it is moving under the influence of constant acceleration?
Well we can start by using how the velocity depends on time. How does it? We just figured it out, it's eqn. 3.11. So how can we get from v to x? We use the definition of instantaneous velocity, eq.
3.3. As in the above, we just turn around this equation, in symbols:

dx/dt = v(t) = v0 + a t

So now we have a slightly more complicated differential equation to solve. Following the same line of reasoning as above, we ask: what function has v0 + a t as its derivative? The answer is

x(t) = v0 t + (1/2) a t^2 + D     (3.13)

Again we have some arbitrary constant D that we can't determine with the information given. We need yet another initial condition to determine what that constant is.

So we follow the same steps as above. We say that the position of the object at t=0 is given; call it x0. At t=0 eq. 3.13 becomes x(0) = D, so D is just the initial position x0. Then eq. 3.13 gives

x(t) = x0 + v0 t + (1/2) a t^2
So if you know the initial position, the initial velocity, and the acceleration, then you can determine the position of the object as a function of time.
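The final result can be summarized with a short sketch (the numbers are illustrative):

```python
def position(t, x0, v0, a):
    """x(t) = x0 + v0*t + (1/2)*a*t**2 for constant acceleration a,
    with initial position x0 and initial velocity v0 at t = 0."""
    return x0 + v0 * t + 0.5 * a * t**2

def velocity(t, v0, a):
    """v(t) = v0 + a*t, the time derivative of position()."""
    return v0 + a * t

x = position(2.0, x0=1.0, v0=3.0, a=-2.0)   # 1 + 6 - 4 = 3
v = velocity(2.0, v0=3.0, a=-2.0)           # 3 - 4 = -1
```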
Joshua Deutsch
Mon Jan 6 00:05:26 PST 1997 | {"url":"http://physics.ucsc.edu/~josh/6A/book/notes/node22.html","timestamp":"2014-04-20T13:19:15Z","content_type":null,"content_length":"4084","record_id":"<urn:uuid:e956eee5-5903-4236-b4c0-4d05c0007769>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
Scotts Valley Math Tutor
...Also you will have to make sense of one disagreement between a couple of scientists. These are qualities or techniques that will be useful the rest of your life because as an informed reader
you will have to analyze scientific articles written in newspaper or on the web. Taking this test normal...
32 Subjects: including calculus, physics, statistics, ADD/ADHD
...I enjoy tutoring for the ACT because I feel that this test assesses students more on their math knowledge than on their test-taking strategy. I have had excellent results with the students I
have helped (there's a review from one of them on my profile page), and my experience/familiarity with th...
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have tutored more than 80 students and my recent student told me about this platform; therefore, I am here. Most of my students have received a better grade and much better understanding in
accounting. I can tutor on weekends and weekday nights.
7 Subjects: including algebra 1, prealgebra, accounting, Chinese
...BTW, Every AP calculus student I've tutored through the 2012 exams has gotten a 5. Also I currently teach Single and Multi-variable calculus at a community college. I've tutored kids and adults
in Prealgebra.
14 Subjects: including statistics, algebra 1, algebra 2, calculus
...Some of my greatest academic strengths are in essay writing and editing, biological sciences (anatomy and physiology), math (particularly word problem solving), reading and phonetics,
psychology and childhood development. In addition to my personal education, I have a child with ADHD and underst...
13 Subjects: including algebra 1, prealgebra, reading, English | {"url":"http://www.purplemath.com/scotts_valley_ca_math_tutors.php","timestamp":"2014-04-19T20:00:00Z","content_type":null,"content_length":"23775","record_id":"<urn:uuid:25ec3e29-60c1-4e0f-8294-fbe75217b855>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00434-ip-10-147-4-33.ec2.internal.warc.gz"} |
inverse of function
September 20th 2012, 10:00 PM
inverse of function
Hi everyone,
I'm Reza, a civil engineering student.
I ran into a problem: I need to invert this function

y = a*x + b*x^n

on the domain [0, inf),
and I tried MATLAB and there was no answer.
Can anyone help me please?
September 20th 2012, 10:43 PM
Re: inverse of function
For arbitrary n, that won't have an inverse function that you can write down, even when such an inverse function exists.
I'll ignore all the special cases (a = 0, b = 0, solvability if n=2, what it means if n=1 or n = 0, etc.):
For there to even be an inverse, the function must be one-to-one on that domain, so your function must be strictly increasing or decreasing on $(0, \infty)$.
Checking that, $\frac{dy}{dx} = a + bnx^{n-1}$. Just by inspection, if a and b are both positive, or negative, then $\frac{dy}{dx}$ will be always positive or negative on $(0, \infty)$.
If a and b have opposite sign, then find the real solutions for $\frac{dy}{dx} = 0$. Get $x_0 = \left( \frac{-a}{bn} \right)^{\frac{1}{n}}$.
Will have that $x_0 >0$, and $\frac{d^2y}{dx^2}(x_0) = bn(n-1)x_{0}^{n-2} \ne 0$, so it's either a local maximum or a local minimum - either way, y isn't one-to-one near $x_0$.
Thus, when both a and b are non-zero, and n>2, y is one-to-one on the domain of positive reals if and only if a and b have the same sign.
Note that the reason you can't write down the inverse, even when it has one, is that the practical procedure is to switch x & y, then solve for y, your inverse function.
Here that means: Solve for y: $x = ay + by^n$, or rewritten: $by^n + ay - x = 0$. There's no general formula for that for all n.
September 20th 2012, 11:43 PM
Re: inverse of function
thanks for your answer, it was so helpful
a and b in my function are positive, so this function is increasing over the domain.
and I'm asking: if n is a real number between 3.5 and 5, is there an inverse function for this function? (I mean for a specific n)
September 21st 2012, 12:45 AM
Re: inverse of function
"i m asking if n is a real number between 3.5 and 5, is there any inverse function for this function?(i mean for a specific n)"
Usually n stands for an integer, but looking over it, everything from before still holds in your case when n is a real number between 3.5 and 5, a and b are positive, and x >=0.
Unfortunately, that includes that there won't be a pretty formula for the inverse function. It exists, but it won't have a nice pretty formula.
There are techniques to numerically estimate the inverse, if you intended to use it in a computer program.
You could also produce a power series near a point that would converge to the inverse function in some neighborhood of that point.
If you have a practical need to know the inverse, then you'll be ok using a computer program. If you've a theoretical need for the inverse function, then just knowing that it exists tells you a
great deal (for theoretical purposes, the functions observed usually don't have a pretty formula describing them).
September 21st 2012, 01:31 AM
Re: inverse of function
"There are techniques to numerically estimate the inverse, if you intended to use it in a computer program.
You could also produce a power series near a point that would converge to the inverse function in some neighborhood of that point"
I actually need it for coding.
in my coding i get the y value form input and i should calculate x by this formula and output the result
can you explain more or suggest some article to me which i can read and find out what to do?
best regards
September 21st 2012, 02:53 AM
Re: inverse of function
I don't know where to look in the computer science literature - I'd begin googling from scratch. The types of data you're expecting will dictate the algorithm you'll want to use.
$[0, \infty)$ is a pretty big range. I'll relate some ideas:
Method 1:
If you'll have random huge numbers, maybe you'll approximate $x = ay + by^n$ as $y_0 = (\frac{x}{b})^{1/n}$, and then use Newton's method to get close to the solution:
Use Newton's Method, starting with $y_0$ to find $\tilde{y}$ s.t. $g(\tilde{y}) = 0$, where $g(y) = by^n +ay -x$, and $x$ is a constant.
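A sketch of Method #1 — Newton's method with the large-x starting guess described above. It assumes a, b > 0 and n > 1 so the map is strictly increasing; the numeric values are illustrative:

```python
def invert(x, a, b, n, tol=1e-12, max_iter=100):
    """Solve x = a*y + b*y**n for y >= 0 by Newton's method,
    assuming a, b > 0 and n > 1 so the map is strictly increasing.
    Start from the large-x approximation y0 = (x/b)**(1/n)."""
    if x == 0.0:
        return 0.0
    y = (x / b) ** (1.0 / n)
    for _ in range(max_iter):
        g = b * y**n + a * y - x       # g(y) = 0 at the solution
        dg = n * b * y**(n - 1) + a    # g'(y) > 0 for y > 0
        step = g / dg
        y -= step
        if abs(step) < tol:
            break
    return y

a, b, n = 2.0, 0.5, 4.2
y = invert(100.0, a, b, n)
```

Because g is convex and increasing for y > 0 and the starting guess overshoots the root, the Newton iterates decrease monotonically to the solution, so the method is safe here.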
Method #2:
If you'll be using it in succession with several numbers that are close to each other (29.021, 29.022, 29.023, etc), then a Taylor polynomial (implicit differentiation to get the derivatives of
the inverse function) would seem natural. One problem with this is that you'll want to bound the error, which requires bounding the derivatives of the inverse function over a range. That would
take some thought.
Method #3
There's always the brute-force method of testing the original function and seeing if it's too high or too low, kinda like a binary search. This nicely exploits the fact that your function is always increasing.
Ex: 2 < sqrt(7) < 3. Try 2.5. 2.5^2 = 6.25, too low. Thus 2.5 < sqrt(7) < 3. Try 2.7. 2.7^2 = 7.29, too high. Thus 2.5 < sqrt(7) < 2.7. Etc.
I'm sure there are lots of other ways, and combinations of these ways, but this isn't something I know about - I'm just talking off the top of my head here. Also, it depends on your expectations
of the data, and your accuracy demands, and your speed requirements. I'm sure there's, somewhere, some good computer science literature about this. Google time!
September 21st 2012, 04:41 AM
Re: inverse of function
thank you very very much it was so helpful
Best regards | {"url":"http://mathhelpforum.com/calculus/203803-inverse-function-print.html","timestamp":"2014-04-19T12:27:10Z","content_type":null,"content_length":"13537","record_id":"<urn:uuid:b1d5ed72-2e0e-4fa8-87ac-13dfbf79a001>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00333-ip-10-147-4-33.ec2.internal.warc.gz"} |
CFD - Why F1 Teams Change Wings For Monza - Angle of Attack
My CFD Article on Why F1 Teams Change Wings for Monza was very popular and I got a request to go in to more detail, so in this F1 CFD Article I will focus on Angle of Attack and Drag Reduction.
In case you haven't read it yet, this old Article Covers the Coefficient of Lift itself, which is integral in the understanding of Induced Drag.
At Monza, the most important thing an F1 Team can do is to reduce Drag. You've seen this in articles by Matt Somers, Craig Scarborough, F1Technical, and others. One of the things the F1 Teams will do
in search of Drag Reduction is reduce the Angle of Attack of their Wings. Particularly the Rear Wings on the Formula One Car.
It is very intuitive that reducing the Angle of Attack on an F1 Rear Wing would help with Drag Reduction. You already know this from sticking your hand out the window. What I want to do is show you
some of the Engineering behind this. Want to be a Motorsports Engineer some day? You have to be able to handle some equations.
First, we will look at the Drag Polar like we did in the previous, more general article on Monza F1 Wings.
The Coefficient of Drag includes the Zero Lift Drag and the Induced Drag. The Induced Drag is dependent on the Coefficient of Lift, the Span Efficiency, and the Aspect Ratio of the F1 Rear Wing.
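In its standard finite-wing form, that Drag Polar reads:

```latex
C_D = C_{D,0} + \frac{C_L^2}{\pi \, e \, AR}
```

where CD0 is the Zero Lift Drag coefficient, e is the Span Efficiency, and AR is the Aspect Ratio.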
Where is Angle of Attack in that equation? It's in the Coefficient of Lift, the only item in the equation that gets squared. This means from an outside glance, the biggest Drag Reduction can likely
come from a reduction in the Coefficient of Lift. Since the Coefficient of Lift depends on the Angle of Attack, chances are it's a pretty important parameter. Lets run some CFD Simulations on this F1
Wing and see how they check with the Theory.
Here is an Animation I made from some XFOIL CFD Simulations on the NACA 2412 Airfoil (the one I'll be using for the Angle of Attack for F1 Rear Wings at Monza Article). I inverted the Wing Section
such that it represents an F1 Rear Wing for Downforce rather than an Aircraft Wing for Lift. The simulation below is not on a Finite Wing, but rather on a Wing Section or Airfoil.
The Angle of Attack of the F1 Race Car Rear Wing is incrementally increased in this CFD Simulation. As the Angle of Attack increases the Coefficient of Lift increases near linearly (you can see this
immediately above, and in the charts below). This is the Lift Slope in action. The XFOIL CFD Simulation is in agreement with the Airfoil Theory.
For every extra degree of Angle of Attack the Lift Coefficient increases, up to a certain point. In the case of the XFOIL CFD Simulation that seems to be somewhere around say 16 degrees. In the image
immediately above it is shown at 15 degrees.
Why the angle of attack changes the Lift Coefficient this way is beyond the scope of this article. If you are interested in knowing more, look up things like Thin Flat Plate Airfoils, Thin Airfoils,
Thin Cambered Airfoils, and Lifting Line Theory (for finite wings). Finite Wings by the way are wings with a Span less than infinity. So the CFD Simulation in XFOIL was on an Airfoil whereas the
simulation I am showing in Paraview was a Low Aspect Ratio Finite Race Car Wing. I'll likely write an article on these topics at a later date.
So I have gone as far as I will in this CFD Article showing that an increase in Angle of Attack of an F1 Rear Wing will increase the Coefficient of Lift in a Linear relationship. Now I will talk
about why the increase in the Coefficient of Lift increases Drag.
Lift, or Downforce, is created mostly due to a difference in Pressure between the top and bottom surfaces of a Race Car Wing. I say mostly since Viscous/Shear effects also apply but in general the
Pressure Difference is stronger. When this Pressure Difference is created, air at the High Pressure side of the Race Car Wing wants to move to the Low Pressure side. This creates Vortices, and
Induced Drag. There are other factors in Induced Drag as seen in the equation above, but the Coefficient of Lift affects it the most (due to the squared relation).
You can see the effects of this sometimes on Formula One cars on humid days. Here, you can see the effects from Animations I made using CFD and ParaView.
Basically the larger the Pressure Difference the stronger the Vortices and Induced Drag, and an increase in Angle of Attack tends to create an increase in Pressure Difference. This known phenomenon
is backed up by the CFD results on this simulated Formula One Rear Wing shown above.
Finally the image below shows the Lift, Drag, L/D, and Lift Coefficients for the Formula One Race Car Rear Wing resulting from the CFD Simulations at varying Angles of Attack.
These trends make sense [Note: The Drag was multiplied by 5 and the Coefficient of Lift was multiplied by 100 so everything could fit on one graph]. The CFD Simulation on this NACA 2412 representing
a Formula One Rear Wing is validating the theory. The Angle of Attack causes a Linear increase of the Lift Coefficient. The slope of this line (CL line) is the Lift Slope, which is dependent on
things such as the Airfoil or Wing Section chosen, and the Aspect Ratio of the Wing (in the case of Finite Wings). The Drag fits a Second Order Polynomial. Since the Coefficient of Lift is increasing
Linearly, the Coefficient of Drag should be increasing to the Second Power as indicated by the Drag Polar Equation shown above.
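To make those two trends concrete, here is a small Python sketch of the relationships (the CD0, Span Efficiency, and Aspect Ratio values are illustrative placeholders, not the ones from my CFD Simulation):

```python
import math

def finite_wing_polar(alpha_deg, cd0=0.01, e=0.9, ar=3.0, a0=2.0 * math.pi):
    """Illustrative CL and CD for a finite wing at a given angle of attack.

    Uses the lifting-line corrected lift slope a = a0 / (1 + a0/(pi*e*AR))
    and the drag polar CD = CD0 + CL^2 / (pi*e*AR). All constants here are
    placeholder values, not taken from the article's simulation.
    """
    a = a0 / (1.0 + a0 / (math.pi * e * ar))   # per-radian lift slope
    cl = a * math.radians(alpha_deg)           # linear in alpha (pre-stall)
    cd = cd0 + cl ** 2 / (math.pi * e * ar)    # induced drag grows as CL^2
    return cl, cd
```

Doubling the Angle of Attack doubles CL but quadruples the induced part of the drag, which is exactly the Second Order behaviour in the chart.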
Air Density, Velocity, and Planform Area were all kept constant, the only thing changed was the Angle of Attack which changed the Coefficient of Drag due to the Induced Drag portion of the Drag
In this CFD Article I talked about the trends in Formula One where race teams seek Drag Reduction, and I focused on reductions in Angle of Attack to accomplish this. The Theory of the Drag Polar was
introduced, and CFD Simulations were run on an NACA 2412 Profile Finite Wing representing an F1 Rear Wing in order to check against the Theory. The CFD Simulations were in agreement with the
Drag Polar Equation.
Hopefully you understand the Engineering behind low Angle of Attack Formula One Rear Wings at Monza a bit better now. I'd love to hear your feedback in the comment sections below. If you have any
questions I'll do my best to answer them. | {"url":"http://consultkeithyoung.com/content/cfd-why-f1-teams-change-wings-monza-angle-attack","timestamp":"2014-04-16T21:51:31Z","content_type":null,"content_length":"121246","record_id":"<urn:uuid:37a44970-bec7-45e7-9d3e-ef8ab651fa95>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to solve the 36 Cube puzzle – hints & solution
For Christmas, I got the ThinkFun 36 Cube
After some tries, I came to the conclusion that this puzzle is the work of the devil and that I should not waste more brain cycles on solving it. So I wrote a little python script to solve the puzzle
for me.
My program quickly came up with a correct placement for 34 towers – but it failed to find the complete solution.
[('P', 5), ('Y', 3), ('O', 2), ('B', 1), ('R', 4), ('G', 6)]
[('Y', 4), ('O', 1), ('P', 6), ('R', 2), ('G', 5), ('B', 3)]
[('O', 6), ('B', 5), ('R', 3), ('G', 4), ('P', 1), ('Y', 2)]
[('R', 1), ('G', 2), ('Y', 5), ('P', 3), ('B', 6), ('O', 4)]
[('B', 2), ('P', 4), ('G', 1), ('Y', 6), ('O', 3), ('R', 5)]
[('G', 3), ('R', 6), ('B', 4), ('O', 5), ('X', 2), ('X', 1)]
P = Purple, Y = Yellow, O = Orange, B = Blue, R = Red, G = Green, X = Empty
The number is the size of the tower.
As you can see, I didn’t waste much time on making the output pretty
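The heart of such a script is a backtracking search for what mathematicians call a Graeco-Latin square: each color and each height exactly once per row and column, and every (color, height) pair used exactly once. Here is a simplified generic sketch (not my actual script, which also has to model the fixed base heights):

```python
from itertools import product

def graeco_latin(n):
    """Search for an order-n Graeco-Latin square by backtracking.

    Returns an n x n grid of (latin, greek) pairs where each symbol
    appears once per row and column and every ordered pair occurs
    exactly once, or None if no such square exists.
    """
    grid = [[None] * n for _ in range(n)]
    used_pairs = set()

    def ok(r, c, a, b):
        if (a, b) in used_pairs:
            return False
        for k in range(c):                  # earlier cells in the same row
            if grid[r][k][0] == a or grid[r][k][1] == b:
                return False
        for k in range(r):                  # earlier cells in the same column
            if grid[k][c][0] == a or grid[k][c][1] == b:
                return False
        return True

    def solve(pos):
        if pos == n * n:
            return True
        r, c = divmod(pos, n)
        for a, b in product(range(n), repeat=2):
            if ok(r, c, a, b):
                grid[r][c] = (a, b)
                used_pairs.add((a, b))
                if solve(pos + 1):
                    return True
                used_pairs.discard((a, b))
                grid[r][c] = None
        return False

    return grid if solve(0) else None
```

graeco_latin(3) finds a square instantly and graeco_latin(2) proves none exists; order 6 (the naive reading of the 36 Cube) also has none (Euler conjectured it, Tarry proved it), although the plain search above needs symmetry reductions, like fixing the first row, to finish in reasonable time.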
After spending lots of time verifying that my program was working correctly, I became impatient and googled for help. I found an answer, but it revealed too much, taking away all the fun.
Therefore, I split my solution into multiple hints. If you are stuck, reveal just one of them at a time and try to figure it out by yourself. It is way more rewarding!
Hint #1 (show): Your assumptions are probably wrong.
Now go back and try to solve it. I’m waiting here.
Hint #2 (show): The assumption that all towers of the same size only differ in color is wrong.
Hint #3 (show): There are two towers which do fit on slots where the other towers of the same size do not fit.
Hint #4 (show): The two special towers are the yellow one of heigh 5 and the orange one of height 6.
Hint #5 (show): The yellow tower of height 5 has to go to position (1,2) and the orange tower of height 6 to (3,2) in my coordinate system.
Even if you uncovered all hints, the puzzle is still far from solved. You can still tinker with it forever.
Spoiler alert: Don’t uncover the solution, unless you are really desperate!
49 thoughts on “How to solve the 36 Cube puzzle – hints & solution”
1. Daniel,
I came across your “36 Cube” solution page and enjoyed it a great deal. I don’t know whether my name is familiar, but it might be if you read the very fine print on either the puzzle packaging or
the rules pamphlet, where I’m cited as the inventor of the puzzle.
Truth be told, I once rued the very concept of solution pages such as yours, but I liked the way you handled the process and slowly unwrapped the solution. I couldn’t have asked for more, and I
certainly hope that you enjoyed the puzzle.
Although the puzzle has been out for a year-plus now, I have yet to see the solution process described as I originally described it to ThinkFun. Specifically, I was envisioning that people would
get to 34, either repeatedly on their own or perhaps with computer assistance. (Yes, I took some pleasure in the fact that the puzzle couldn’t be solved by computer alone.) There are many
different ways to achieve 34 correct towers, and the common denominator is that they all have an intractable set of four towers that are just plain wrong — two colors and two heights, but no way
to make them flat. Of course, that’s the situation in a 2×2 version of this puzzle, and in fact the 2×2 and 6×6 sizes are the only cases where no “Euler square” exists. But once you reduce to the
2×2 case you can see that you can make the towers flat with a little hijinx. That’s precisely what I did in designing my original prototype, and I thought that maybe the characterization of the
“34s” would be a common route to success, but thus far I haven’t heard it mentioned.
Anyway, congrats and thanks again for the write-up.
2. Derrick,
I’m glad you like my solution and the way I present it. I don’t know if anyone who finds this page actually reveals the first hint and then goes back to figure it out, but I think it is better to
give people the choice.
I don’t know if I would have come up with a complete solution on my own, but I enjoyed the puzzle a lot anyway and I think even if one knows the solution there is still a lot of fun in store. For
example, one could code a more elegant solver, as my approach is rather brute force, or dive into the mathematical background of the puzzle.
Thanks a lot for taking the time to comment!
3. Hi Derrick and Daniel – this is Andrea at ThinkFun – I enjoyed reading Daniel’s posting and of course Derrick’s response – I will be sure that Bill Ritchie @ ThinkFun sees this so that we can
continue to evolve the best “hints” and unraveling of solution – Cheers! Andrea
4. Hi Folks
In case you’re interested, I’ve included the VB.net source code for solving the puzzle. It’s probably similar to Daniel’s but I wrote it from scratch just for fun. Turns out there are four unique
solutions (not counting just swapping all the towers of two colors.)
Derrick, I’m curious that you seemed to be alluding to the idea that there was another way to solve this puzzle than a depth first search (either human or computer). Can you elaborate?
Sub Main()
    Dim h(,) As Short = { _
        {4, 2, 1, 0, 3, 5}, _
        {3, 0, 4, 1, 4, 2}, _
        {5, 4, 2, 3, 0, 1}, _
        {0, 1, 5, 2, 5, 3}, _
        {1, 3, 0, 5, 2, 4}, _
        {2, 5, 3, 4, 1, 0} _
    }
    Dim u(5, 5) As Boolean 'color, height
    Dim ux(5, 5) As Boolean 'color, x
    Dim uy(5, 5) As Boolean 'color, y
    Dim c(5, 5) As Short
    Dim x As Short = 0
    Dim y As Short = 0
    For i As Short = 0 To 5
        For j As Short = 0 To 5
            u(i, j) = False
            ux(i, j) = False
            uy(i, j) = False
            c(i, j) = -1
        Next
    Next
    Dim ct As Short = 0
    Do
        'find next one for x,y
        Dim h1 As Short = h(x, y)
        For c1 As Short = c(x, y) + 1 To 5
            If Not u(c1, h1) AndAlso Not ux(c1, x) AndAlso Not uy(c1, y) Then
                ct += 1
                c(x, y) = c1
                If x = 5 And y = 5 Then
                    For i As Short = 0 To 5
                        For j As Short = 0 To 5
                            Console.Write(c(j, i) & " ")
                        Next
                    Next
                    Console.WriteLine("(" & ct & ")")
                    Continue Do
                End If
                u(c1, h1) = True
                ux(c1, x) = True
                uy(c1, y) = True
                x += 1
                If x = 6 Then
                    x = 0
                    y += 1
                End If
                Continue Do
            End If
        Next
        'no candidate fits here - backtrack
        c(x, y) = -1
        x -= 1
        If x = -1 Then
            x = 5
            y -= 1
            If y = 0 Then 'no point rolling back to the first row, which is arbitrary.
                Exit Do
            End If
        End If
        Dim c2 As Short = c(x, y)
        Dim h2 As Short = h(x, y)
        u(c2, h2) = False
        ux(c2, x) = False
        uy(c2, y) = False
    Loop
End Sub
5. Daniel, can you elaborate on what you meant in the clues? I didn’t read them properly until now, and I find it strange that I came to the same solution as you, without handling any exceptions. Is
there really something unique about some of the pieces? I thought the unique thing was that some of the rows of the base had more than one base of a particular height (thus getting around Euler’s
pesky problem)
6. Hi Colin,
thanks for your comprehensive response and your solution.
I think the fields (1,2) and (3,2) are special. At (1,2), any tower of height 6 matches, but only the yellow tower of height 5. At position (3,2), any tower of height 5 matches, but only the
violet tower. I hope that answers your question.
7. OK, time for a confession. I never touched the actual puzzle. I got my tower heights from your solution (discarding your color information). So I never realized that there was a ‘trick’ and just
threw the processing power at it. I assumed that the base was obviously not a latin square. But apparently it does appear to be one. Consequently, I retract my original comments
So this isn’t a puzzle in the genre of rush hour, it’s a ‘find the mechanical trick’ puzzle.
Incidentally I think thinkfun should make a version of this where no towers are special, but the base can be shuffled to create challenges of varying difficulty. What do you think?
I’d be interested to know if you think this would have been a better puzzle if there were no secrets.
8. Thank you for the hints!!! I looked through the first three till the catch hit me
9. Hi, Daniel
Thank you for your hints.
I tried to use your script with Python 3.1.2 without success. It only runs in version 2.7.
My 36 cube – selled in Germany – differs from yours: the special towers are yellow (height 5 position 1,2) and orange (height 6, position 3,2). The positions are the same.
Next I modified your script changing the sequence of the colors.
The first color must be one without having a special tower!
The second color always must be one having a special tower!
Now you may change the positions of the colors.
Result: 48 solutions to 36 cube.
If you like you may redesign your script to generate all possible solutions at once – without printing the temporary results
Thanks again for your effort.
10. After trying to solve this puzzle for 15 hours (with coloured crayons and paper) I wanted to explain to the person who gave me this puzzle that it couldn’t be done. You can only solve 34 towers.
I was really agitated.
While explaining it, I said: ‘look, you can only solve it if this tower could fit here.’ And I took the yellow 5 and orange 6 and tried to swap them. To my great surprise it fitted.
11. i think ur solution is slightly wrong (i haven’t run the code) but in the solution above …
[('P', 5), ('Y', 3), ('B', 2), ('O', 1), ('R', 4), ('G', 6)]
[('B', 4), ('P', 1), ('Y', >>5), ('G', 2), ('O', >>5), ('R', 3)]
[('R', 6), ('G', 5), ('P', 3), ('Y', 4), ('B', 1), ('O', 2)]
[('G', 1), ('R', 2), ('O', >>6), ('B', 3), ('Y', >>6), ('P', 4)]
[('Y', 2), ('O', 4), ('R', 1), ('P', 6), ('G', 3), ('B', 5)]
[('O', 3), ('B', 6), ('G', 4), ('R', 5), ('P', 2), ('Y', 1)]
but in those two places the number is repeated in the same row …
12. No, this is not a mistake, as explained in hint #4 and #5. Let me know if it is still unclear.
13. // 36 cubes is a constraint game with a twist
// given a set of blocks with numbers 1 to 6
// a base is provided that allows different colors to be placed with different height posts
// arange in a 6×6 all blocks with unique colors in each vertical and horizontal row
// Additionally, all placed posts must be of the same height
// if you make the assumption that the base is a latin square
// i.e. that every position accepts one height for any color
// then the following constraint program (written in FOID) will demonstrate that no solution exists
// send me some email and I’ll send you the working FOID solution
// bmatichuk @ gmail.com
// Bruce Matichuk, 2010
int Num
Color = {Blue;Green;Orange;Purple;Red;Yellow}
int Row
int Column
// every color has one height
! color[Color] num[Num] : Block(color,num).
// latin square assumption for base
! x[Row] y[Column] : ?1n : ?c: (ValidNum(x,y,n) & Block(c,n) & Position(c,x,y,n)).
// The next two lines assume that block heights and colors are unique in each row and column
! x y1 y2 c1 c2 n1 n2: (Position(c1,x,y1,n1) & Position(c2,x,y2,n2) & y1 ~= y2) => (c1 ~= c2 & n1~=n2).
! x1 x2 y c1 c2 n1 n2: (Position(c1,x1,y,n1) & Position(c2,x2,y,n2) & x1 ~= x2) => (c1 ~= c2 & n1~=n2).
// This last line states that each block usage for a given height and position is unique
! x1 x2 y1 y2 c n1 n2 : (Position(c,x1,y1,n1) & Position(c,x2,y2,n2) & x1~=x2 & y1~=y2) => n1~=n2.
// if you assume that the base is a latin square, then the first row can be any choice of color array
Num = {1..6}
Row = {1..6}
Column = {1..6}
// according to latin square assumption for base
ValidNum = {
14. When I first got my puzzle, I was eager to try it out, and got all the way to 34 pieces without “cheating”, although the 2 pieces I had left were not the 5 and 6 pieces, so there was no way I was
going to stumble on the solution by randomly swapping pieces.
I then hit the internet to see if there were any hints on building a program to brute force the solution (or at least point me in the right direction as I do enjoy solving things on my own). In
my search I discovered the wiki page for the Graeco-Latin Square (http://en.wikipedia.org/wiki/Graeco-Latin_square) and in there, Euler’s 36 Officer’s problem. When I found that, I was thoroughly
confused. I thought there was no way: either the wiki was wrong (which I highly doubted), or the inventor of this puzzle had proven a conjecture incorrect after it had been proven correct (which
I also had my suspicions about).
I then read a blurb that the inventor had said about how “It struck me as the basis for a potentially great 3-D puzzle”, that’s when I got suspicious of the puzzle. I went back to it, and
remembered that I had noticed when I first inspected the puzzle that some of the base towers were different shapes even though they were for the same height tower pieces. I then tried every tower
of the same height on every base, and lo and behold, found the two pieces that fit over the two odd bases (mine are orange and yellow, btw). I then cursed the inventor for playing such an evil
trick on us poor unsuspecting puzzlers and then worked backward like it was an ordinary logic puzzle until I had the solution.
It’s a great puzzle, and your program is awesome, as well as your handling of the solution.
My hat’s off to the inventor, for being devious enough to unleash this on the world. It should come with a warning though, that states that this puzzle is impossible.
15. @Bruce: thanks for your code! I never heard about FOID before and unfortunately can’t find any information about it.
@Benjam: thanks for the kind words and congratulations on solving the puzzle on your own!
16. I’ve been playing with your program, and have gotten it to work in Python 3.1.3 with a few minor edits. It still returns the solution you got, so I’m assuming it’s still working properly (python
is not my native language)
Version that works in 3.1.3:
I am going to continue to edit the program to see if I can force it to find all possible solutions to the puzzle.
17. Thanks Benjam for the 3.x version. I’m currently running 2.x, but I’ll look into it and try to make it compatible with both versions.
18. So after a late night and writing my own program to solve the cube using a different method, I have found that the puzzle has 4 unique solutions, with any other solution just being a color swap
of the original 4 solutions.
My program is here: http://36cube.pastebin.com/qBYVWxxY
I first discovered that there are only 2 solutions for the ‘special’ colors (for me: Yellow and Orange), and funnily enough, they both have the same footprint, which makes solving the rest a
little easier: just solve for one set, and multiply end value by 2.
My next color was Red, and it has 8 unique solutions. I sorted them and gave them each a letter A-H.
I then solved Green for each of the 8 Reds and found that although it has 3 solutions for each Red, it only has 8 unique solutions, with each of them being identical to the original Red 8, so I
also labeled them with A-H.
This continued with the Blue and Purple colors, each having 8 unique solutions, and after finding the combinations that work together, I came up with 2 unique solutions: ACFH and BDEG. (see my
code for what those letters actually mean)
And when multiplied by the original 2 Yellow and Orange solutions, that gives us a grand total of 4 solutions.
If anybody finds any other solutions that are not in my set of 4, please let me know.
I had fun doing this (lost some sleep, but who doesn’t when doing this kind of stuff), thanks for the inspiration.
20. 1 jan 2011. My turn to try this.
Indeed it is soon clear that the amount of possibilities seems overwhelming.
Ignoring the mechanical irregularity and starting with any colour at any point, there are (as it turns out) 4 different ways to solve the
first colour. So there are 6×4 different ways to solve the first colour. Though I made a brave attempt to find these manually I missed 3 of them, so I wrote a program in Matlab to find the remaining ones. Putting these combinations in partly transparent
matrices, it is then possible to graphically prove there is no solution for 6 overlapping matrices.
These 24 1-colour solutions are with tower height “5″ in the upper left corner:
Positions Tower Height
’123564′ ’513624′
’142356′ ’526431′
’146532′ ’524613′
’153246′ ’543261′
’215643′ ’431562′
’235461′ ’451326′
’261435′ ’462315′
’264153′ ’465132′
’321654′ ’612534′
’345612′ ’621543′
’351426′ ’642351′
’354162′ ’645123′
’412365′ ’136425′
’416523′ ’134652′
’436251′ ’154236′
’463215′ ’163245′
’512634′ ’236514′
’532416′ ’256341′
’536142′ ’254163′
’563124′ ’263154′
’621345′ ’312465′
’624513′ ’315642′
’645321′ ’321456′
’654231′ ’345216′
More programming showed:
24 Solutions with 1 colour
120 Solutions with 2 colours
160 Solutions with 3 colours
30 Solutions with 4 colours
0 Solutions with 5 colours
0 Solutions with 6 colours
Though at the very beginning I considered there might be a mechanical trick at hand, because the shapes of the base look unnecessarily complex and irregular, I decided to ignore that until proven
that there is no solution without it. Proof is possible without computer assistance, so no reason yet to reject this approach as part of the game. Though it takes some patience, accurate and
patient mortals are able to do this. Cheating with a computer program or googling Euler is faster, fair and more convincing. Even better would have been some confirmation that a mechanical trick is required to solve this game. I remember looking for and interpreting the “complete set of rules of the game” again and again. What would “genius” mean? Even if you start looking for a mechanical secret, it is hard to see because of the tiny details and the fact that all 36 base positions are in the same gray. Even if you know (discover by trial) that there are two special locations in the specimen you happened to buy, there is no affirmation that (and how) this is required to solve the puzzle. If you start with the two special towers in the right position, does that allow you to solve the puzzle without still sequentially trying too many combinations?
It leaves me struggling with issues like: Is this fun?/Which hints could be added without spoiling the concept ?/ What percentage of the buyers will reach the ‘aha’ moment ?/How to cope with the
disappointment of those who don’t ?/How can it be made more “fair”?/Can this be a comercial success ?/How should this thing be named: a game, a puzzle, a diabolic thing, a toy ?/
It is certainly not a game, as the rules are not sufficiently explicit. The rules are only revealed after the solution has been found. I have another mechanical trick in mind involving a saw,
that just as well should be part of the set of possible solutions.
Apparently my overall approach is way too serious. In an uncomfortable manner, it does not fit in my way of thinking. This experience reveals a rainbow of possibilities challenging one to try
everything completely different in the future.
I did have the promised ‘aha’ moment. It was instructive, but I’m still not sure I found it that rewarding. I perceive no real solution, but an intriguing quest for the holy grail. This is an
interesting piece of engineering, but it does not belong in a toy store.
21. After further tinkering with my program, I have taught it to output the 4 unique solutions to the puzzle.
Here is my script (based on and inspired by the OP script but with heavy modifications): http://36cube.pastebin.com/KiJMjzTa
Run that on your box, and you end up with this…
[0] => ABCDEFFCDEBACFEBADEABFDCBDFACEDEACFB
[1] => ABCDEFFCBEDACFEBADEADFBCBDFACEDEACFB
[2] => ABCDEFCADFBEEFABCDFEBCDABDEAFCDCFEAB
[3] => ABCDEFCABFDEEFABCDFEDCBABDEAFCDCFEAB
I’ll leave it as an exercise to the reader to decipher what each of the letters represent, but I’ll give you a hint, the color/letter combinations are different for each solution.
22. Thanks Paul and Benjam for your valuable contributions. I feel like I should have made this a Wiki instead of a blog post.
@Benjam: out of curiosity, what was the reason you rewrote it in PHP?
23. Pingback: iohelix » 36 Cube
24. I rewrote it in PHP because Python is not my native language. I could have written it in Python, but it would have taken me quite a bit longer just to convert my thoughts into Python syntax.
Nothing more.
I was actually in the process of converting my program into Ruby (another language I’m trying to learn) thinking that it would have better luck at actually completing a full run when I finally
got my PHP program to behave a little better (not throw “out of memory” errors, and run in less than the time it took to cook a lasagna).
26. this game sucks, it has a grade for that: … 10!!! it had cost me lots of money and I can’t find out the solution! but thank you guys! you helped me a lot
27. Daniel,
Hello. Thanks for the site. You write “orange tower of height 5 has to go to position (1,2) and the red tower of height 6 to (3,2) in my coordinate system.” But in your picture http://
daniel.hepper.net/blog/wp-content/36cube_complete_solution-300×225.jpg, the position (1,2) is occupied by an yellow tower and position (3,2) is occupied by orange tower.
Did you mean to say, “yellow tower of height 5 at position (1,2) and orange tower of height 6 at (3,2)”.
Am I missing something?
Thanks for your help,
28. Thankyou, Daniel!
I finally solved this monster! Thanks so much for your site and the hints. I confess I ended up viewing all of the hints to work it out, but the colours on mine were different, and I just played
around with my 5 and 6 high pieces till I found the two irregularities. From there it was a lot easier (or perhaps I was lucky)!
For the life of me I couldn’t find the “coordinate system” you mentioned, so “3,2″ etc didn’t mean much to me. I later realised it is referred to in the solution picture, but of course I didn’t
want to look at that! I’m sure it’s clear in the scripting, too, if you understand those things, but I couldn’t figure it out. Just FYI.
For others like me, the coordinate system seems to be as below. The corner that holds a 5-high tower is at (0,0).
(0,0) (0,1) (0,2) (0,3) (0,4) (0,5)
(1,0) (1,1) (1,2) (1,3) (1,4) (1,5)
(2,0) (2,1) (2,2) (2,3) (2,4) (2,5)
(3,0) (3,1) (3,2) (3,3) (3,4) (3,5)
(4,0) (4,1) (4,2) (4,3) (4,4) (4,5)
(5,0) (5,1) (5,2) (5,3) (5,4) (5,5)
Thanks again and feel free to remove parts of my comment if needed!
29. Hey my bro, first of all i thank to u to tell the solution of colur puzzle but i don’t get the total solution plz…..tell me an easy solution for it. Plz plz plz plz plz……
30. I’m an avid puzzler but not one who would seek out a math or computer solution. I just solve by logic and patience. My grandkids gave me this Christmas 2009 and I’ve been working on it off and on
ever since. Frustrating? Yep! But I only now resorted to Googling for a solution. I’ve had 34 right a jillion times. The mechanical “trick” pieces both annoy me as “unfair” and make me feel a bit
better about my inability to solve it. Anyway, it makes a great toy for a toddler to play with her Grammie. I’ve probably been helping my little genius create new neural pathways while I’ve been
pushing myself closer to insanity.
31. I enjoyed this puzzle a lot, but I could never get past the 4th row and still make the puzzle work (and I tried for 2 years!). I was amazed that ThinkFun would put in a trick like that with the
yellow 5 block fitting on a 6 block spot and the orange 6 block fitting on a 5 space. I liked the trick though and now I see why I could never figure out the puzzle before. Thanks for this page!
32. I think your hint number four is incorrect. “Hint #4 (show): The two special towers are the orange one of heigh 5 and the red one of height 6.” The Red-6 and Orange-5 towers are identical to
their standard counterparts. I found it to be the Orange-6 and Yellow-5 that deviate slightly from the norm.
Based upon my inspection there are 4 abnormalities in the board/towers:
Base-1.2 is fatter than other same-height bases (compare with 3.4 or 0.5)
Base-3.2 is thinner at its bottom than other same-height bases (compare with 1.4 or 2.1)
Yellow-5 has an extra thick vertical line along its inside
Orange-6 has no/minuscule vertical line along its inside.
This allows these two pieces to fit on either of those bases (in addition to the other “correct” height bases).
33. Jeff, you are absolutely correct! The pictures and the program output were correct, but my hints were wrong for about two years now, although Chell pointed it out a year ago
I’ve updated the post, thanks for letting me know!
34. Hah. I encountered this puzzle recently and I, too, wrote a python script to try to solve this….but turns out the puzzle is just a silly trick ;).
Here’s what I came up with. The simple test works so I assume the code would solve it if it wasn’t impossible.
The code had a bug, Spec asked me to update the link: http://pastebin.com/TM4efCJ6 Thanks, Spec!
- Daniel
35. My 10-year-old solved it in less than an hour, and the second time, in less than 15 minutes. Go figure!
36. I think it’s a shame that there is a trick to find the answer. I spent more than 8 hours trying to solve it, finding every possible case.
I came to find that there was a problem with one of the -supposed- 5 height slots, but since I had a green 6 height tower with a fabrication defect, I thought that was just another problem of fabrication.
After all that time spent on it, I had enough data written down on paper to prove there was no solution, so I googled it and found this blog post.
I understood that this particularity with the 5 height slot wasn't a mistake, so I retried it, and, guess what? Another problem of fabrication caused my yellow 5 height tower not to fit in the 6 height spot. So my only solution was to put a piece of paper in the yellow tower so that it would fit in the slot. But of course, there was no chance for me to find that out without a
solution …
So, yeah, next time, thinkfun should make something without tricks, especially if the trick may not work.
Besides, if it can be solved with computer assistance, what’s the deal with it? It’s a better experience for a programer to make a program to solve it than to ask a monkey brained guy to try
every solution.
37. This made it very easy for me to complete the puzzle, and I was very happy to finish it so easily.
38. Thank you so much! I wrangled with this problem for quite some time (getting up to 30 towers) before I switched to writing the code for it. But the computer said that there were no solutions.
Without your hints, I would have never realized that those two fit into places that I didn’t think they could. Thanks!
39. I have been so stubborn about finding a solution to this problem. Yeah, of course I got to 34 a billion times, but the more times I attempted it, the more quickly I realized when my attempt was
going to fall short. I have sat down and drawn it on paper and identified all the 1,2s that are across from 2,1s etc. and tried to solve it by resolving all of those conflicts first. As the 3rd
anniversary of my receiving it approached (Christmas 2009), I thought of a new and obsessive solution strategy. Yesterday I started with a fixed 6 tower and found every possible arrangement of a
single color from that fixed 6 tower. Then I went through and tried every single combination possible.
I laugh in the face at you who tried for 8 hours and then went to Google*. I have had this puzzle for 3 years with 5 different homes, 9 different roommates (one of whom’s now-husband solved it
within 15 minutes when I was not home in 2009**). I have abstained from looking for any clues or programs (it just seemed less like cheating if I ran the “programs” by hand). I have explained the
game close to fifty times to friends who have visited.
Finally, after exhausting every single combination yesterday and then hoping to stumble upon it somehow this morning, I raised my white flag. It took me three hints. The thing is, I’ve always
thought that the 5 tower spot in the middle looked different, but rationalized that it was because it was on the inside. Even if I had accidentally put the trick tower on the correct trick base,
I think I would have assumed it was a mistake and continued looking for a solution that didn’t exist. I just believed the rules were solid. I believed that this was like my rubik’s cubes or
While satisfied that I am not stupid, I don’t know how I feel about the solution sitting in front of me. I’m pretty sure I’m disappointed. It’s not a puzzle, it’s a trick. The parameters were not
what they pushed me to believe. I’ve got a traditional puzzle on my dining room table and I sure hope that the solution there doesn’t include pieces that overlap or require cutting or something.
I think I would have found it funny if I had gotten frustrated and Googled after a week or so of trying. Now I think I feel cheated. Thanks for the clues and for a place to vent my frustration
like a crazy person who spent a ridiculous amount of time on a game with a punchline!
*I mean it wasn’t a constant obsession or anything, but I probably messed with it once or twice a month for 3 years with maybe 3 days of long hours of work.
**He recently told me in reference to the game: “just remember the game-makers are not your friend. they are evil” …fyi I didn’t get it!
40. We’re a gaggle of volunteers and starting a brand new scheme in our community. Your website offered us with useful information to work on. You have performed an impressive activity and our whole
neighborhood can be thankful to you.
41. I still have a question: is this game the same as the block game, the one with 6 colours on the 6 faces of a cube, where one should place the same colors together? It's quite challenging to me.
42. I am thankful for the puzzle, if only because it helped me (again) overcome my aversion to (some say fear of) recursion programming. After a rather futile attempt with a random number generator,
and many iterations of optimization (I enjoyed that process very much) I ended up with a very simple recursive algorithm that allowed me to watch the computer walk through every possible
combination (I used VBA and Excel, mainly because of the visualization). I did notice the strange pairs of 2 and 3, 1 and 4, and 5 and 6 that made many solutions stop at 32 towers, but I thought
it was just not yet right. (Only much later did I think to search on the interwebs, finding this blog as well as the Wikipedia site about Euler’s 36 officers.)
I looked at the matrix through the eyes of a Sudoku user, and found that there were only five areas of sorts that had the numbers from 1 to 6 in a 2×3 grouping, and that these areas did not have
the same orientation. Would be fun to devise a sort of four dimensional Sudoku…
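As a concrete footnote to the Euler's-36-officers connection mentioned above: a solvable 36 Cube would amount to a Graeco-Latin square of order 6, and such squares exist for every order except 2 and 6 (Euler's conjecture for order 6 was settled exhaustively by Tarry). The Python sketch below is my own illustration, not any commenter's actual program; it backtracks over (number, color) pairs. Order 6 is of course infeasible for this naive search, but orders 2 and 3 already show the dichotomy.

```python
from itertools import product

def graeco_latin(n):
    """Search for an n x n Graeco-Latin square by backtracking.

    Each cell gets a pair (latin, greek); each coordinate must form a
    Latin square on its own, and all n*n pairs must be distinct.
    Returns a dict {(row, col): (latin, greek)} or None if none exists.
    """
    cells = [(r, c) for r in range(n) for c in range(n)]
    grid = {}

    def ok(r, c, a, b):
        if (a, b) in grid.values():                 # pairs must be distinct
            return False
        for (rr, cc), (aa, bb) in grid.items():
            if rr == r and (aa == a or bb == b):    # row conflict
                return False
            if cc == c and (aa == a or bb == b):    # column conflict
                return False
        return True

    def solve(i):
        if i == len(cells):
            return True
        r, c = cells[i]
        for a, b in product(range(n), repeat=2):
            if ok(r, c, a, b):
                grid[(r, c)] = (a, b)
                if solve(i + 1):
                    return True
                del grid[(r, c)]
        return False

    return grid if solve(0) else None

# Orders 3, 4, 5 admit Graeco-Latin squares; orders 2 and 6 do not.
assert graeco_latin(2) is None
assert graeco_latin(3) is not None
```

This is why the unmodified puzzle has no solution: with the stated rules it is exactly the order-6 case, which Tarry proved impossible; only the two "trick" pieces make the physical puzzle solvable.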
43. OK, I got 32 the first time I tried this, then I got 34 the next ten times I did it, and something started to bug me about it. I didn’t take the trouble to write a program, but I did spend a few
hours with a pencil and a calculator, and realized that something was indeed amiss.
I have to say I found the “solution” to this puzzle disappointing. In effect the puzzle is deceptive, presenting as a problem in combinatorial logic, but in fact relying on the more or less
chance discovery of an unspecified constructional anomaly. Operating from the given instructions the puzzle is insoluble; nothing in the instructions so much as hints that there is a mechanical
“secret” to the puzzle. The only parameters presented as important are the colors, the heights of the towers, and the desired outcome — why would anyone go looking at the /insides/ of the towers?
In effect, it’s a little like being presented with an impossible entanglement puzzle (also called “nail” puzzles), and then being told that the “solution” is to hacksaw apart a particular bar in
the puzzle and re-weld it after the puzzle is disassembled. Or like playing a game of chess and — just as you are about to checkmate your opponent — having him remove an outer shell from his
bishop, revealing that is was really a disguised queen. Or like solving a crossword puzzle with acceptable answers for all the English-language clues, only to find that information has been
withheld which specifies that all your answers were supposed to have been given in French.
I could think of a lot of ways to make conventional puzzles far more difficult, or even impossible, with similar ruses, which nonetheless violate the implied premise of the puzzle. As an exercise
in a sort of scaled-down Sherlock Holmesian kind of detection, I suppose the 36-cube has a place. But really, it fails as a straight “logic” puzzle.
Somehow Devil’s Dice and Rubik’s Cubes manage to be as challenging without relying on hidden mechanical gimmicks.
44. Urg. I had a fun time coding this puzzle up, but I was stumped when my program said it was unsolvable. It’s sitting on my coworker’s desk, so I was only working with the model of it I had in my
head. I understand some of the wording on this puzzle instructions now, but I really think it should have said that this puzzle “may not be what it seems,” or something like that. Maybe that’s
just me being mad at myself for not thinking of it.
45. I can confirm what other people have written: There are 0 solutions if you don’t swap the 2 trick pieces (yellow and orange). If you do swap them, there are 4 distinct solutions, times 4!
permutations of the remaining 4 colors, giving 4 * 4! = 96 solutions.
This is based on writing a short and straightforward program in ECLiPSe (http://eclipseclp.org), which ran instantly. If you’re going to use a computer, why not use a language that has constraint
satisfaction built into it? You merely have to state the constraints and you’re done. The ic_global constraint library in ECLiPSe includes powerful methods — more powerful than the ones people on
this site have coded up — for ruling out values based on “alldifferent” and “occurrences” constraints. I teach a computer science course on this stuff.
This infuriatingly devious puzzle is a worthy successor to the impossible 14-15 puzzle that swept across and beyond America in the 1880s. http://www2.research.att.com/~aarcher/Research/
46. Thanks so much, without hints #4 and 5 I could have never found the answer
47. Which programming language is that?
48. I don’t think there are any “special” pieces… at least not in the set I got. I don’t think there are any trick towers, trick bases, or manufacturing anomalies. I’ll call the manufacturer and ask
If you look inside the “yellow 5″ piece in my set, the inside lines look a little thicker than the lines inside the other pieces of the same length. BUT, the yellow 5 piece fits fine on *all six*
of the bases of the right height. The orange 5 also fits on all six of the bases of the right height. Has anyone else verified that the yellow 5 piece is not supposed to fit on all six bases? ALL
36 of the bases in my set have deep grooves.
49. Amazing site. A lot of beneficial information here. I am sending it to some buddies ans in addition sharing in delicious. And clearly, thank you to your effort! | {"url":"http://daniel.hepper.net/blog/2010/01/how-to-solve-the-36-cube-puzzle/","timestamp":"2014-04-21T02:13:27Z","content_type":null,"content_length":"140402","record_id":"<urn:uuid:1c4a3aaf-9fba-4ffc-88f4-24ff13cb4c5c>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00229-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computational complexity of topological K-theory
I am a novice with K-theory trying to understand what is and what is not possible.
Given a finite simplicial complex $X$, there are of course elementary ways to quickly compute the cohomology of $X$ with field coefficients. However, it is known that computing the rational homotopy
groups of $X$ is at least $NP$-hard (in fact, at least $\#P$-hard), simply because it is possible to reduce NP-hard problems to the computation of some rational homotopy groups.
For any of: (1) Real K-theory, (2) Complex K-theory, (3) p-adically completed K-theory, is there an algorithm to compute $K^0$ of a finite simplicial complex? Is this algorithm polynomial-time?
If there is no known algorithm, is there at least evidence that any theoretical algorithm, should it exist, must be at least $NP$-hard? In other words, is there any way to reduce an $NP$-hard problem
to the calculation of some $K$ group?
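(An aside on the "elementary ways" mentioned in the question: with field coefficients, the Betti numbers reduce to ranks of boundary matrices, computable in polynomial time by Gaussian elimination. The sketch below works over $\mathbb{F}_2$; the encoding and function names are my own choices for illustration, not a reference implementation.)

```python
from itertools import combinations

def betti_mod2(maximal_simplices):
    """Betti numbers over F_2 of the complex generated by the maximal simplices."""
    # Close the generating set under taking faces.
    faces = set()
    for s in maximal_simplices:
        s = tuple(sorted(s))
        for k in range(1, len(s) + 1):
            faces.update(combinations(s, k))
    by_dim = {}
    for f in faces:
        by_dim.setdefault(len(f) - 1, []).append(f)
    for d in by_dim:
        by_dim[d].sort()

    def rank_gf2(rows):
        # Rows are bitmask integers; Gaussian elimination over F_2.
        pivots = {}
        for r in rows:
            while r:
                h = r.bit_length() - 1
                if h in pivots:
                    r ^= pivots[h]
                else:
                    pivots[h] = r
                    break
        return len(pivots)

    top = max(by_dim)
    boundary_rank = {0: 0}                      # the boundary map out of dim 0 is zero
    for d in range(1, top + 1):
        index = {f: i for i, f in enumerate(by_dim[d - 1])}
        rows = []
        for s in by_dim[d]:
            row = 0
            for v in s:                          # faces of s: drop one vertex
                row ^= 1 << index[tuple(x for x in s if x != v)]
            rows.append(row)
        boundary_rank[d] = rank_gf2(rows)
    boundary_rank[top + 1] = 0

    # b_d = dim C_d - rank(d_d) - rank(d_{d+1})
    return [len(by_dim[d]) - boundary_rank[d] - boundary_rank[d + 1]
            for d in range(top + 1)]

# Hollow triangle (a circle): b_0 = 1, b_1 = 1; a filled triangle is contractible.
assert betti_mod2([(0, 1), (1, 2), (0, 2)]) == [1, 1]
assert betti_mod2([(0, 1, 2)]) == [1, 0, 0]
```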
kt.k-theory-homology at.algebraic-topology computational-complexity
Are you familiar with the Atiyah-Hirzebruch SS? – Sean Tilson Mar 27 '13 at 20:30
Yes, but only as a black-box. I do not know an algorithm to compute differentials or solve extension problems. Is this possible? – Jeremy Hahn Mar 27 '13 at 20:37
Probably not. The first differential is easy (for KU), it's determined by a cohomology operation, but I think the higher ones are algorithmically intractable and would describe higher-order
cohomology operations. Maybe one should try to describe vector bundles on finite simplicial complexes combinatorially instead? – Tilman Mar 27 '13 at 21:42
I have thought about this occasionally. There is some reason to think that there might be a tractable combinatorial construction of a minimal Kan complex of homotopy type $\Omega^\infty(KU/2)$.
From that it should be possible to determine the computational complexity of $(KU/2)^*(X)$. Odd primes might be possible as well. I have various kinds of calculations related to this, but no real
conclusion. – Neil Strickland Mar 28 '13 at 7:38
Dominique Arlettaz has a result regarding the order of the differentials in the Atiyah-Hirzebruch spectral sequence, see "The order of the differentials in the Atiyah-Hirzebruch spectral
sequence". I would imagine that one might be able to make certain conclusions if you assumed enough about the torsion of the homology of the space. However, on rereading your question I realize
this is not so much what you were interested in. Thought I'd mention the paper of Arlettaz just in case though. – Sean Tilson Mar 28 '13 at 23:16
Sihem Mesnager
Publications (32) · 17.28 total impact
ABSTRACT: In this paper, the relation between binomial Niho bent functions discovered by Dobbertin et al. and o-polynomials that give rise to the Subiaco and Adelaide classes of hyperovals is
found. This allows one to expand the class of bent functions that corresponds to Subiaco hyperovals in the case when $m \equiv 2 \pmod 4$.
IACR Cryptology ePrint Archive. 01/2012; 2012:20.
IACR Cryptology ePrint Archive. 01/2012; 2012:33.
International Symposium on Artificial Intelligence and Mathematics (ISAIM 2012), Fort Lauderdale, Florida, USA, January 9-11, 2012; 01/2012
ABSTRACT: We show that any Boolean function, in even dimension, equal to the sum of a Boolean function $g$ which is constant on each element of a spread and of a Boolean function $h$ whose
restrictions to these elements are all linear, is semibent if and only if $g$ and $h$ are both bent. We deduce a large number of infinite classes of semibent functions in explicit bivariate
(respectively, univariate) polynomial form.
IEEE Transactions on Information Theory 01/2012; 58(5):3287-3292. · 2.62 Impact Factor
ABSTRACT: This paper consists of two main contributions. First, the Niho bent function consisting of $2^r$ exponents (discovered by Leander and Kholosha) is studied. The dual of the function is
found and it is shown that this new bent function is not of the Niho type. Second, all known univariate representations of Niho bent functions are analyzed for their relation to the completed
Maiorana-McFarland class M. In particular, it is proven that two families do not belong to the completed class M. The latter result gives a positive answer to an open problem whether the class H
of bent functions introduced by Dillon in his thesis of 1974 differs from the completed class M.
IEEE Transactions on Information Theory 01/2012; 58(11):6979-6985. · 2.62 Impact Factor
ABSTRACT: Kloosterman sums have recently become the focus of much research, most notably due to their applications in cryptography and coding theory. In this paper, we extensively investigate the
link between the semibentness property of functions in univariate forms obtained via Dillon and Niho functions and Kloosterman sums. In particular, we show that zeros and the value four of binary
Kloosterman sums give rise to semibent functions in even dimension with maximum degree. Moreover, we study the semibentness property of functions in polynomial forms with multiple trace terms and
exhibit criteria involving Dickson polynomials.
IEEE Transactions on Information Theory 12/2011; · 2.62 Impact Factor
ABSTRACT: Kloosterman sums have recently become the focus of much research, most notably due to their applications in cryptography and their relations to coding theory. Very recently Mesnager has
shown that the value 4 of binary Kloosterman sums gives rise to several infinite classes of bent functions, hyper-bent functions and semi-bent functions in even dimension. In this paper we
analyze the different strategies used to find zeros of binary Kloosterman sums to develop and implement an algorithm to find the value 4 of such sums. We then present experimental results showing
that the value 4 of binary Kloosterman sums gives rise to bent functions for small dimensions, a case with no mathematical solution so far. KeywordsKloosterman sums–elliptic curves–Boolean
functions–Walsh-Hadamard transform–maximum nonlinearity–bent functions–hyper-bent functions–semi-bent functions
07/2011: pages 61-78;
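The binary Kloosterman sums recurring in these abstracts are directly computable by brute force for small field sizes. The sketch below is my own illustration (the papers above use elliptic-curve methods to find values efficiently); it assumes the field $\mathbb{F}_{2^6}$ built from the primitive polynomial $x^6 + x + 1$ and the convention $K_m(a) = \sum_{x \in \mathbb{F}_{2^m}} (-1)^{\mathrm{Tr}(x^{2^m-2} + ax)}$, in which the $x = 0$ term contributes $+1$.

```python
M = 6                        # work in F_{2^6}
POLY = (1 << 6) | 0b11       # x^6 + x + 1, a primitive polynomial over F_2

def gf_mul(a, b):
    """Multiply in F_{2^M}: carry-less multiplication reduced mod POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << M):
            a ^= POLY
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def trace(x):
    """Absolute trace Tr(x) = x + x^2 + ... + x^(2^(M-1)), valued in {0, 1}."""
    t, acc = x, x
    for _ in range(M - 1):
        t = gf_mul(t, t)
        acc ^= t
    return acc

def kloosterman(a):
    """K_M(a), with the x = 0 term taken as (-1)^Tr(0) = 1."""
    total = 1
    for x in range(1, 1 << M):
        inv = gf_pow(x, (1 << M) - 2)          # x^(2^M - 2) = 1/x for x != 0
        total += -1 if trace(inv ^ gf_mul(a, x)) else 1   # field addition is XOR
    return total

# Lachaud-Wolfmann: the values are the multiples of 4 in [-2^(m/2+1), 2^(m/2+1)].
values = [kloosterman(a) for a in range(1 << M)]
assert kloosterman(0) == 0
assert all(v % 4 == 0 and abs(v) <= 2 ** (M // 2 + 1) for v in values)
```

With this in hand one can list the $a$ for which $K_m(a)$ is 0 or 4, the values the abstracts tie to hyper-bentness and semibentness.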
ABSTRACT: We extensively investigate the link between the semibentness property of some Boolean functions in polynomial forms and Kloosterman sums.
IACR Cryptology ePrint Archive. 01/2011; 2011:364.
ABSTRACT: Bent functions are maximally nonlinear Boolean functions with an even number of variables. They were introduced by Rothaus in 1976. For their own sake as interesting combinatorial
objects, but also because of their relations to coding theory (Reed-Muller codes) and applications in cryptography (design of stream ciphers), they have attracted a lot of research, especially in
the last 15 years. The class of bent functions contains a subclass of functions, introduced by Youssef and Gong in 2001, the so-called hyper-bent functions, whose properties are still stronger
and whose elements are still rarer than bent functions. Bent and hyper-bent functions are not classified. A complete classification of these functions is elusive and looks hopeless. So, it is
important to design constructions in order to know as many of (hyper)-bent functions as possible. This paper is devoted to the constructions of bent and hyper-bent Boolean functions in polynomial
forms. We survey and present an overview of the constructions discovered recently. We extensively investigate the link between the bentness property of such functions and some exponential sums
(involving Dickson polynomials) and give some conjectures that lead to constructions of new hyper-bent functions. Index Terms—Bent functions, Boolean function, covering ra- dius, cubic sums,
Dickson polynomials, hyper-bent functions, Kloosterman sums, maximum nonlinearity, Reed-Muller codes, Walsh-Hadamard transformation.
IEEE Transactions on Information Theory 01/2011; 57:5996-6009. · 2.62 Impact Factor
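The bentness property at the center of these abstracts is easy to test exhaustively in small dimension: a Boolean function on $n$ variables ($n$ even) is bent exactly when every Walsh-Hadamard coefficient has absolute value $2^{n/2}$. The sketch below is my own toy illustration using the basic quadratic bent function $x_1x_2 \oplus x_3x_4$, not one of the polynomial-form classes constructed in the papers.

```python
def walsh_spectrum(f, n):
    """All Walsh-Hadamard coefficients W_f(w) = sum_x (-1)^(f(x) + <w, x>)."""
    parity = lambda v: bin(v).count("1") & 1
    return [sum(-1 if f(x) ^ parity(w & x) else 1 for x in range(1 << n))
            for w in range(1 << n)]

def is_bent(f, n):
    assert n % 2 == 0, "bent functions exist only for even n"
    target = 1 << (n // 2)
    return all(abs(c) == target for c in walsh_spectrum(f, n))

# f(x1, x2, x3, x4) = x1 x2 + x3 x4: the basic Maiorana-McFarland bent function.
f = lambda x: ((x & 1) & (x >> 1 & 1)) ^ ((x >> 2 & 1) & (x >> 3 & 1))
assert is_bent(f, 4)
assert not is_bent(lambda x: x & 1, 4)   # a linear function is never bent
```

The exhaustive cost grows as $4^n$, which is exactly why the papers work with algebraic criteria (Kloosterman sums, Dickson polynomials) instead of spectra.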
ABSTRACT: Bent functions are maximally nonlinear Boolean functions and exist only for functions with an even number of inputs. This paper is a contribution to the construction of bent functions over $\mathbb{F}_{2^n}$ ($n = 2m$) having the form $f(x) = \mathrm{tr}_{o(s_1)}(a x^{s_1}) + \mathrm{tr}_{o(s_2)}(b x^{s_2})$, where $o(s_i)$ denotes the cardinality of the cyclotomic class of 2 modulo $2^n - 1$ which contains $s_i$, and whose coefficients $a$ and $b$ are, respectively, in $\mathbb{F}_{2^{o(s_1)}}$ and $\mathbb{F}_{2^{o(s_2)}}$. Many constructions of monomial bent functions are presented in the literature, but very few are known even in the binomial case. We prove that the exponents $s_1 = 2^m - 1$ and $s_2 = \frac{2^n-1}{3}$, where $a \in \mathbb{F}_{2^n}$ ($a \neq 0$) and $b \in \mathbb{F}_4$, provide a construction of bent functions over $\mathbb{F}_{2^n}$ with optimum algebraic degree. For $m$ odd, we give an explicit characterization of the bentness of these functions, in terms of Kloosterman sums. We generalize the result for functions whose exponent $s_1$ is of the form $r(2^m - 1)$ where $r$ is co-prime with $2^m + 1$. The corresponding bent functions are also hyper-bent. For $m$ even, we give a necessary condition of bentness in terms of these Kloosterman sums.
Designs Codes and Cryptography 01/2011; 59:265-279. · 0.78 Impact Factor
IACR Cryptology ePrint Archive. 01/2011; 2011:373.
Cryptography and Coding - 13th IMA International Conference, IMACC 2011, Oxford, UK, December 12-15, 2011. Proceedings; 01/2011
ABSTRACT: Computed is the dual of the Niho bent function consisting of $2^r$ exponents that was found by Leander and Kholosha. The algebraic degree of the dual is calculated and it is shown that
this new bent function is not of the Niho type. This note is a follow-up of the recent paper by Carlet and Mesnager.
ABSTRACT: It is a difficult challenge to find Boolean functions used in stream ciphers achieving all of the necessary criteria, and the research of such functions has taken a significant delay with respect to cryptanalyses. Very recently, an infinite class of Boolean functions has been proposed by Tu and Deng having many good cryptographic properties under the assumption that the following combinatorial conjecture about binary strings is true:

Conjecture 0.1. Let $S_{t,k}$ be the following set: $S_{t,k} = \{(a,b) \in (\mathbb{Z}/(2^k-1)\mathbb{Z})^2 \mid a + b = t \text{ and } w(a) + w(b) < k\}$. Then $|S_{t,k}| \leq 2^{k-1}$.

The main contribution of the present paper is the reformulation of the problem in terms of carries, which gives more insight into it than simple counting arguments. Successful applications of our tools include explicit formulas of $|S_{t,k}|$ for numbers whose binary expansion is made of one block, a proof that the conjecture is asymptotically true, and a proof that a family of numbers (whose binary expansion has a high number of 1s and isolated 0s) reaches the bound of the conjecture. We also conjecture that the numbers in that family are the only ones reaching the bound.
09/2010: pages 346-358;
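Conjecture 0.1 above is cheap to verify exhaustively for small $k$ (my own brute-force sketch; the paper's contribution is the carry-based reformulation, not this enumeration). Here $w(\cdot)$ is the Hamming weight of the representative in $[0, 2^k - 2]$:

```python
def tu_deng_count(t, k):
    """|S_{t,k}|: pairs (a, b) in Z/(2^k - 1)Z with a + b = t and w(a) + w(b) < k."""
    m = (1 << k) - 1
    w = lambda x: bin(x).count("1")
    # b is determined by a, so just scan a over the ring.
    return sum(1 for a in range(m) if w(a) + w((t - a) % m) < k)

# Exhaustive check of the conjectured bound |S_{t,k}| <= 2^(k-1) for small k.
assert all(tu_deng_count(t, k) <= 1 << (k - 1)
           for k in range(2, 9) for t in range((1 << k) - 1))
```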
IACR Cryptology ePrint Archive. 01/2010; 2010:486.
ABSTRACT: Bent functions are maximally nonlinear Boolean functions with an even number of variables. These combinatorial objects, with fascinating properties, are rare. The class of bent functions contains a subclass of functions, the so-called hyper-bent functions, whose properties are still stronger and whose elements are still rarer. In fact, hyper-bent functions seem still more difficult to generate at random than bent functions, and many problems related to the class of hyper-bent functions remain open. (Hyper-)bent functions are not classified. A complete classification of these functions is elusive and looks hopeless. In this paper, we contribute to the knowledge of the class of hyper-bent functions on finite fields $\mathbb{F}_{2^n}$ (where $n$ is even) by studying a subclass $\mathfrak{F}_n$ of the so-called Partial Spreads class $PS^-$ (such functions are not yet classified, even in the monomial case). Functions of $\mathfrak{F}_n$ have a general form with multiple trace terms. We describe the hyper-bent functions of $\mathfrak{F}_n$ and we show that the bentness of those functions is related to Dickson polynomials. In particular, the link between the Dillon monomial hyper-bent functions of $\mathfrak{F}_n$ and the zeros of some Kloosterman sums has been generalized to a link between hyper-bent functions of $\mathfrak{F}_n$ and some exponential sums where Dickson polynomials are involved. Moreover, we provide a possibly new infinite family of hyper-bent functions. Our study extends recent works of the author and is a complement of a recent work of Charpin and Gong on this topic.
Arithmetic of Finite Fields, Third International Workshop, WAIFI 2010, Istanbul, Turkey, June 27-30, 2010. Proceedings; 01/2010
International Journal of Information and Coding Theory 01/2010; 1(2).
ABSTRACT: One of the classes of bent Boolean functions introduced by John Dillon in his thesis is family H. While this class corresponds to a nice original construction of bent functions in
bivariate form, Dillon could exhibit in it only functions which already belonged to the well-known Maiorana–McFarland class. We first notice that H can be extended to a slightly larger class that
we denote by H. We observe that the bent functions constructed via Niho power functions, for which four examples are known due to Dobbertin et al. and to Leander and Kholosha, are the univariate
form of the functions of class H. Their restrictions to the vector spaces ωF2n/2, ω∈F2n⋆, are linear. We also characterize the bent functions whose restrictions to the ωF2n/2ʼs are affine. We
answer the open question raised by Dobbertin et al. (2006) in [11] on whether the duals of the Niho bent functions introduced in the paper are affinely equivalent to them, by explicitly
calculating the dual of one of these functions. We observe that this Niho function also belongs to the Maiorana–McFarland class, which brings us back to the problem of knowing whether H (or H) is
a subclass of the Maiorana–McFarland completed class. We then show that the condition for a function in bivariate form to belong to class H is equivalent to the fact that a polynomial directly
related to its definition is an o-polynomial (also called oval polynomial, a notion from finite geometry). Thanks to the existence in the literature of 8 classes of nonlinear o-polynomials, we
deduce a large number of new cases of bent functions in H, which are potentially affinely inequivalent to known bent functions (in particular, to Maiorana–McFarland's functions).
IACR Cryptology ePrint Archive. 01/2010; 2010:567.
• 2010–2011
□ French National Centre for Scientific Research
Lutetia Parisorum, Île-de-France, France
• 2005
□ Université de Vincennes - Paris 8
Saint-Denis, Île-de-France, France
• 2004
□ Portail des Mathématiques Jussieu / Chevaleret
Lutetia Parisorum, Île-de-France, France | {"url":"http://www.researchgate.net/researcher/74606077_S_Mesnager","timestamp":"2014-04-21T01:34:31Z","content_type":null,"content_length":"314010","record_id":"<urn:uuid:1a287df0-cac6-491a-a096-a4590194d460>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
On the pullback stability of a quotient map with respect to a closure operator
Lurdes Sousa
There are well-known characterizations of the hereditary quotient maps in the category of topological spaces, (that is, of quotient maps stable under pullback along embeddings), as well as of
universal quotient maps (that is, of quotient maps stable under pullback). These are precisely the so-called pseudo-open maps, as shown by Arhangel'skii, and the bi-quotient maps of Michael, as shown
by Day and Kelly, respectively. In this paper hereditary and stable quotient maps are characterized in the broader context given by a category equipped with a closure operator. To this end, we derive
explicit formulae and conditions for the closure in the codomain of such a quotient map in terms of the closure in its domain.
Keywords: closure operator, quotient, pullback, closed morphism, open morphism, final morphism.
2000 MSC: 18A32, 18A30, 18A20, 54C10, 54B30.
Theory and Applications of Categories, Vol. 8, 2001, No. 6, pp 100-113.
How to prove an IFF (If and only If) statement!
January 15th 2013, 06:53 AM #1
Dec 2012
How would you go about proving these:
For all integers n, n is even if and only if n -1 is odd
if d|(a+b) and d|a, where a, b and d are integers and d is not equal to 0, then d|b
January 15th 2013, 07:49 AM #2
Super Member
Jul 2012
Re: How to prove an IFF (If and only If) statement!
For the first one, prove the two directions separately.
(⇒) Suppose n is even. Then n = 2m for some integer m, so n - 1 = 2m - 1 = 2(m - 1) + 1, which is odd.
(⇐) Conversely, suppose n - 1 is odd. Then n - 1 = 2m + 1 for some integer m, so n = 2m + 2 = 2(m + 1), which is even.
Hence n is even if and only if n - 1 is odd.
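Both claims in the original post can also be sanity-checked exhaustively over a small range before writing the proof; this is a quick habit, not a substitute for the proof itself:

```python
# n is even  <=>  n - 1 is odd
for n in range(-100, 101):
    assert (n % 2 == 0) == ((n - 1) % 2 == 1)

# if d | (a + b) and d | a (with d != 0), then d | b
for d in range(1, 13):
    for a in range(-30, 31):
        for b in range(-30, 31):
            if (a + b) % d == 0 and a % d == 0:
                assert b % d == 0
```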