| content (stringlengths 86–994k) | meta (stringlengths 288–619) |
|---|---|
MathGroup Archive: October 2005 [00448]
Re: VECTOR coordinate transformation?
• To: mathgroup at smc.vnet.net
• Subject: [mg61320] Re: [mg61242] VECTOR coordinate transformation?
• From: "David Park" <djmp at earthlink.net>
• Date: Fri, 14 Oct 2005 22:23:17 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com
Not exactly, but it is easy to generate the conversion, at least from
spherical to Cartesian.
Define the Cartesian components of a point in spherical coordinates. Here r
is the radius from the origin, phi is the colatitude (angle from the z axis)
and theta is the longitude.
p[r_, \[Phi]_, \[Theta]_] = r*{Sin[\[Phi]]*Cos[\[Theta]],
Sin[\[Phi]]*Sin[\[Theta]], Cos[\[Phi]]}
To calculate the vector transformation from spherical to Cartesian
coordinates we just calculate the Jacobian matrix of the point function with
respect to the spherical coordinates. We can do that with Outer.
sphericalToCartesian[r_, \[Phi]_, \[Theta]_] =
Outer[D[#1, #2] & , p[r, \[Phi], \[Theta]], {r, \[Phi], \[Theta]}]
{{Cos[\[Theta]]*Sin[\[Phi]], r*Cos[\[Theta]]*Cos[\[Phi]],
(-r)*Sin[\[Theta]]*Sin[\[Phi]]}, {Sin[\[Theta]]*Sin[\[Phi]],
r*Cos[\[Phi]]*Sin[\[Theta]], r*Cos[\[Theta]]*Sin[\[Phi]]},
{Cos[\[Phi]], (-r)*Sin[\[Phi]], 0}}
Let's convert the vector at spherical coordinates {1,Pi/2,0} with spherical
components {1,0,0}. We just multiply the matrix times the vector.
sphericalToCartesian[1, Pi/2, 0] . {1, 0, 0}
{1, 0, 0}
If you want to go the other way you have to invert the matrix. And if you
want the transformation matrices to be expressed in Cartesian coordinates it
takes a little work.
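Not part of the original thread, but here is a rough equivalent sketch of the same computation in Python/SymPy (names are my own), including the inversion just mentioned, in case an independent check is useful:

```python
import sympy as sp

r, phi, theta = sp.symbols('r phi theta', positive=True)

# Cartesian components of a point given in spherical coordinates (r, phi, theta),
# with phi the colatitude and theta the longitude, as in the post above.
p = r * sp.Matrix([sp.sin(phi)*sp.cos(theta),
                   sp.sin(phi)*sp.sin(theta),
                   sp.cos(phi)])

# Jacobian of the point function with respect to (r, phi, theta):
# this is the spherical-to-Cartesian vector transformation matrix.
sphericalToCartesian = p.jacobian(sp.Matrix([r, phi, theta]))

# The vector with spherical components (1, 0, 0) at spherical position (1, Pi/2, 0):
print(sphericalToCartesian.subs({r: 1, phi: sp.pi/2, theta: 0}) * sp.Matrix([1, 0, 0]))
# -> Matrix([[1], [0], [0]])

# Going the other way amounts to inverting the matrix.
cartesianToSpherical = sp.simplify(sphericalToCartesian.inv())
```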
This can probably be done more elegantly with a tensor calculus program.
David Park
djmp at earthlink.net
From: na [mailto:na at na.na.com]
To: mathgroup at smc.vnet.net
Is there any built-in command in Mathematica that converts a vector's
coordinate system say from Cartesian to Spherical, or Spherical to
Cylindrical? By the way, I know about CoordinatesToCartesian[] and
CoordinatesFromCartesian[] already, and those are for "points" not
"vectors". I'd like to know if there an equivalent command for VECTORS.
Thank you for your help!
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2005/Oct/msg00448.html","timestamp":"2014-04-18T23:44:50Z","content_type":null,"content_length":"36062","record_id":"<urn:uuid:4bdf4887-64d2-4296-b7b6-feb9ef42c41d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Does the vanishing of the Poisson bracket on $S(\mathfrak{g})^{\mathfrak{g}}$ inspire the discovery of Duflo's isomorphism theorem?
For any finite dimensional Lie algebra $\mathfrak{g}$, we know that the universal enveloping algebra $U(\mathfrak{g})$ is a deformation of the symmetric algebra $S(\mathfrak{g})$. In fact let's
define $$ U_t(\mathfrak{g}):=\text{T}(\mathfrak{g})/(X\otimes Y-Y\otimes X-t[X,Y]). $$ Then $S(\mathfrak{g})=U_0(\mathfrak{g})$ and $U(\mathfrak{g})=U_1(\mathfrak{g})$. Moreover we have the
symmetrization map $$ I_{PBW}:S(\mathfrak{g})\longrightarrow U_t(\mathfrak{g}) $$ which pulls back the product on $U_t(\mathfrak{g})$ to a product on $S(\mathfrak{g})$. We call it the star product
and denote it by $*_t$.
Obviously $*_t$ is different from the original product on $S(\mathfrak{g})$. In fact we can prove that the first order deformation of the product is governed by the $\textit{Poisson bracket}$ on $S(\mathfrak{g})$. More precisely the Poisson bracket is defined to be $\{a,b\} := c^k_{ij} X_k \cdot \partial^i a \cdot \partial^j b$ (where the $c^k_{ij}$ are the structure constants of $\mathfrak{g}$), and we can prove that $$ a *_t b = ab+\frac{t}{2}\{a,b\}+O(t^2). $$
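As a quick sanity check of the first-order term in the lowest nontrivial degree (my own computation, not part of the question): for linear elements $X,Y\in\mathfrak{g}$ the symmetrization map sends $XY\in S(\mathfrak{g})$ to $\frac{1}{2}(XY+YX)\in U_t(\mathfrak{g})$, and in $U_t(\mathfrak{g})$ we have $XY-YX=t[X,Y]$, so $$ X\cdot Y=\tfrac{1}{2}(XY+YX)+\tfrac{t}{2}[X,Y]=I_{PBW}\Big(XY+\tfrac{t}{2}[X,Y]\Big), $$ which pulls back to $X*_tY=XY+\frac{t}{2}[X,Y]=XY+\frac{t}{2}\{X,Y\}$, in agreement with the displayed formula.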
Furthermore, we have the following result
1. The Poisson bracket vanishes on the invariant subalgebra $S(\mathfrak{g})^{\mathfrak{g}}$. This is almost the definition.
2. The symmetrization map $I_{PBW}$ maps $S(\mathfrak{g})^{\mathfrak{g}}$ isomorphically (as vector spaces, not as algebras) onto the center $Z(U(\mathfrak{g}))$.
3. (Duflo's Isomorphism Theorem) We can precompose $I_{PBW}$ with a map $D: S(\mathfrak{g})\rightarrow S(\mathfrak{g})$ such that the composition, restricted to $S(\mathfrak{g})^{\mathfrak{g}}$, is an $\textit{algebraic isomorphism}:~S(\mathfrak{g})^{\mathfrak{g}}\rightarrow Z(U(\mathfrak{g}))$.
Duflo's Isomorphism Theorem is of course highly non-trivial; we can refer to Calaque and Rossi's book http://math.univ-lyon1.fr/~calaque/LectureNotes/LectETH.pdf, as well as many other resources, for further discussions.
I often wonder (maybe historically, maybe not) how people could expect that there is an algebraic isomorphism between $S(\mathfrak{g})^{\mathfrak{g}}$ and $Z(U(\mathfrak{g}))$. The thing we
can notice is that the first order deformation, which is the Poisson bracket, vanishes. We know that it is a necessary condition (at least it should vanish in the second Hochschild cohomology) to
find an algebraic isomorphism.
My question is: Does the vanishing of the Poisson bracket play an important role in finding and proving Duflo's isomorphism theorem? Or is it literally just a first step?
1 Answer
My question is: Does the vanishing of the Poisson bracket play an important role in finding and proving Duflo's isomorphism theorem? Or is it literally just a first step?
Let $A_0$ be a Poisson algebra and $A$ a deformation quantization of $A_0$ (assume we are in a context when it exists).
Assume you have a quantization map $Q:A_0\to A$, by which I mean a section of the classical limit map $A\to A/(\hbar)=A_0$.
Then for any two elements $a,b\in A_0$, $[Q(a),Q(b)]=\hbar\{a,b\}+O(\hbar^2)$.
Hence if you want to have $Q(ab)=Q(a)Q(b)$ you must at least assume that $\{a,b\}=0$.
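Spelled out in one line (my own expansion, using that $A_0$ is commutative): if $Q$ were multiplicative on some subalgebra of $A_0$, then for $a,b$ in that subalgebra $$0=Q(ab)-Q(ba)=[Q(a),Q(b)]=\hbar\{a,b\}+O(\hbar^2),$$ and comparing coefficients of $\hbar$ forces $\{a,b\}=0$.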
My (non-)answer to your question is then:
the vanishing of the Poisson bracket is a necessary requirement if you want a statement of Duflo-type. It is just a first step.
The actual history comes from the Harish-Chandra isomorphism. Duflo noticed that the original formula could be written for any Lie algebra, without any use of roots and
similar stuff specific to the semi-simple case.
|
{"url":"http://mathoverflow.net/questions/127589/does-the-vanishing-of-the-poisson-bracket-on-s-mathfrakg-mathfrakg-in/130091","timestamp":"2014-04-18T10:55:07Z","content_type":null,"content_length":"55084","record_id":"<urn:uuid:4dfed5e2-1b26-48e1-9286-6b5acf9debc2>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Area of a Triangle
October 15th 2010, 05:27 PM #1
Mar 2010
Area of a Triangle
The formula for the area of a triangle can be derived before the formula for the area of a parallelogram is proved.
(a) Using only the formula for the area of a rectangle, find the area of a right triangle with legs a and b.
(b) Use your result from part (a) to derive the formula for the area of a triangle by using a sum or difference of the areas of right triangles.
I've got (a). I am having trouble with (b). I have no idea...
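A sketch of one standard route for (b), not from the thread (it assumes the triangle has base $b$ and height $h$ from the opposite vertex): if the foot of the altitude falls inside the base, it splits the base into pieces $b_1+b_2=b$, and part (a) applied to the two right triangles gives $$\text{Area}=\tfrac{1}{2}b_1h+\tfrac{1}{2}b_2h=\tfrac{1}{2}bh.$$ If the foot falls outside the base (obtuse case), the triangle is the difference of two right triangles with legs $h,\,b+c$ and $h,\,c$, so $$\text{Area}=\tfrac{1}{2}(b+c)h-\tfrac{1}{2}ch=\tfrac{1}{2}bh.$$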
|
{"url":"http://mathhelpforum.com/geometry/159768-area-triangle.html","timestamp":"2014-04-20T19:50:48Z","content_type":null,"content_length":"29025","record_id":"<urn:uuid:9eda75f4-42ec-4d7d-a783-d0f64f316d28>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Problems 4.10
4.10 "Output" of a M/M/1 system In this problem you are asked to prove the important theorem of Section 4.10 for the case of a M/M/1 queueing system. Let a queueing system with a single server have
Poisson arrivals at a rate
Hint: What is the pdf for the time between service completions when the server is continually busy?
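A hedged note on the hint (the service rate is not named in this excerpt; call it $\mu$, so that service times are exponential with parameter $\mu$): while the server is continually busy, successive service completions are separated by independent service times, so the pdf in question is $$f(t)=\mu e^{-\mu t},\qquad t\ge 0.$$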
|
{"url":"http://web.mit.edu/urban_or_book/www/book/chapter4/problems4/4.10.html","timestamp":"2014-04-21T15:13:57Z","content_type":null,"content_length":"2295","record_id":"<urn:uuid:cb638b51-4945-4f01-96bc-cce60046c201>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
|
diff eqn with cos and i?
March 11th 2009, 11:02 AM
diff eqn with cos and i?
Here it is...
d2y/dx2 + 4y = cos(x), y(0)=1, dy/dx(0)=1. If I am correct, the roots (lambda) for this one are 2i and -2i. I'm not sure if I'm doing this right... in any case, what to do with this cos(x)?
How do I solve this? And what do you get? I'm kinda stuck...
March 11th 2009, 01:55 PM
Yes on 2i, -2i. Your complementary solution is
$y_c = c_1 \sin 2x + c_2 \cos 2x$. To obtain a particular solution, try a form $y_p = A \sin x + B \cos x$, sub into the ODE and compare terms. This will give you an A and B. Then use your IC's.
March 11th 2009, 02:25 PM
if I understand you correctly, trying this...
I get the final solution y = (1/3)cos(x) + (1/2)sin(2x) + (2/3)cos(2x). Is that it?
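Not part of the thread, but one quick way to check that answer is to hand the initial value problem to a CAS; a sketch in Python/SymPy (assuming it is available):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y'' + 4y = cos(x), with y(0) = 1 and y'(0) = 1
ode = sp.Eq(y(x).diff(x, 2) + 4*y(x), sp.cos(x))
sol = sp.dsolve(ode, y(x), ics={y(0): 1, y(x).diff(x).subs(x, 0): 1})
print(sp.simplify(sol.rhs))
# Expected: cos(x)/3 + sin(2*x)/2 + 2*cos(2*x)/3, matching the answer above.
```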
March 11th 2009, 03:13 PM
|
{"url":"http://mathhelpforum.com/differential-equations/78169-diff-eqn-cos-i-print.html","timestamp":"2014-04-20T06:38:26Z","content_type":null,"content_length":"5984","record_id":"<urn:uuid:ba7c6899-393f-4385-80f3-40d6881b35e7>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
|
EDP20 — squares and fly traps
I think this will be a bit long for a comment, so I’ll make it a post instead. I want to try to say as clearly as I can (which is not 100% clearly) what we know about a certain way of constructing a
decomposition of the identity on $\mathbb{Q}.$ Recall from the last post or two that what we want to do is this. Define a square in $\mathbb{N}\times\mathbb{N}$ to be a set of the form $[r,s]^2,$
where by $[r,s]$ I mean the set of all positive integers $n$ such that $r\leq n\leq s.$ Let us identify sets with their characteristic functions. We are trying to find, for any constant $C,$ a
collection of squares $S_1,\dots,S_k$ and some coefficients $\lambda_1,\dots,\lambda_k$ with the following properties.
• $C\sum_{i=1}^k|\lambda_i|\leq\sum_{i=1}^k\lambda_it_i,$ where $S_i=[r_i,s_i]^2$ and $t_i=(s_i-r_i+1)$ is the number of points in the interval that defines $S_i,$ or, more relevantly, the number
of points in the intersection of $S_i$ with the main diagonal of $\mathbb{N}\times\mathbb{N}.$
• Let $f(x,y)=\sum_i\lambda_iS_i(x,y).$ Then for any pair of coprime positive integers $a,b$ we have $\sum_{n=1}^\infty f(na,nb)=0.$
The second condition tells us that the off-diagonal elements of the matrix you get when you convert the decomposition into a matrix indexed by $\mathbb{Q}_+$ are all zero, and the first condition
tells us that we have an efficient decomposition in the sense that we care about. In my previous post I showed why obtaining a collection of squares for a constant $C$ implies that the discrepancy of
an arbitrary $\pm 1$ sequence is at least $C^{1/2}.$ In this post I want to discuss some ideas for constructing such a system of squares and coefficients. I’ll look partly at ideas that don’t work,
so that we can get a sense of what constraints are operating, and partly at ideas that might have a chance of working. I do not guarantee that the latter class of ideas will withstand even five
minutes of serious thought: I have already found many approaches promising, only to dismiss them for almost trivial reasons. [Added later: the attempt to write up even the half promising ideas seems
to have killed them off. So in the end this post consists entirely of half-baked ideas that I'm pretty sure don't work. I hope this will lead either to some new and better ideas or to a convincing
argument that the approach I am trying to use to create a decomposition cannot work.]
Using squares and fly traps.
A general idea that I have not managed to rule out is to build a decomposition out of “squares and fly traps”. I’ve already said what a square is. If you take the two squares $[r,s]^2$ and $[r,s-1]^
2,$ then their difference is the set of all points $(s,t)$ or $(t,s)$ such that $r\leq t\leq s.$ It has the shape of two adjacent edges of a square. It is this kind of shape that I am calling a fly trap.
The idea then is to take a collection of fly traps with negative coefficients and a collection of squares with positive coefficients. In order for the second condition to hold, we need the following
to hold: as you go along any line from the origin other than the main diagonal $x=y,$ if you sum up the coefficients associated with the squares you visit, then the result should be cancelled out by
the sum of the coefficients associated with the fly traps. In particular, if all squares have coefficients equal to 1 and all fly traps have coefficients equal to -1, then the number of times the
line hits a square should be the same as the number of times it hits a fly trap. (I think of the squares as sending out “flies” that are then caught by the fly traps, which have some nasty sticky
substance at their points.)
It’s not really necessary for the fly traps all to point in the same direction, and there are other small adjustments one can make, but the basic square/fly-trap idea seems to be what the computer is
telling us works best in very small cases. (It is far from clear that this is a good guide to what happens in much larger cases, but it seems sensible at least to consider the possibility.)
For a nice illustration of a square/fly-trap construction, see this picture that Alec produced. Alec also has a general construction that gives us $C$ arbitrarily close to 2. Rather than repeat it
here, let me give a link to the relevant comment of Alec’s (if you think of the bar as squaring the interval, it will be consistent with what I am saying in this post), and a link to a similar
comment of Christian’s.
This example (or rather family of examples) uses a single fly trap of width $k$ and squares of width 2 (unlike the example in Alec’s picture, which I therefore find more interesting, despite the fact
that it gives a worse constant). It is instructive to see why this gives us a bound of $C=2.$ If the fly trap has width $k,$ then it has $2(k-1)$ off-diagonal points. So we need $2(k-1)$ flies. Each
square of width 2 contributes two flies, so we need $k-1$ such squares. This means that $\sum_i|\lambda_i|=k+1$ (since the fly trap needs two squares to make it) and that $\sum_i\lambda_it_i=2(k-1)-1
=2k-3.$ The ratio of these two numbers tends to 2 as $k$ tends to infinity.
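To make this concrete, here is a small brute-force check (a sketch in Python, my own code rather than anything from the post) of one way to realize this family for $k=5$ and $N=\mathrm{lcm}(1,\dots,5)=60$: the $k-1$ width-2 squares sit at $N/j$ for $j=1,\dots,k-1$, and the width-$k$ fly trap at $N$ is written as a difference of two squares.

```python
from math import gcd

def build_family(k, N):
    # (k-1) width-2 squares [N/j - 1, N/j]^2 with coefficient +1, plus the
    # fly trap at N written as [N-k+1, N]^2 - [N-k+1, N-1]^2.
    # Assumes every j <= k divides N (e.g. N = lcm(1, ..., k)).
    terms = [(N // j - 1, N // j, 1) for j in range(1, k)]
    terms += [(N - k + 1, N, -1), (N - k + 1, N - 1, 1)]
    return terms

def off_diagonal_rays_cancel(terms, limit):
    # Check that sum_n f(na, nb) = 0 for every coprime pair a < b up to limit.
    def f(x, y):
        return sum(c for r, s, c in terms if r <= x <= s and r <= y <= s)
    return all(
        sum(f(n * a, n * b) for n in range(1, limit // b + 1)) == 0
        for a in range(1, limit + 1)
        for b in range(a + 1, limit + 1)
        if gcd(a, b) == 1
    )

k, N = 5, 60
terms = build_family(k, N)
trace = sum(c * (s - r + 1) for r, s, c in terms)   # expect 2k - 3 = 7
cost = sum(abs(c) for r, s, c in terms)             # expect k + 1 = 6
print(off_diagonal_rays_cancel(terms, N), trace, cost)
```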
It is not hard to see that if we could use squares of width 3 instead, then we would be able to get a constant arbitrarily close to 3. However, significant difficulties arise almost immediately.
However again, this could be good news, because if we can find some way of getting $C$ beyond 2, we may by that stage have found a pattern that can be generalized. And I think there is some hope of
pushing $C$ beyond 2, as I shall now try to explain.
One fly trap is not enough.
First, let us see why there is absolutely no hope of achieving this with just one fly trap. The argument is simple. Let $L_{N,k}$ be the flytrap $[N-k,N]^2-[N-k,N-1]^2.$ If all the off-diagonal
points in the square $[r,r+2]^2$ are caught by this fly trap, what can we say about $r$? One necessary condition is that both $r+1$ and $r+2$ are factors of $N.$ But this implies that $N$ is at least
$r^2,$ which in turn implies that $k$ is at least $\sqrt{N}.$ Since we need almost all points in the fly trap to catch flies, we need at least $\sqrt{N}$ distinct flies, which is more than $r.$ So,
roughly speaking, we need a constant fraction of the numbers $s$ of order of magnitude $\sqrt{N}$ to be such that both $s$ and $s+1$ are factors of $N.$ This just isn’t going to happen.
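Spelling that step out (my own paraphrase of the argument): since $r+1$ and $r+2$ are coprime and both divide $N,$ we get $N\geq(r+1)(r+2)>r^2,$ so $r<\sqrt{N}.$ The fly from $[r,r+2]^2$ furthest from the diagonal lies on the ray through $(r+2,r),$ which meets height $N$ at the point $(N,\,N-2N/(r+2)),$ so for it to land inside $L_{N,k}$ we need $$\frac{2N}{r+2}\leq k,$$ and since $r+2<2\sqrt{N}$ (for $N\geq 4$) this forces $k>\sqrt{N}.$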
Note that if we make $r$ smaller, to give numbers near $r$ a better chance of dividing $N,$ then we are forced to increase $k$ (or else the flies miss the fly trap). And that makes things even worse
— we now have fewer possible flies and a bigger fly trap.
I’m sure it would be easy to state and prove something rigorous here, but for now I’d prefer to leave those thoughts as a convincing enough demonstration that a single fly trap will not do the job.
But if that’s the case, what can we do? Well, the obvious next thing to try is several fly traps.
Pure randomness as a way of catching flies.
How can we make use of multiple fly traps? A first thought is that if we take the square $[r,r+2]^2$ and send out some flies, we could create two traps, one to catch flies with maximum coordinate
$r+1$ and the other to catch flies with maximum coordinate $r+2.$ But the trouble with this is that it seems to be far too tailored to one particular square: it is hard to believe that such a trap
could catch the flies from several different squares. We would be asking for two integers $N_1$ and $N_2$ such that there are many integers $s$ such that one of $s$ and $s+1$ divides $N_1$ and the
other divides $N_2.$
Actually, on writing that I realize that I have given no thought to it at all, so perhaps it is worth trying to show that it cannot be done (just in case, contrary to expectations, it can be done).
Since $s$ and $s+1$ are always coprime, there seems no point in giving $N_1$ and $N_2$ any common factors, so let’s take $N_1$ and $N_2$ to be highly smooth numbers that are coprime to each other.
And let’s try to find 3-by-3 squares that send out flies that are caught by one of the fly traps $L_{N_1,k}$ or $L_{N_2,k}.$ (I’m assuming that $N_1$ and $N_2$ are roughly the same size. If it is
convenient to take different $k$s then I don’t mind doing it, but I don’t expect it to help.)
If $k$ is fixed and $N_1$ and $N_2$ are large, then … I think we’re completely dead. We have $2(k-1)$ fly trap points below the diagonal to get rid of and three flies below the diagonal per 3-by-3
square, so we need about $2k/3$ squares. If $[s,s+2]^2$ is one of those squares, then for the fly at $(s+2,s)$ not to miss the fly traps, we need $s$ to be at least $2N/k,$ where $N$ is the rough
size of $N_1$ and $N_2.$ So we need $2k/3$ pairs $\{s+1,s+2\}$ such that we can find an integer $j\leq k/2$ with $j(s+1)=N_1$ and $s+2|N_2.$ But that more or less fixes the ratio of $N_1$ to $N_2,$
and anyway $2k/3$ is bigger than $k/2.$
From this I am happy to conclude that we need to change our attitude and go for many fly traps. The idea would be that the reason a fly hits a fly trap is not that the fly starts out at a very
carefully chosen point (roughly speaking, a “factor” of the fly trap) but that there are enough fly traps for it to be almost inevitable that the fly will hit at least one of them.
What I mean by “pure randomness” is that we use the following mechanism for ensuring that the fly at $(s,t)$ hits a trap. If $s\geq t,$ then we simply make sure that there are at least $s$ traps,
fairly randomly placed, or rather $As$ traps for some large constant $A.$ Then the probability that the fly misses all traps is small: roughly speaking, the expected number of traps it hits is $A,$
and if we can get enough independence then we can hope that all flies will hit roughly $A$ of the traps. (This model turns out to be much too crude, since the probability of a fly hitting a trap
depends very much on divisibility properties of the coordinates of the fly and the trap. But let us work with it for now.)
Some back-of-envelope calculations.
Let us try to check the feasibility of this idea. An initial observation is that most fly traps $L_{N,k}$ are useless for our purposes. If you choose a random large integer $N,$ then the fraction of
integers coprime to $N$ will be around $6/\pi^2.$ (If $m$ is the other integer, then the probability that $p$ divides both $m$ and $N$ is $1/p^2,$ so the probability that they are coprime is roughly
$\prod_p(1-1/p^2)=1/\zeta(2).$) But if $j$ is coprime to $N,$ then the point $(N,N-j)$ cannot catch any flies. If we have a large set of such points, then we are in trouble.
To deal with this, it seems that the only option we have is to insist that our fly traps $L_{N,k}$ occur at highly composite values of $N,$ so that almost all other integers have quite high common
factors with $N$ and therefore give rise to points that can catch many flies. It will be convenient to call $N$ and $k$ the height and width of the fly trap $L_{N,k}.$ In that language, we want fly
traps with highly composite heights. (Note that the height refers to the altitude of where the fly trap is placed, whereas the width measures the size of the trap itself. Indeed, “altitude” is
probably a better word than “height” here, but I prefer an Anglo-Saxon word if there is one.)
Now let us suppose that we have $T$ 3-by-3 squares, and a reception committee of fly traps with highly composite heights between $N_0$ and $2N_0.$ If the widths of the fly traps are all $k$ (or
perhaps all between $k$ and $2k$ or something like that), then we’ll need $3T/(k-1)$ fly traps if we want a one-to-one correspondence between flies and trap points, and a bit more than that if we
want each fly to hit $A$ traps. Let us take $3AT/(k-1)$ fly traps.
Now consider a fly at $(s,s+1),$ say. If its chances of hitting a given trap are $1/(s+1),$ then we’ll also need there to be about $As$ fly traps. That is, we’ll want $s$ to be about $3T/k.$ And for
that fly not to miss the traps altogether (because its angle from the main diagonal is too large), we’ll need $kN_0/3T$ to be at most $k.$ So we’ll need $T$ to be bigger than $N_0/3.$ That looks
pretty problematic, because we now need a very large number of fly traps, and it will not be possible to put them all at highly smooth heights between $N_0$ and $2N_0$: there just aren’t that many
highly smooth numbers.
Just to make that more conceptual, the problem we have is that there are two conflicting pressures on the flies. If they are not high enough, then the angle they make with the main diagonal is forced
to be large and they therefore miss all the fly traps. But if they are too high, then they are very unlikely to hit any given fly trap, which forces the fly traps to be extremely numerous, which
forces there to be several fly traps at non-smooth heights, and therefore several points in the traps that cannot catch flies.
Smooth traps and smooth squares.
Is there anything we can do to get round this problem? I think there may be. There was one questionable assumption in the discussion above, which was that the probability of a fly $(s,s+1)$ hitting
any given fly trap was about $1/s.$ The condition we need for this fly to hit the trap $L_{N,k}$ is that $s+1$ should divide $N$ and that $s$ should be at least $N/k.$ Now if we choose $N$ randomly,
then of course the probability that $s+1|N$ is $1/(s+1).$ But if we choose it as a random number with lots of small prime factors, and if $s+1$ also has quite a lot of small prime factors, then we
hugely increase the chances that $s+1|N.$ For instance, if $N$ is a random multiple of 6, and $s+1$ also happens to be divisible by 6, then the chances that $s+1$ divides $N$ are now $6/(s+1).$
Let us now go back to the attempt above. Again let us suppose that we have $T$ 3-by-3 squares. Again, if we are taking fly traps $L_{N,k}$ with $N$ between $N_0$ and $2N_0,$ and if we want each fly
to hit $A$ traps, then we will need about $3TA/k$ or so traps. But now let us suppose that all the traps have very smooth heights. More precisely, let us suppose that all the heights $N$ are such
that all but a small proportion of integers have a fairly high common factor with $N.$ Simplifying absurdly, let us suppose that this gains us an extra factor of $D$ when we think about the
probability that a fly $(s,s+1)$ or $(s,s+2)$ is caught by a given fly trap: now the probability is more like $D/s$ rather than $1/s.$ What does that do for us?
It means that now, if we want each fly to hit $A$ traps, we’ll need not $As$ traps (where $s$ is the height of the fly) but more like $As/D$ traps. We already know we need about $3TA/k$ traps, so
equating the two we find that $s$ needs to be about $3TD/k.$ And if we want a fly at that kind of height not to be too far from the diagonal to hit the traps, we need $Nk/3TD$ to be at most $k,$
which tells us that $T$ should have approximate size $N_0/D,$ which is rather better than the earlier estimate of $N_0$ (up to a constant).
But at this point we have an important question: are there enough highly smooth numbers $N$ between $N_0$ and $2N_0$?
To answer that, we need to think about what the typical probability gain is for a given number. Suppose, for instance, that $N$ is divisible by $\prod_{i\in X}p_i,$ where $X$ is some set of small
integers. For what $D$ can we say that a random integer $m$ has a good chance of having a highest common factor with $N$ of at least $D$?
The expected number of $i\in X$ such that $p_i|m$ is $\sum_{i\in X}p_i^{-1},$ and we can expect this to be reasonably concentrated if the expectation is not too small. Writing $M$ for $\prod_{i\in X}
p_i$ and assuming that $X$ is a fairly dense set of primes (something like a random set of $r$ of the first $2r$ primes, say) then the expectation will be around $\log\log r,$ so the value we get for
$D,$ assuming (not quite correctly) that the primes $p_i$ that divide $m$ are fairly evenly distributed, ought to be around $((2r)!)^{\log\log r/2r},$ or around $r^{\log\log r}.$ (We could get there
by saying that the typical size of a $p_i$ is fairly close to $r$ and we are choosing $\log\log r$ of these primes.)
This is fairly worrying, because in order to gain a factor $D,$ we have to make the set of $N$ we are allowed to choose very much sparser. It seems as though we lose a lot more than we gain by doing this.
The “smooth squares” in the title of this section refer to the possibility that we might try to choose $s$ so that both $s+1$ and $s+2$ have quite a lot of small prime factors. But such numbers are
hard to come by, so again it seems that any gain one might obtain is more than compensated for by the loss in their frequency.
Special configurations of squares and fly traps.
Can we achieve what we want by making very careful selections of our $N$s? It’s clear that something that helps us is to have pairs of heights $N_1,N_2$ such that $N_2/N_1$ is, when written in its
lowest terms, of the form $(a+1)/a$ or $(a+2)/a.$
It’s quite easy to find $N_1,\dots,N_t$ such that all of $N_i/N_1$ are of this form: just make $N_1$ extremely smooth and make all the differences small. Then the differences will divide $N_1$ and we
are done. But what if we try to ensure that $N_2/N_1$ and $N_3/N_2$ are of the required form? Then we need large positive integers $a$ and $b$ such that $(1+1/a)(1+1/b)$ is of the form $1+1/c$ for a
positive integer $c.$ That is, we need the reciprocal of $1/a+1/b+1/ab$ to be an integer. Rearranging, we want $ab/(a+b+1)$ to be an integer. It’s moderately reassuring to observe that this can be
done: for instance, if $a=3$ and $b=8$ we get $24/12=2.$ But how about if $a$ and $b$ are very large? Or perhaps they don’t have to be very large, just as long as we can find a set $a_1,\dots,a_r$
such that many of the ratios $a_ia_j/(a_i+a_j+1)$ are integers.
Let’s think about this slightly further. Suppose we have such a collection of integers. Then choose $N$ with enough factors for all the numbers I mention to be integers, and for each $i$ let $N_i=N
(1+1/a_i).$ What we want is for $N_i-N_j$ to divide $N_i.$ That is, we want $1/a_i-1/a_j$ to divide $1+1/a_i.$ So we need $a_j(a_i+1)/(a_j-a_i)$ to be an integer. (This doesn’t look very symmetric,
but it is true if and only if $a_i(a_j+1)/(a_j-a_i)$ is an integer.)
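(To check the symmetry claim, note that the two quantities differ by exactly 1: $$\frac{a_j(a_i+1)}{a_j-a_i}-\frac{a_i(a_j+1)}{a_j-a_i}=\frac{a_j-a_i}{a_j-a_i}=1,$$ so one is an integer if and only if the other is.)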
Suddenly this looks a bit easier. It looks as though we’ll be OK if we make the $a_i$ all fairly smooth and make their differences small. Hmm … except that that doesn’t look easy after all, since if
$a_j$ is smooth and $a_j-a_i$ is small, then $a_i$ will not be all that smooth.
I won’t think about this for the time being, but I think it may be possible to construct, in a not quite trivial way, an arbitrarily long sequence of integers $a_1,\dots,a_r$ such that $a_j(a_i+1)/
(a_j-a_i)$ is always an integer.
Let’s suppose we managed that. Would it help? What we could do is this. We let $N$ be some huge factorial so that it’s divisible by whatever we need it to be divisible by. We then define the numbers
$N_1,\dots,N_r$ as above: that is, $N_i=N(1+1/a_i).$ Since whenever $i<j$ we have $N_j/N_i$ of the form $1+1/u$ for some positive integer $u,$ we can find an integer $s$ such that $us=N_i$ and $(u+1)s=N_j.$
Therefore, potentially at least, we could consider using the square $[u-1,u+1]^2$ to knock out some points in the fly traps at heights $N_i$ and $N_j.$
However, for this to have a chance of working, we want $u$ to be big, since otherwise our flies will be out wide again, which will force the traps to be big and we’ll get into all sorts of problems.
But it’s problematic either way. If we want traps of width at most some fixed $k,$ then we need $u$ to be at least $N/k.$ For that we need the integers $a_1,\dots,a_k$ to be of size at least $\sqrt{N
/k}$ (since $u=a_i(a_j+1)/(a_i-a_j)$), and more than that unless they are close together.
But we also need the $a_i$ to divide $N,$ so we can’t just choose the $a_i$ and then make $N$ huge. Rather, what we seem to want is a number $N$ that has so many factors of size somewhat bigger than
$\sqrt{N/k}$ that we can find interesting clusters of them such that many of the numbers $a_i(a_j+1)/(a_i-a_j)$ are integers.
I should think a bit more about how many of these numbers actually need to be integers. Perhaps we don’t need them all to be integers — if not, then we would have a much greater chance of success.
If the fly traps have width $k,$ then we have $rk$ points below the diagonal that need to be hit. Each good pair $i,j$ leads to three flies that can do the hitting. So it looks as though $3\binom r2$
needs to be bigger than $rk.$
I think I must have made a mistake here, since there are basically only two chances to hit the point $(N_i,N_i-t)$: we must do so either at $(N_i/t,N_i/t - 1)$ or at $(2N_i/t,2N_i/t-2).$ So we need
an extraordinary miracle to occur: it must be possible to partition (almost all of) the numbers $N_i/t$ and $2N_i/t$ into pairs of consecutive integers. This does not feel possible to me.
I’m going to stop at this point. I’ll end with the obvious question: is it possible to create an example out of squares and fly traps? Part of me thinks that the square/fly-trap idea is almost
forced, since we need the bigger points to avoid coprime pairs. I think also that I have not devoted enough space to discussing bigger fly traps — ones where the width is proportional to the height,
say. This requires bigger squares, but it may be possible to do something. In fact, I'll think about that (not for the first time) and if anything interesting comes out of it then I'll extend this post.
Mark Bennet Says:
September 10, 2010 at 8:33 pm | Reply
Just to observe that finding smooth numbers close enough together in bunches is at the heart of the problem. The discrepancy 2 sequence looked as though it might possibly go on for ever, and was
certainly longer than most initial expectations. It failed because smooth numbers which are close together (in a sense to be made precise) belong to enough different HAPs to cause a blockage – which
seems very close to the ‘let’s design a flytrap’ problem.
Since we are now dealing with completely multiplicative sequences on the rationals, it is possible that there is a particularly effective flytrap associated in some way with the maximal completely
multiplicative sequence with discrepancy 2. [maximal sequences which are not completely multiplicative might also play a part in this].
gowers Says:
September 10, 2010 at 10:13 pm | Reply
I have an observation that is similar in spirit to yours. (In case Mark’s comment gets replied to, I’m referring to the first comment on this post.) In a sense the thing that is causing me most
difficulty is that there are lots of points $(x,y)$ such that $x$ and $y$ are coprime. This too seems to be closely related to the “true” reason that the problem is hard: that there are many pairs of
numbers such that choosing the value at one of them has almost no bearing on the value at the other.
A small comment is that we are not just restricting attention to completely multiplicative functions, though I suspect that we more or less can (especially if we allow them to be complex).
gowers Says:
September 10, 2010 at 10:36 pm | Reply
I’ve just had a thought that may be quite helpful on the theoretical side. A problem I’ve been having up to now is that small squares do not contain many flies and therefore do not hit many points in
the fly traps. But there is another way that we could consider hitting them, which is with thickened flytraps.
Let me try to explain what I mean. Suppose that instead of a square of width 3, which gives us a contribution of 3 to the trace and 1 to the sum of coefficients, we were to take a function of the
form $[r,s]^2-[r,s-6]^2.$ This would contribute 6 to the trace and 2 to the sum of (absolute values of) coefficients, so would be as good in that sense, but it would also send out far more flies.
That would allow us to choose far fewer functions of this kind, so we might be able to place them in very special places where all of $s-5,\dots,s$ are highly composite.
I don’t know whether this deals with my difficulties, but it gives us an extra parameter to play with, which can’t be a bad thing.
gowers Says:
September 10, 2010 at 11:13 pm | Reply
One other small thought, which might conceivably lead to something that could be searched for experimentally. Perhaps we could pick a whole lot of multiples of $k!$ for some small $k$, add together
the corresponding Alec examples, and then hope that some of the 2-by-2 squares overlap or otherwise lead to potential gains in efficiency. For example, if we are lucky enough to have amongst our
three 2-by-2 squares the squares $[2s-2,2s-1]^2,$ $[2s-1,2s]^2$ and $[s-1,s]^2,$ then we can replace them by a single 3-by-3 square $[2s-2,2s]^2.$ This will give us the same set of flies, but the
efficiency will have gone up from 2 to 3 (since the sum of coefficients is now 1 instead of 3 and the contribution to the trace is now 3 instead of 6).
It’s just conceivable that we could get enough coincidences like this to push the value of $c$ down below 1/2. But I think we might need a computer to search for them.
• Thomas Sauvaget Says:
September 11, 2010 at 9:36 am
This is a great general idea, but unless mistaken I think the particular replacement you mention will not appear when adding Alec examples (since all the little 2-by-2 squares there are of the
form $[2x-1,2x]$ by construction, so one shall never have a $[2s-2;2s-1]$).
Perhaps one might try some reverse engineering, that is construct three examples (even without a low $c$ to start with) where one contains a $[s-1;s]$, a second a $[2s-1;2s]$ and a last a $[2s-2;
2s-1]$, in such a way that taking multiples allows to reproduce the trick again and again.
• gowers Says:
September 11, 2010 at 9:58 am
Oh yes. I think though that this can be dealt with by having some of the fly traps as inverted Ls and others as Ls. That is, some are of the form $[r,s]^2-[r,s-1]^2$ and others are of the form $[r,s]^2-[r+1,s]^2.$
I’m in general keen on the idea of trying to get product constructions to work, so I’d be interested to understand better what you mean by your reverse engineering suggestion.
• Thomas Sauvaget Says:
September 11, 2010 at 6:40 pm
What I meant is to design an example using your trick as a cornerstone. I think I finally managed to get it to work, the idea is to use something a bit reminiscent of telescoping series combined
with your trick (resulting in an amplification of a small disturbance, like the tricki idea “create an epsilon of room”).
Namely, at step $s$ we build a decomposition which has both the pair $[2s-2;2s-1]^2$, $[2s-1;2s]^2$ and the third one but shifted by 1, $[s+1;s+2]^2$. So when we add the step $s+1$ we get the
cancellation between the pair of step $s+1$ and the lone one of step $s$, i.e. we do the replacement you proposed and thus $c$ decreases a little bit. When done and we add step $s+2$ and do the
new replacement to decrease $c$ still a bit more, and so on.
Here is a drawing of an example which seems to work (it's a picture of step $s$, the flies are circled in green, and the traps in red). For instance at step 1 we have $c=\frac{7+21}{0+2}=14$,
then adding step 2 to it and replacing we get $c=\frac{(7+6)+(21+20)}{(0+2)+(2+2)}=9$. So this way one would get $c$ arbitrarily small. Of course this very much requires independent confirmation,
I may well have made an error again.
• Thomas Sauvaget Says:
September 11, 2010 at 7:05 pm
Typo in previous message: it should read “shifted by 1, $[s;s+1]^2$“.
• gowers Says:
September 11, 2010 at 8:41 pm
Thomas, I’m curious to understand your construction but I don’t. Would it be possible to say precisely what all the rectangles and coefficients are after, say, three steps of the construction?
• Alec Edgington Says:
September 11, 2010 at 9:26 pm
I find the general idea interesting. It seems that Tim is proposing a kind of transformation rule whereby we can replace any decomposition that includes $[s-1, s]^2$, $[2s-2, 2s-1]^2$ and $[2s-1, 2s]^2$ with the same decomposition with those three terms replaced by $[2s-2, 2s]^2$, and still have a valid decomposition with smaller $c$. For example, we can replace $[1,2]^2 + [2,3]^2 +
[3,4]^2 + \ldots$ by $[2,4]^2 + \ldots$.
Now there may be other such transformations. And we can always form a convex combination of decompositions that achieve $c$ (or better), to obtain a new decomposition that achieves $c$ (or better).
So, I wonder whether given a sufficiently rich supply of decompositions (not necessarily particularly ‘good’ ones), of unbounded support, and of transformations like Tim’s, we could apply the
transformations repeatedly, combined with taking convex combinations, to get better and better $c$.
I’d be interested to know whether there are other similar transformations, or whether this is a ‘one-off’ …
• Thomas Sauvaget Says:
September 11, 2010 at 10:58 pm
All pieces have the same coefficient except for signs, so that computing $c$ is done with the formula you derived earlier $c = \frac{\sum |\lambda_i|}{\sum \lambda_i |P_i|}=\frac{r+s}{\sum |R_i|-
\sum |S_i|}$, that is only the number of squares and rectangles and their sign and imbalance matter.
Here is a new picture with much better explanations (please disregard the previous messy one as I’ve changed color conventions): it shows what happens in three steps: first we have piece $s$
which is made of two parts, then we add to it piece $s+1$, and finally we apply your trick. I’ve corrected some errors from my previous message: in fact piece $s$ has $c=13$, and the addition of
piece $s$ and piece $s+1$ after the trick becomes $c=25/3=8.333...$.
(you can click on the image to see it full size in your browser)
• Thomas Sauvaget Says:
September 12, 2010 at 8:13 am
In fact the general formula after adding $n+1$ pieces and doing the trick each time is $c=\frac{25+25n}{2+4n}>\frac{25}{4}=6.25$, so it doesn't decrease arbitrarily as I thought, and my
construction fails blatantly.
So to exploit properly your replacement idea, one should find a construction which manages at each step to use it more often than one adds new squares & rectangles, so as to obtain something like
$c=\frac{a+bn}{c+dn}$ with $b<d$.
gowers Says:
September 11, 2010 at 11:30 am | Reply
There seems to be a basic snag with what I was suggesting. Alec’s construction relies on our taking a multiple of $k!$ so that we can pick off all the points in a fly trap of width $k.$ (It would be
sufficient, as he has noted elsewhere, to take the lowest common multiple of 1,2,…,k.) Then for each such multiple $N,$ he takes 2-by-2 squares near the numbers $N,N/2,N/3,\dots,N/k.$ Now we know
that mod $k!$ the number $N/j$ is congruent to $k!/j.$ It follows that these numbers are all well-separated mod $k.$ And from that it follows that the kinds of coincidences I was hoping for, where we
have pairs of numbers $N_1,N_2$ and numbers $j_1,j_2\leq k$ such that $N_1/j_1$ is very close to $N_2/j_2,$ simply don’t exist.
I’m not sure about this, but I think it may be possible to prove that a construction with fly traps and with squares that are mostly of size 3 cannot be made to work. The rough idea of the proof (I
have more of an idea of the proof than of the statement that it proves) is this. If you want fly traps of width $k$ and you want all your flies to be at distance either 1 or 2 from the diagonal
(defining the distance of $(x,y)$ to the diagonal to be $|x-y|$), then almost every $N$ used for a fly trap must be at a number such that $2N$ is divisible by almost every number from 1 to $k.$ But
that makes every $N$ divisible by half the l.c.m. of $\{1,2,\dots,k\}.$ Call that number $M.$ This means that if we have $N_1$ and $N_2,$ then the distance between $N_1/j_1$ and $N_2/j_2$ is at least
$M|1/j_1-1/j_2|$ if $j_1\ne j_2,$ and at least $M/j$ if $j_1=j_2=j$ and $N_1\ne N_2.$ So in all cases it is large.
What I think this proves is that the flies that are very close to the diagonal and that can be caught in highly smooth traps are well separated, and therefore cannot be covered efficiently by
squares. And if we try to deal with this by making the traps less smooth, then we will introduce lots of points that cannot catch any flies that are very close to the diagonal.
I might have thought that we were in serious trouble at this point, but I think the thickened-L idea from a couple of comments back could be helpful.
Jonathan Vos Post Says:
September 11, 2010 at 6:52 pm | Reply
“So in the end this post consists entirely of half-baked ideas that I’m pretty sure don’t work.”
Depending on the Topology of the Mathematical Ideas subspace of the “Ideocosm” [the space of all possible ideas], sometimes a 1/2 baked idea + a 1/4 baked idea a 1/8 baked idea a 1/16 baked idea … in
the limit becomes a complete and correct idea.
Jonathan Vos Post Says:
September 11, 2010 at 6:59 pm | Reply
Must have missed the “+” key from lack of coffee. Okay, if you permit me to try again:
Depending on the Topology of the Mathematical Ideas subspace of the “Ideocosm” [the space of all possible ideas], sometimes a 1/2 baked idea + a 1/4 baked idea + a 1/8 baked idea + a 1/16 baked idea
… in the limit becomes a complete and correct idea.
Or should I be defining a Selberg zeta function over all closed geodesics on dynamical systems on the Ideocosm for surfaces of constant curvature -1 in 1-to-1 correspondence with the periodic orbits
of the associated geodesic flow which takes place on the unit tangent bundle of the manifold?
gowers Says:
September 12, 2010 at 11:38 am | Reply
Here’s an idea for a general method of construction. But before I say what it is, I want to make an important general point, which is that there isn’t a hugely strong reason to believe that the
decompositions we are currently searching for actually exist. That is because in order to simplify the search we are placing an artificial constraint on ourselves: to consider only those HAP products
$P\otimes Q$ for which $P$ and $Q$ have the same common difference. Note that if this is the case, then we can divide them both through by the common difference, which is why we have been considering
progressions with common difference 1. Maybe now that we have looked pretty hard it would be an appropriate moment to allow ourselves the occasional more general product if it seems to help. The only
rule is that $P$ and $Q$ must be segments of homogeneous progressions, and WLOG their common differences are at least coprime. So for example, if we wanted to use $\{2s,2s+2\}\times\{2s-1,2s,2s+1\}$
we would be free to do so. Of course, if we allow ourselves extra flexibility, then we’ll have to experiment a bit before we get a feel for what the smallest “easy” value of $c$ is.
But for the rest of this comment I want to stick with progressions of common difference 1. The basic idea can be summarized as follows: why not try to find a construction made out of fly traps and
fly traps? That is, we could simply aim to take some linear combination of fly traps that cancels off the diagonal. (We know this is possible in non-trivial ways, since a square is itself a union of
consecutive fly traps.)
Now at first this is hopeless, since the efficiency of a fly trap is at most 1/2 (it adds 2 to the sum of coefficients and makes a difference of 1 to the trace), meaning that the best $c$ we can hope
for looks like being 2. And if the positive and negative coefficients are fairly well balanced, then the efficiency will be smaller still. So there are two improvements that would need to follow.
First, we would want the fly traps with negative coefficients to be quite a bit bigger than the ones with positive coefficients. That way, we would use fewer of them, so the trace would be
proportional to the total number of fly traps, and perhaps even (if the negative fly traps are much longer on average) almost equal to it. But that will get the efficiency to $1/2-\epsilon$ at best.
To go any further we need something else to happen: we want a significant fraction of the positive fly traps to be partitionable into intervals of length greater than 1. If they also drop to the same
point below the diagonal (which could be a rather awkward constraint to insist on — this is a potential problem with the suggestion), then they form thickened Ls and we can reduce the sum of coefficients.
This focuses attention on the following class of functions defined on rationals in $[0,1].$ Let $f_{n,k}$ be the characteristic function of the set $\{1,1-1/n,\dots,1-k/n\}.$ Can we find a linear
combination of these functions that cancels, or almost cancels (leaving us to mop up the error) in an interesting way? And can we do it with plenty of clusters of consecutive $n$s?
I find this a clearer formulation of the problem — not too far from being equivalent to what we were trying to do anyway — since it focuses attention very firmly on the number theoretic questions
that have to be solved. Also, one can simplify it a bit further by looking at $g_{n,k}(x)=f_{n,k}(1-x),$ which is the characteristic function of $\{0,1/n,\dots,k/n\}.$
Bearing in mind the first part of this comment, we would be just as happy if we had a cluster of $n$s that formed a HAP segment. And I think we also wouldn’t mind choosing characteristic functions of
sets such as $\{0,r/n,2r/n,\dots,kr/n\}.$
One other remark I’ll make briefly but I’ll try to work out the details and post them in another comment later. I think if we always take the same ratio (or almost the same ratio) $k/n=\alpha$ for
our functions $f_{n,k},$ then we’ll probably be doomed to failure, since then the functions will all have small inner product with the characteristic function of rationals less than $1-\alpha/2$
minus the characteristic function of rationals greater than or equal to $1-\alpha/2,$ whereas this is not true of the main diagonal. This is essentially the same point that was made before about
its not being possible to prove that there is high discrepancy on a “full” HAP, meaning that we fix $N$ and insist that the HAP consists of all multiples of some $k$ up to $k\lfloor N/k\rfloor.$
gowers Says:
September 12, 2010 at 3:25 pm | Reply
This comment is an attempt to understand when the existence of certain functions on $\mathbb{Q}_+$ gives us a proof that decompositions of a certain kind cannot work.
Suppose, then, that we think we have a linear combination $\sum_i\lambda_iS_i,$ where each $S_i$ is a square $P_i\times P_i,$$\sum_i|\lambda_i|\leq c\sum_i\lambda_i|P_i|,$ and the sum along
off-diagonal lines is zero. It follows that for any function $f$ defined on $\mathbb{N}^2$ that is constant along rays from the origin we have
$\sum_i\lambda_i\langle S_i,f\rangle = f(1,1)\sum_i\lambda_i|P_i|.$
Now $\langle S_i,f\rangle=\sum_{x,y\in S_i}f(x,y),$ so if we can come up with a function $f$ that has discrepancy at most $C$ on any square $S_i$ that we use in our decomposition, then the left hand
side is at most $C\sum_i|\lambda_i|.$ If in addition $f(1,1)=1,$ then that tells us that $C\sum_i|\lambda_i|\geq\sum_i\lambda_i|P_i|,$ which tells us that we cannot make $c$ any better than $1/C$
with this collection of squares.
In particular, if we could find a completely multiplicative $\pm 1$-valued function $g$ with HAP discrepancy at most $K,$ then we could define $f(x,y)=g(x)g(y),$ and we would have the property $f
(ax,ay)=f(x,y),$ together with the property $\sum_{(x,y)\in S_i}f(x,y)=(\sum_{x\in P_i}g(x))^2,$ so we would be able to take $C=K^2.$
We shouldn’t be too discouraged by this of course: it is trivial that if there is a completely multiplicative function of bounded discrepancy then our approach is doomed to failure. Where this kind
of observation might be useful is in demonstrating that certain even more restricted classes of decomposition cannot work.
For example, here is a slightly strange restriction that we cannot ignore. Let $\lambda$ be the Liouville function (that is, -1 to the power the number of prime factors). The above argument shows
that to obtain a decomposition powerful enough to prove EDP we will have to use intervals $P_i$ on which the sum of $\lambda$ is unbounded. This means that if we have the idea of some clever
construction using a particular huge and smooth $N,$ or something like that, we have to have a reason that the intervals that arise are ones on which $\lambda$ sometimes has a large sum. Since $\lambda$ behaves in a fairly random way, especially on small scales, this looks like quite a hard thing to do explicitly. I think it more or less forces us to use more global arguments where there are
so many intervals around that on average we are pretty sure that $\lambda$ (or any other multiplicative function) will have big discrepancy on some of them.
Ultimately, we won’t be trying to verify that multiplicative functions have large discrepancy on our collection of intervals. Rather, it will go the other way round — the existence of a clever
decomposition will prove that this is the case. But it provides a plausibility check: if there aren’t very many intervals about and they are all quite small, and if there is enough freedom in the
construction that we can regard where they are as somewhat random, then there is no reason for them not all to be intervals on which $\lambda$ has small discrepancy, so the attempt is almost
certainly doomed to failure.
Now let me see whether the remark at the end of my previous comment is correct. Suppose we try to find a decomposition that is built out of squares $[r,s]^2$ that all have the property that $s=\lfloor\alpha r\rfloor$ for some fixed $\alpha>1.$ (Since a fly trap is a difference of two squares, this applies to fly traps too.) An obvious function that might have small discrepancy on all of
those squares is one where you choose $\beta$ such that exactly half of the square $[1,\alpha]^2$ (now considered as a square in $\mathbb{R}^2$) consists of points $(x,y)$ with $\beta^{-1}x\leq y\leq
\beta x.$ Then one would expect the discrepancy on the integer approximations to these squares to be small too. What I don’t know at the time of writing is whether this discrepancy would be bounded
or whether it would grow linearly. Hmm … or perhaps I do. It looks as though there would be a linear error arising from when the squares suddenly increase in width by 1 (in the integer case). So
perhaps we can get away with squares of this form.
Incidentally, I forgot a condition: we want to insist that $f$ is symmetric in $x$ and $y,$ since we are insisting on the same for the decomposition.
gowers Says:
September 12, 2010 at 3:39 pm | Reply
As an immediate application of the criterion in the previous comment, we can say something about the approach suggested in the comment before that. I suggested that it might be a problem if we always
tried to take $k=\lfloor\alpha n\rfloor$ for our functions $f_{n,k}.$ And that is indeed the case, since if we define $\phi(x,y)$ to be 1 if $y\leq\alpha x/2$ or $x\leq\alpha y/2,$ and -1 otherwise,
then $\phi$ will be 1 on approximately half of each fly trap and -1 on approximately half. This will make the discrepancy bounded on each fly trap (because we’ve fixed $\alpha$) and will cause the
approach to fail.
What this shows is that if we build a decomposition entirely out of fly traps, then we will be forced to use many different shapes of fly trap (where I am defining the shape to be the approximate
ratio of width to height). I would dearly like to understand how differing shapes can be of any help. Perhaps it’s just for the simple reason discussed earlier: that we want the bottoms of the fly
traps to be at the same level so that we don’t waste coefficients dealing with the small errors that would otherwise appear there.
gowers Says:
September 13, 2010 at 9:07 am | Reply
In this comment I’m going to try to find some interesting linear dependences amongst sets of the form $\{1/n,2/n,\dots,k/n\}.$
Alec’s example corresponds to this. We take $n=k!,$ and then each $n/j$ is an integer, so we can add up the singletons $\{j/n\}$ and subtract the set $\{1/n,\dots,k/n\}.$
As an experiment, let me try taking $n=126$ and $k=10.$ Writing the fractions in their lowest terms we get
$\{1/126,\ 1/63,\ 1/42,\ 2/63,\ 5/126,\ 1/21,\ 1/18,\ 4/63,\ 1/14,\ 5/63\}.$
If we subtract off the set $\{1/63,2/63,3/63,4/63,5/63\}$ we get the set
$\{1/126,\ 1/42,\ 5/126,\ 1/18,\ 1/14\}.$
Apart from the $5/126$ we can write this as a sum of singletons. And we can represent $\{5/126\}$ as the difference between the multiples up to 5 and the multiples up to 4.
Translating back into 2×2 squares and fly traps this gives a decomposition with $c=3/2.$ Let me explain how I worked that out. Each AP of length more than 1 corresponds to a fly trap so it costs 2.
For instance, the initial one consisting of the first ten multiples of 1/126 corresponds to the fly trap $L_{126,10}.$ A singleton of the form $1/r$ can be realized as the 2×2 square $[r-1,r]^2.$ And
a singleton that is not of this form can be represented as a difference of two APs, so has a cost of 4. We have two APs, four reciprocals and one stray rational 5/126, so the total cost comes to
$2+2+4+4=12.$ The sum along the diagonal is 2 for each 2×2 square, 0 for the difference of two fly traps that gives us 5/126, and the main two fly traps have coefficients of 1 and -1, so the total
comes to 8.
gowers Says:
September 13, 2010 at 12:41 pm | Reply
I seem to keep running into rather similar difficulties when I try to arrange for plenty of coincidences and efficiency gains. I want to see if I can understand these difficulties in a more precise
way, by which I mean show that there are various requirements that hugely restrict the possibilities for any decomposition. At best, this would lead to more efficient ways of searching for decompositions.
If we stick with the idea of making everything out of fly traps and then sticking some of the fly traps together to save on the cost of coefficients, then in broad terms what we need is this. In
order for $\sum_i\lambda_i|P_i|$ to be big compared with $\sum_i\lambda_i,$ it is important that the sizes of the $P_i$ for which $\lambda_i$ is negative should be generally somewhat larger than the
sizes of the $P_i$ for which $\lambda_i$ is positive. It is also vital that there should be very few points with coprime coordinates in the second half of the decomposition, since it is impossible to
cancel these out efficiently. This seems to tell us that the heights of the higher fly traps should be very smooth.
Let me try to quantify that last remark. For simplicity, let’s suppose that our combination of fly traps takes the form $\sum_iL_i-\sum_jM_j,$ where all of the $L_i$ and $M_j$ are fly traps. And
suppose that we have managed to do this in such a way that the $M_j$ are on average bigger than the $L_i,$ sufficiently so for the trace to be proportional to the number of $L_i.$ And let’s suppose
that the $M_j$ contain $t$ points with coprime coordinates. Each of these points has to be cancelled out in a very crude way, at a cost of 4, so $4t$ needs to be small compared with the number of $L_i.$
Another constraint, which is quite hard to reconcile with the smoothness, is that we want there to be many consecutive heights. Combining that with the cancellation we need, that tells us that we
want many heights to have factors that are very close to each other.
I can feel this comment getting less and less precise, so I’m going to abort it and try again when I’m clearer in my mind about what it is I want to say.
gowers Says:
September 13, 2010 at 7:11 pm | Reply
I’ve thought of a different way of trying to understand why the problem we are now thinking about is a difficult one. I more or less said it above, but now I think I have a clearer understanding of
it. Actually, looking back, it seems that in this comment I came close to saying what I’m about to say here, but didn’t quite say it.
Anyhow, I just want to make the simple observation that if the current approach succeeds, then it implies a rather strong looking discrepancy result, though quite how strong is something I’m still
not clear about. The observation is this. Suppose we can find a $\pm 1$-valued function $\phi$ on $\mathbb{N}^2$ that is constant along rays (that is, $\phi(ax,ay)$ is always equal to $\phi(x,y)$)
and that has discrepancy bounded above by $C$ on all squares $[r,s]^2.$ Then we cannot get a decomposition with $c<1/C.$ I gave the proof in the previous comment, but I then concentrated on functions
of the form $\phi(x,y)=g(x)g(y)$ where $g$ is completely multiplicative. However, it may be helpful to think about more general functions.
For example, it occurred to me that if we could find a function $\phi$ that is constant along rays and is such that for every $r$ and $s$ the sum of $\phi$ along each row and each column of the 2-by-2 square $[2r-1,2r]\times[2s-1,2s]$ is zero,
then our approach would fail, since such a function has discrepancy at most 4 on any square. (This can be checked.)
However, it is quite easy to prove that no such function exists. The most conceptual way to see it is to observe that the entire function is determined by $\phi(1,1).$ I’ll illustrate that with an
example: $\phi(5,8)=-\phi(6,8)=-\phi(3,4)=\phi(4,4)=\phi(1,1).$ One can show that $\phi(x,y)$ is a sum of a Morse-like function in $x$ and a Morse-like function in $y.$ And such a function has no
reason to be constant along rays — and indeed isn’t. The first place a problem shows up is this:
$\phi(6,9)=\phi(2,3)=\phi(1,1)$ and $\phi(6,10)=\phi(3,5)=\phi(1,1).$ But we need $\phi(6,9)=-\phi(6,10).$
However, we could play a similar game with 4-by-4 squares, looking for a function $\phi$ that sums to zero along every row and every column within each one of these squares. That allows a discrepancy
of up to 16. Now we no longer have the situation where choosing the value at one place in a square forces all the other values — there are a number of different ways of getting all the row and column
sums to be zero. So now it suddenly looks much much harder to prove that we cannot find such a function.
An additional very important moral of this is that if we use a restricted class of squares, such as squares that have a very small side length compared with how far up they are, then this discrepancy
result has to hold for that class of squares. Indeed, if you’ve got a collection of squares and want to prove a lower bound for the $c$ that you can use it to obtain, then a good way is to choose a
suitable function $\phi.$ For instance, for Alec’s example one can choose $\phi$ to be 1 when $x=y$ and 0 otherwise. This has discrepancy 2 on all 2-by-2 rectangles and 1 on all fly traps, so we see
immediately that it cannot do better than $c=1/2$ (though of course this is also easy to see directly).
I’m fairly sure that this line of thought will make it very clear how the different squares must interact with each other if there is to be any hope of their beating the $c=1/2$ bound. Basically, it
is essential that there are lots of rays that intersect lots of squares.
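If anyone wants to experiment with this, here is a small Python sketch (my own code, with made-up names) that computes the largest square sum of a given function; the example $\phi(x,y)=g(x)g(y)$ with $g$ completely multiplicative and $\pm 1$-valued is constant on rays, as discussed above, and its sum over $[r,s]^2$ is just $(g(r)+\cdots+g(s))^2.$

def square_discrepancy(phi, N):
    """Largest |sum of phi over a square [r,s]^2| over 1 <= r <= s <= N."""
    worst = 0
    for r in range(1, N + 1):
        total = 0
        for s in range(r, N + 1):
            # grow [r,s-1]^2 to [r,s]^2 by adding the new row and column
            total += phi(s, s) + sum(phi(s, x) + phi(x, s) for x in range(r, s))
            worst = max(worst, abs(total))
    return worst

def g(n):
    """A completely multiplicative +/-1 function: g(p) = -1 for primes
    p = 2 (mod 3) and g(p) = +1 otherwise (so in particular g(3) = +1)."""
    v, p = 1, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            if p % 3 == 2:
                v = -v
        p += 1
    if n > 1 and n % 3 == 2:
        v = -v
    return v

print(square_discrepancy(lambda x, y: g(x) * g(y), 100))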
• Alec Edgington Says:
September 13, 2010 at 8:53 pm
That’s a nice clarification of the problem.
Do we gain anything by insisting that $\phi$ be a $\pm 1$-valued function? It looks to me as if any real-valued function will do. (Indeed the example you give at the end is a $\{0, 1\}$-valued function.)
In your earlier comment, you wrote:
$\langle S_i,f\rangle=(\sum_{x,y\in P_i}f(x,y))^2$.
Did you mean
$\langle S_i,f\rangle=\sum_{x,y\in P_i}f(x,y)$?
• gowers Says:
September 13, 2010 at 9:42 pm
I concentrated on $\pm 1$ functions just because they’re quite nice to think about, but if we wanted to investigate the problem computationally, then it would probably be easier to look at more
general functions, since then we’d have a linear program rather than an integer program. I allowed myself 0 to analyse your example because it didn’t seem to be that easy to do so without it,
which does confirm that the extra flexibility could be useful. I’ve corrected the earlier comment, which was indeed wrong (though the correction was slightly different because I needed to sum
over $S_i$).
If we do stick with $\pm 1$-valued functions, then we can formulate quite a nice combinatorial problem. Suppose we have a bunch of squares and we want to prove that no combination of them will
give us a small value of $c.$ Then we can do it by finding a $\pm 1$-valued function that’s constant on rays and that has small discrepancy on every square.
The examples we’ve been looking at have tended to have rather small squares, so let us make the additional assumption that, apart from on the main diagonal, no square contains more than one point
on any ray.
Now let’s form a set system. Its vertices are the rays, and the sets are given by the squares. That is, if $S$ is one of our squares, we associate with $S$ the set of all rays that pass through
$S$ (or equivalently the function $S/S,$ which because of our assumption takes values in $\{0,1\}$ except at $1$ where it takes the value $|S|$). We now want our set system to be sufficiently
“dense” that it has a high discrepancy.
This is a necessary condition, but it’s a pretty strong necessary condition that should hugely narrow down the search for a construction. Indeed, it could even narrow it down so much that we were
able to prove that there was no such set system, which would force us to consider HAP products with different common differences.
Now let’s narrow things down further and suppose that each ray that hits any squares at all hits exactly two squares (one with a coefficient of 1 and the other with a coefficient of -1). This
tells us that in our set system each point is contained in precisely two sets. So an obvious preliminary question is whether we can think of any set system at all with the property that each
point is contained in two sets and the discrepancy is at least, say, 10.
If each point is contained in two sets, we can form a graph: its vertices are the sets and its edges are the points, so if $x$ belongs to $S$ and $T$ then we regard $S$ and $T$ as joined and $x$
as the name of the edge joining $S$ to $T.$ Now we are trying to find a $\pm 1$-valued function defined on the edges that is well-balanced at every vertex. Or rather, we want a graph where that
cannot happen.
So now we have a problem that I rather like: is it true that for every constant $C$ there is a finite graph such that, however you 2-colour the edges, there must be a vertex such that the number
of blue edges coming out of that vertex differs from the number of red edges coming out of that vertex by at least $C$?
I’ll have a little think about this, but I’ll also post it now in case anyone else wants to think about it. It doesn’t seem as though it should be hard, but I don’t immediately see how to do it.
I hope the answer will be that there does exist such a graph and that we can make it pretty sparse. If it turns out to be hard, then I’ll also post a question on Mathoverflow, but I won’t do that
for the time being.
• gowers Says:
September 13, 2010 at 11:28 pm
A small remark about that last problem: if we can partition the edges of the graph into cycles of even length, then we can just colour the edges alternately red and blue along each cycle and we
will have a discrepancy of zero on the edges at each vertex. In particular, this happens if the graph is bipartite and every vertex has even degree.
Also, if we have something like an expander of degree $d,$ then we can probably partition it into a bounded number of cycles. (I don’t know any theorem that actually says that we can, however.)
Then we could do something similar, colouring edges alternately along the cycles. If a cycle was odd, we would have to have two consecutive edges of the same colour, but if it was long, then we
would have a huge amount of choice about where to put the one bad vertex. So presumably we could have at most one bit of badness at each vertex, which would give us a discrepancy of at most 2.
So it seems that we want a graph that cannot be partitioned into even cycles and a few long odd cycles. It isn’t obvious to me how to construct such a graph.
But actually I’m starting to think that this problem is not as relevant to EDP as I thought. After all, if we have a system of squares where each ray hits exactly two squares, then the resulting
graph must be bipartite for there to be any chance of a decomposition. (Proof: the coefficient of a square must be minus the coefficient of any neighbouring square, so the signs of the
coefficients give you your bipartition.)
And now I think I have a trivial solution to the problem anyway. Consider the following algorithm for partitioning the edge set. We’ll construct a trail (that is, path that’s allowed to visit
vertices more than once, but we are not allowed to revisit edges) and make it maximal in both directions. We do this by simply extending the trail however we like until we get stuck. How do we
get stuck? The only way is if we arrive at a vertex and find that there are no unused edges leaving that vertex.
Having constructed a maximal trail, we colour its edges alternately. This contributes zero to the discrepancy of all vertices except those at the two ends of the trail, where it contributes 1 (or
possibly 0 or 2 if those two ends happen to be at the same vertex). But a very important point is that if we cannot extend the trail any further, then there are no edges left at the vertex that
has a discrepancy of up to 2, so if we now build some new trails, we will not contribute any further to its discrepancy.
So to continue, we simply remove all edges from the first maximal trail and start again. In that way we find a colouring of the edges such that at each vertex the discrepancy is at most 2.
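For what it's worth, here is a short Python sketch of that procedure (the names are mine): it decomposes the edge set into trails that are maximal in both directions and colours each one alternately, which keeps the imbalance at every vertex down to at most 2.

from collections import defaultdict

def colour_edges(edges):
    """Colour the edges of a (multi)graph +1/-1 by decomposing the edge set
    into trails that are maximal in both directions and colouring each trail
    alternately; every vertex then has |#(+1 edges) - #(-1 edges)| <= 2."""
    unused = defaultdict(set)                 # vertex -> indices of unused edges
    for i, (u, v) in enumerate(edges):
        unused[u].add(i)
        unused[v].add(i)

    def other_end(i, v):
        a, b = edges[i]
        return b if v == a else a

    def extend(v):
        """Walk from v along unused edges until stuck; return the edges used."""
        trail = []
        while unused[v]:
            i = unused[v].pop()
            w = other_end(i, v)
            unused[w].discard(i)
            trail.append(i)
            v = w
        return trail

    colour = {}
    for u, _ in edges:
        while unused[u]:
            forward = extend(u)               # maximal in one direction ...
            backward = extend(u)              # ... then maximal in the other
            trail = backward[::-1] + forward
            for position, i in enumerate(trail):
                colour[i] = 1 if position % 2 == 0 else -1
    return colour

# e.g. colour_edges([(0, 1), (1, 2), (2, 0), (0, 3), (3, 4)]) returns a dict
# from edge index to +/-1 with every vertex balanced to within 2.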
The moral of this, which is quite interesting actually, is that to find a decomposition of the kind we want, we must have at least one ray that meets more than two squares, from which it is
reasonable to conclude that we will actually want several rays that meet quite a bit more than two squares.
• Alec Edgington Says:
September 14, 2010 at 6:37 am
It occurs to me that the problem of constructing a function $\phi : \mathbb{N}^2 \to \pm 1$, constant on rays, with low discrepancy on squares of the form $P^2$ where $P$ is an interval feels
considerably easier than the problem of constructing multiplicative $\pm 1$-valued functions on $\mathbb{N}$ with low discrepancy on intervals (which we think is more or less equivalent to EDP).
The reason is that for the latter problem, if we want to construct our function by assigning values to integers in order, we only have freedom to choose at primes, which are vanishingly rare as
we go up. But for the two-dimensional problem, if we want to construct our function by assigning values in some moderately natural order, we have freedom to choose at all coprime pairs, which are
relatively abundant – indeed, I think more than half of pairs will tend to be coprime. Given such a degree of freedom, it wouldn’t surprise me if it were quite easy (at least for a computer) to
just write down a suitable function $\phi$ with discrepancy 2 on squared intervals.
• gowers Says:
September 14, 2010 at 8:13 am
That’s certainly a tempting conclusion, and it may be correct. It would be quite interesting to get some idea of how easy the computer finds it, and perhaps we could prove quite quickly that any
example with $c<1/2$ would need $N$ to be at least 1000 or something.
But it also seems to me conceivable that we would find that there were little clusters of smoothness — that is, rectangles that contain very few coprime pairs — that made the task difficult. It
might even be that if we ran a program to find a function with small discrepancy on squared intervals, we could examine where it found itself doing a lot of backtracking, use that to identify a
collection of points, and then search for a decomposition based on small squares that contain those points. This seems to me to be very much worth trying, as it could be an efficient method of
searching for decompositions with significantly larger $N.$
I was about to write “squares that contain very few coprime pairs” above, when it occurred to me that the fly trap idea is more or less forced on us because a square that sits on the diagonal
(that is, one of the form $[r,s]^2$) contains $s-r$ points of the form $(x,x+1)$ (not to mention points of the form $(2x-1,2x+1),$ $(3x-1,3x+2),$ $(3x-2,3x+1),$ etc.). So we have to go for long and thin rectangles.
I’m a bit anxious about the fact that we need a collection of squares such that many rays meet them more than twice. Even the one-dimensional consequences of this seems hard: we appear to need
something like a collection of intervals such that almost every HAP that intersects any one of the intervals intersects at least two of them, and several HAPs intersect more than two of them. And
we want the average lengths of these intervals to be significantly greater than 1. That could be another way of narrowing down the search when $N$ is large: first search for some good $x$
coordinates, and then search for decompositions that are largely based on those $x$ coordinates.
• Klas Markström Says:
September 14, 2010 at 8:35 am
The graph colouring problem is actually a variant of something which I have used in a paper on hard SAT-problems.
Given a connected graph with all degrees even we can find an Eulerian walk on the graph, and colour the edges alternately along that path. If there is an even number of edges this gives
discrepancy 0 and if their number is odd we get discrepancy 1. An old theorem of Kotzig shows that this construction is optimal on all Eulerian graphs.
If there are vertices of odd degree we can add a matching among the odd degree vertices and use the eulerian walk construction on the new graph. By deleting the new edges we raise the discrepancy
by at most 1. If there were vertices of even degree in the original graph we can avoid raising the discrepancy above 1 by starting the walk in a vertex of even degree.
• Alec Edgington Says:
September 14, 2010 at 9:17 pm
I’ve done a simple experiment to search for functions $\phi : \mathbb{N}^2 \to \pm 1$, constant on rays, with low discrepancy on rectangles of the form $[1,m] \times [1,n]$. (This is enough to
give us four times the discrepancy on all rectangles, and is somewhat easier to work with.) The experiment has shown that it is at least not totally trivial. That is, if we step along diagonals
of constant sum, in order ((1,1), (2,1), (3,1), (2,2), (4,1), (3,2), (5,1), (4,2), (3,3), (6,1), …), and we assume that the function is symmetric, and whenever we reach a coprime pair $(a,b)$ we
assign it a value that minimizes $\lvert \sum_{i \leq a} \sum_{j \leq b} \phi(i,j) \rvert$ (choosing positive values when the existing terms sum to zero), then the discrepancy goes up to at least
7 (we hit 7 at the point (143,13)).
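Here is a rough Python sketch of that greedy procedure (my own reconstruction, with made-up names — Alec's actual program is no doubt much more efficient, since this version recomputes every rectangle sum from scratch).

from math import gcd

def ray_value(phi, x, y):
    """phi is stored only on coprime pairs (a,b) with a >= b; extend it to all
    of N^2 by symmetry and by constancy on rays."""
    g = gcd(x, y)
    x, y = x // g, y // g
    return phi[(x, y)] if x >= y else phi[(y, x)]

def greedy_phi(N):
    """Visit coprime pairs (a,b), a >= b, along anti-diagonals a + b = 2, 3, ...
    and give each the sign that minimises |sum of phi over [1,a] x [1,b]|,
    preferring +1 on a tie."""
    phi = {(1, 1): 1}
    for s in range(3, 2 * N + 1):
        for a in range(min(N, s - 1), (s - 1) // 2, -1):
            b = s - a
            if gcd(a, b) != 1:
                continue
            t = sum(ray_value(phi, i, j)
                    for i in range(1, a + 1) for j in range(1, b + 1)
                    if (i, j) != (a, b))
            phi[(a, b)] = -1 if t > 0 else 1     # tie (t == 0) -> +1
    return phi

def corner_discrepancy(phi, N):
    """Largest |sum of phi over [1,m] x [1,n]| for m, n <= N (prefix sums)."""
    best, prev = 0, [0] * (N + 1)                # prev[n] = sum over [1,m-1]x[1,n]
    for m in range(1, N + 1):
        row = 0
        for n in range(1, N + 1):
            row += ray_value(phi, m, n)          # sum over {m} x [1,n]
            best = max(best, abs(prev[n] + row))
            prev[n] += row
    return best

phi = greedy_phi(60)
print(corner_discrepancy(phi, 60))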
gowers Says:
September 14, 2010 at 10:43 pm | Reply
I’m going to think briefly about the 3-uniform hypergraph version of the graph problem I mentioned above that turned out to be easy and to have the opposite answer to the one I had hoped for.
The question becomes this. Does there exist $C$ such that, given any collection of sets of size 3, it is always possible to colour them red and blue in such a way that for every point the number of
blue sets containing that point and the number of red sets containing that point differ by at most $C$?
Let us call the points vertices and the 3-sets edges. (This is standard hypergraph terminology.) And let us define the discrepancy of a vertex to be the difference between the red degree of that
vertex (meaning the number of red edges containing it) and the blue degree. Finally, let us define the edge discrepancy of the hypergraph to be the smallest we can make the maximum discrepancy at a
vertex. (I’m calling it edge discrepancy because it is the edges that are coloured. The discrepancy of a hypergraph standardly refers to vertex colourings and minimizing the discrepancy over all edges.)
So now we are asking whether the edge-discrepancy of a 3-uniform hypergraph is bounded by some constant $C.$ This is a completely different problem from the graph case because the greedy algorithm we
used there fails completely. Indeed, if you colour an edge red and then try to guarantee that you haven’t done too much damage, then you will typically have to colour three “neighbouring” edges blue,
and then each of those will have two further neighbouring edges that need to be coloured red, and so on, expanding out into a 3-regular graph. Unlike a path, where there is trouble only at the end
points, a 3-regular graph can grow at an exponential rate, which means that a positive proportion of its vertices could in theory be troublesome ones.
That leads me to suspect that the answer is going to be that there are 3-uniform hypergraphs with unbounded discrepancy. (Whether we could realize such a hypergraph in terms of squares and flytraps
and rays that pass through them is quite another matter of course.) The obvious thing to try would be a fairly sparse random 3-uniform hypergraph. That would give us the kind of exponential growth
that we expect to cause trouble.
It occurs to me that one might try to prove a lower bound on the discrepancy by the sorts of methods we are already using for EDP. That is, one might try to find a decomposition of the identity. Or
perhaps one could work the other way round and try to build a hypergraph with a decomposition of the identity in mind.
Let me say in more detail what I mean by that. As a matter of fact, it’s quite simple now that I come to think of it. Let us call the set of edges that contain a vertex a star (because that is what
we would call it in the graphs case). We are trying to find a hypergraph such that the set of stars has unbounded discrepancy. Now each edge is contained in exactly three stars, so this is equivalent
to finding a set system with unbounded discrepancy such that each point is in exactly three sets. So we’d like to find an efficient decomposition of the identity as a linear combination of products
of sets with the property that no point is in more than three of the sets we use.
I don’t immediately see how to do that, but I’d be quite surprised if it wasn’t possible.
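To make the question concrete, here is a tiny brute-force experiment one could run (my own sketch; it only copes with a handful of edges, whereas any interesting example is presumably far larger).

import random
from itertools import product, combinations

def edge_discrepancy(vertices, edges):
    """Exact edge-discrepancy of a hypergraph by brute force over all 2^|edges|
    colourings (so only for very small examples): the minimum, over +/-1
    colourings of the edges, of the maximum over vertices of
    |sum of the colours of the edges containing that vertex|."""
    best = None
    for colouring in product((1, -1), repeat=len(edges)):
        worst = max(abs(sum(c for e, c in zip(edges, colouring) if v in e))
                    for v in vertices)
        best = worst if best is None else min(best, worst)
    return best

# e.g. a small random sparse 3-uniform hypergraph
random.seed(1)
V = list(range(9))
E = random.sample(list(combinations(V, 3)), 12)
print(edge_discrepancy(V, E))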
• Klas Markström Says:
September 14, 2010 at 11:12 pm
A quick comment before closing down for the evening.
If the hypergraph is too sparse then the local lemma should provide a good colouring.
• gowers Says:
September 15, 2010 at 8:32 am
Ah, that’s a very helpful remark.
• gowers Says:
September 15, 2010 at 9:23 am
Actually, having thought a little bit further I’m starting to wonder whether that is actually the case. If you randomly colour, then even the individual events you are trying to make happen will
have quite low probability: if you want the discrepancy on a set of size $k$ to be at most $5,$ say, then you’ll get a probability proportional to $k^{-1/2}.$ But I thought that for LLL you
needed the sum of the failure probabilities of an event and all the other events on which it depends to be less than 1.
• Klas Markström Says:
September 15, 2010 at 10:24 am
Yes the meaning of “too sparse” will definitely depend on the value of discrepancy we are considering. Getting things perfectly balanced will be difficult unless the graph is very sparse.
A straightforward application of the LLL should give a bound on the discrepancy proportional to $k^{1/2}$, since, roughly, we need $e(2k+1)e^{-k^2}$ to be smaller than 1.
• Klas Markström Says:
September 15, 2010 at 10:29 am
Sorry, the “e^{-k^2}” should be the probability that the binomial random variable deviates more than $k^{1/2}$ from $k/2$.
Klas Markström Says:
September 15, 2010 at 8:35 am | Reply
I have now added a solutions file, called data-file-rectangular, for the case where general rectangles, rather than only squares, are allowed.
A quick inspection shows that the structure of the solutions has changed into a collection of small squares and narrow “bands”.
Alec, could you modify the program to use only rectangles with area at most K?
I think I can make the modification but I'm not comfortable with modifying someone else's code without having read enough of it to know that there are no unexpected side effects of the modifications.
• gowers Says:
September 15, 2010 at 11:35 am
A quick remark about this. From the point of view of proving EDP, rectangles are just as good as squares, since the inequality $(\sum_{x\in P}f(x))(\sum_{y\in Q}f(y))\geq C$ implies a discrepancy
of $C^{1/2}$ just as well if $P\ne Q$ as if $P=Q.$ Since we are trying to optimize the discrepancy for what in the context of the problem are very small values of $N,$ this probably means that the
rectangle discrepancy gives us a better idea of what is going on. The reason we looked at square discrepancy was to try to reduce the number of variables. If looking for rectangles becomes
prohibitively slow, then there is a compromise we might try, which is to look for arrangements of squares but to evaluate them differently, so that, for instance, a fly trap would have a cost of
1 rather than 2. In fact, that’s not quite correct. If we had a fly trap we would probably replace it by two rectangles of width 1 that overlapped on the diagonal, so we’d actually get a trace of
2 for a cost of 1 rather than a trace of 1 for a cost of 2.
It would be quite interesting to see whether the “trivial” bound changes when one adopts this point of view. I think perhaps it doesn’t, as 2-by-2 squares have a trace of 2 for a cost of 1, and
the extra flexibility afforded by rectangles doesn’t seem to improve that. But if that’s the case, then it is good news as it increases the chances of finding an arrangement that beats the $c=1/
2$ bound.
• Klas Markström Says:
September 16, 2010 at 7:28 pm
Earlier today I made the modifications needed in order to only use rectangles with a bounded area. Here area is really area, so a “2×2″ square has area 1.
I have added two solutions files data-file-rectangular-area-9 and data-file-rectangular-area-25
The solutions reached at the end of the area 9 run stand out in that, up to signs, they really have only two different coefficients, 1/36 and 1/18, and are still made up of 2×2 squares and
rectangles of width 1.
• Klas Markström Says:
September 20, 2010 at 3:32 pm
I have made one more solutions file, now with bounded area and no $1xK$ strips for K greater than 20. The restrictions are needed to keep the memory use down.
The file is “data-file-rectangular-area-4-length20″ in http://abel.math.umu.se/~klasm/Data/EDP/DECOMP/
The best solution in this class for N=66 is better than the best symmetric solution for N=840, and has only coefficient 1/12, with positive 2×2-squares and a single negative 1xK strip. However
unless I am missing something, a solution with that structure can never take us below C=1/2.
Alastair Irving Says:
September 15, 2010 at 4:58 pm | Reply
I’ve just written some code to solve the linear programming problem which corresponds to finding a real-valued function $\phi$ on $[1,N]^2$ which is constant along rays, has $\phi(1,1)=1$ and which
has maximum discrepancy over squares as small as possible.
So far, the results correspond precisely with the results for the problem of finding the decomposition with the smallest sum of coefficients. Namely, for $N<12$ we get discrepancy $1$ , which then
goes up to $7/6$ and then at $N=42$ this goes up to $13/11$.
I suspect the two problems may be identical. For the decomposition problem we wish to minimise $\|x\|_1$ constrained by $Mx=(1,0,0,\ldots)$. For the discrepancy problem we wish to minimise $\|M^Tx\|_\infty$ subject to $x(1)=1$. Considering the numerical evidence I think there might be some sort of duality argument to relate the two problems.
This is my first post so I apologise if any of the LaTeX doesn't come out right.
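For concreteness, here is one way the decomposition problem could be set up as a linear program today (a sketch in Python using scipy; the construction of the matrix $M$ from a chosen family of squares is left out, and all the names are mine).

import numpy as np
from scipy.optimize import linprog

def min_l1_decomposition(M):
    """Solve  min ||x||_1  subject to  M x = e_1  as a linear program, by
    writing x = u - v with u, v >= 0.  Here M is the (rays x sets) matrix:
    the entry in row q, column i counts the pairs in the i-th square whose
    ratio is the rational q, with the ratio 1 (the diagonal) indexed first.
    The optimal value is the smallest possible sum of |coefficients|."""
    m, n = M.shape
    target = np.zeros(m)
    target[0] = 1.0
    c = np.ones(2 * n)                      # minimise sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([M, -M])               # M u - M v = e_1
    res = linprog(c, A_eq=A_eq, b_eq=target,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.fun, res.x[:n] - res.x[n:]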
• gowers Says:
September 15, 2010 at 5:10 pm
They are indeed the same problem — that was the motivation behind the discrepancy version. Do you have a sense of which is easier computationally? From a theoretical point of view it seems better
to look for decompositions, because the discrepancy result would be strictly stronger than the statement that completely multiplicative functions have unbounded discrepancy, which we don’t know
how to prove. But I am interested in the possibility of using the linear programming problem as a way of searching for good decompositions, especially if that turns out to be more efficient than
searching for them directly.
gowers Says:
September 15, 2010 at 6:49 pm | Reply
Here’s a slightly strange observation, that’s either wrong or quite encouraging.
Suppose we are trying to find $\pm 1$ values along the rays in such a way that the discrepancy on all rectangles is bounded. Then in particular this applies to rectangles of width 1. Now consider the
rectangle of width 1 and height $m$ that consists of all points $(m!,j),$ where $j$ runs from $1$ to $m.$ Then the sum of the values on that rectangle must be bounded. Next, consider the rectangle that consists
of all points $(m!/2,j)$ where $j$ runs from $1$ to $\lfloor m/2\rfloor.$ The values along here must be bounded too. Actually, I should say more: the partial sums as you go up the rectangle are
bounded. But these are equal to the partial sums for the even $j$ in the first rectangle. More generally, we find that the sums along all HAPs of $j$s in the first rectangle have to be bounded. So if
EDP is true then we can’t find a $\pm 1$ function that’s constant along rays and of bounded discrepancy on all rectangles.
This doesn’t quite prove that we can find a decomposition because we have to allow more general functions. I haven’t yet thought about …
Let’s just see what happens if we define $f(x,y)=0$ if either $x$ or $y$ is equal to $0$ mod 3, $1$ if $x\equiv y$ mod 3, and $-1$ if $x\not\equiv y$ mod 3. Is this constant on rays? Yes. Oh dear, it
seems to have bounded discrepancy on all rectangles.
Phew, that got me worried, but I’ve just realized that it’s NOT constant along rays. So after that moment of madness I’ll continue the interrupted sentence.
… what happens if you do this.
• Alec Edgington Says:
September 15, 2010 at 7:30 pm
Ah, that’s a good observation. My gut feeling was completely wrong (assuming EDP, of course)!
I suppose the next thing to think about is what happens if we allow $\pm 1$ and zero.
Alastair, could you post one or two of your solutions? It would be interesting to see how sparse the nonzero terms are.
• gowers Says:
September 15, 2010 at 8:39 pm
I find this a nice problem. At first it seems as though having 1s down the diagonal is an extremely weak condition, but then one realizes that if the discrepancy on every rectangle has to be
bounded, which is equivalent (if we assume symmetry) to the discrepancy on every square $[r,s]^2$ being bounded, then there must be a lot of -1s near the diagonal. And those imply a lot of -1s
further away from the diagonal, which in turn force more +1s, and so on. At the moment, I don’t have a clear feeling about the theoretical version of exactly the question you ask: how sparse can
the non-zero terms possibly be? It might be interesting to see whether if they have zero density (meaning the limit of the density inside $[1,N]^2$ as $N$ tends to infinity) then the discrepancy
must be unbounded. Or rather, it might be interesting to see whether that is any easier to prove than the general statement.
Alastair Irving Says:
September 15, 2010 at 10:16 pm | Reply
My solutions are not sparse at all. For example for $N=12$, the solution is non-zero at all of the 91 coprime pairs involved. Some of the values are nice rationals, but others don’t appear to be,
(although that’s maybe just a feature of how I’m converting to rationals). I can make the code, (which is for Matlab), or the solutions available if people want them but I don’t know how useful
they’d be.
Could you possibly clarify, or reference a comment clarifying, why the problems are identical? I understand your explanation in a previous comment that the existence of a function with discrepancy
$C$ forces the $l_1$ norm of the coefficients in a decomposition to be $>1/C$, but I can’t see why the bound is attained. Is there a general way we can convert from a function with minimum discrepancy
to the decomposition with best possible sum?
Computationally, solving the linear problem for discrepancy and that for decompositions seems fairly similar.
• gowers Says:
September 15, 2010 at 10:32 pm
I should have done that in my previous comment. The proof (or at least the proof that I like) uses the finite-dimensional version of the Hahn-Banach theorem. If you can’t express a function $f$
as $c$ times a convex combination of functions $g_i$ that belong to some class $G,$ then $f$ lies outside the convex hull of $cG,$ so by the Hahn-Banach separation theorem there is a linear
functional that separates $f$ from $cG.$ That is, there is a linear functional $\phi$ such that $\langle f,\phi\rangle=1$ and $\langle g,\phi\rangle <c^{-1}$ for every $g\in G.$
In our case, $f$ is the function $\delta_1,$ that is, the function defined on positive rationals that’s 1 at 1 and 0 everywhere else. $G$ consists of functions obtained by taking a square $[r,s]^2$ and counting for each rational how many pairs in the square have ratio equal to that rational. The property $\langle f,\phi\rangle=1$ is telling us that $\phi(1)=1,$ and the property that $\langle \phi,g\rangle <c^{-1}$ for every $g\in G$ is telling us that the discrepancy of the function $\psi(x,y)=\phi(y/x)$ is at most $c^{-1}$ on every square.
• gowers Says:
September 15, 2010 at 10:38 pm
I think I would be quite interested in staring at your solutions for a bit, just to see whether anything can be read out of them. For example, I’d be interested to see whether there is a
difference in behaviour at coprime pairs where the values are small, or where at least one of the numbers is reasonably smooth (obviously 12 is a bit small to tell that, but perhaps a point like
(8,9) is different from a point like (5,7)), etc.
• Alastair Irving Says:
September 15, 2010 at 11:24 pm
The solution for $N=12$ can be downloaded from http://dl.dropbox.com/u/3132222/12.txt
It's a text file with 3 columns, the first two giving the coprime pair and the third the value of the function at that pair. I haven’t included rational approximations to these as some of them
seem very spurious. My code doesn’t assume that the function is symmetric in interchanging the two coordinates, hence we have values for both $(m,n)$ and $(n,m)$. It looks like they’re all the
same though, which isn’t surprising, but I haven’t checked it.
I’ll modify the code to assume reflective symmetry tomorrow and thus be able to produce some bigger solutions.
• gowers Says:
September 15, 2010 at 11:30 pm
If I’m not much mistaken, we can always replace a solution $f(x,y)$ by $(f(x,y)+f(y,x))/2,$ so assuming symmetry should be fine.
• Alastair Irving Says:
September 16, 2010 at 12:38 pm
I’ve modified the code to assume symmetry. The solution for $N=12$ looks the same, but I’ve replaced the old version with it anyway. I’ve also got a solution for $N=42$ which is at http://
• Alec Edgington Says:
September 16, 2010 at 7:53 pm
Here’s a plot of the solution for $N = 42$:
I suggest saving the image file to your computer and zooming in. Light pixels correspond to large positive values, dark ones to large negative values.
gowers Says:
September 15, 2010 at 11:53 pm | Reply
A quick observation that’s so trivial it’s embarrassing, but I overlooked it for a while. It’s that while looking at rectangles of width 1 is sufficient to show that you can’t get a $\pm 1$-valued
function that’s constant along rays and has bounded discrepancy on squares (if EDP is true), they are not enough to tell us about more general functions. The example is the function that’s 1 on the
main diagonal and 0 everywhere else. This has discrepancy at most 1 on any rectangle of width 1.
Of course, we know that rectangles of width 1 are not good enough to make efficient decompositions, so this doesn’t come as a huge surprise. It seems to indicate that the $\pm 1$ assumption is quite
a strong one, though I’m not quite sure about that. It would be nice if the $\{-1,0,1\}$ version had some one-dimensional consequence: I think I’ll think about that next.
gowers Says:
September 17, 2010 at 9:50 am | Reply
This is to report on something I tried that has not yet worked. I thought (and still think) that it would be interesting to see if we could obtain a non-trivial estimate for the best bound that can
be obtained using squares of side length at most 3 and fly traps. The way I wanted to do it was to construct a function that had small discrepancy on all such sets. The trivial bound is obtained by
means of the function that is 1 when x=y and 0 otherwise. This has discrepancy at most 3 on all the sets that interest us. So the question is whether we can improve on 3. Note that if we just go for
2-by-2 squares, then Alec’s family of decompositions shows that this function is the best possible, since 2 really is the best bound. So this question amounts to asking whether, if we allow ourselves
3-by-3 squares as well, we can obtain a constant arbitrarily close to 1/3.
I set out on this hoping to find a clever function that would have discrepancy at most some constant less than 3 on all 2-by-2 squares, all 3-by-3 squares and all fly traps. (Of course, I’m always
insisting that the function should be 1 on the diagonal and constant on rays.) I thought it was going to be easy because to deal with the squares I cared only about what happens for pairs (r,s) with
|r-s| at most 2. And indeed, it is easy to get a non-trivial bound for the squares: for instance, if we define $\phi(r,s)$ to be 1 if $r=s$ and $-5/8$ if $|r-s|=1$ or 2, then the sum on all 2-by-2
squares is 3/4 and the sum on all 3-by-3 squares is -3/4. This starts to suggest a bound of 4/3, but we know that can’t be achieved, since the inclusion of 2-by-2 squares means that 2 is the best
bound we can hope for. The reason this isn’t a contradiction is that we haven’t dealt with the fly traps.
And that is where things start to get difficult. Once you start putting in lots of values off the diagonal, as we have now done, you commit yourself to many more. From this point of view, the choice
of $-5/8$ all the way down the diagonals $|r-s|=1$ is disastrous for us, since it will force values of $-5/8$ everywhere on the fly traps of width $k$ at $k!$ (that is, the ones Alec used). So it
will in fact give rise to unbounded discrepancy.
Thus, there is quite a nice problem I don’t yet know how to answer. First and foremost, can one improve on the trivial bound of 3 for this problem? That is, can one find $C<3$ and a function that’s 1
on the main diagonal, constant on rays, and that has discrepancy at most $C$ on all squares $[r,s]^2$ with $|r-s|\leq 2$ and on all fly traps? So far, I can’t even answer the following weaker
question: can one get the discrepancy to be at most $C$ on the squares and bounded on the fly traps?
• gowers Says:
September 17, 2010 at 12:18 pm
A simple observation that has some bearing on the relationship between the two questions is that the weaker version of the question for 2-by-2 squares is simple. Indeed, suppose we manage to get
the discrepancy to be at most $2(1-1/k)$ on all 2-by-2 squares $[r,r+1]^2.$ Then the value of the function (assuming symmetry) has to be at most $-1/k$ at all points $(r,r+1).$ It follows that
the value at $(m!,m!+t)$ is at most $-1/k$ for all $t\leq m,$ so the discrepancy is unbounded on fly traps.
I think this gives another perspective on why finding a decomposition using 3-by-3 squares is much harder than finding one using 2-by-2 squares: there is much more flexibility when it comes to
devising functions with low discrepancy. For instance, we might try to do it as follows. First we define $f(r,r+1)$ to be $-1/k$ for every $r.$ Then we do a bit of adjustment: we look at numbers
$m$ with lots of small factors, where there will now be fly traps with large discrepancy, and we make some adjustments to the values at $(r,r+1)$ when either $r$ or $r+1$ is of the form $m/j$ for
some small factor $j$ of $m.$ That will involve creating some 2-by-2 squares where the discrepancy is now slightly bigger than 2, but we could keep it down to $2+1/k$ perhaps.
Actually, I’ll stop there because I’ve got confused about what my aim is. Is it to keep the discrepancy down to almost 2, or is it merely to keep it below $3-\epsilon$? I’m interested in both
problems, but they seem fairly different.
Instead, I’ll just make the general point that if the only numbers where we make adjustments are of the form $m/j,$ where $m$ has many small factors and $j$ is one of those small factors, then
it’s not clear to me whether we have to make adjustments at big clusters of consecutive numbers. I think this may be at the heart of the problem — perhaps even at the whole of EDP — though I
don’t yet have a precise formulation of what it is I’m asking.
gowers Says:
September 17, 2010 at 5:14 pm | Reply
I want to think aloud for a bit about the problem mentioned in my previous comment. Suppose that we start with a first approximation to what we want, by setting $f(r,r\pm 1)$ and $f(r,r\pm 2)$ to be
-1/2 for every $r.$ Now we know that this causes problems for fly traps with lots of small factors, so the next step is to do an adjustment. For now I’ll concentrate on the “weak” problem, so all I
want to do is make the discrepancy on fly traps bounded. Note that so far the discrepancy on the squares is 1 for 2-by-2 and 0 for 3-by-3, so we’ve got a bit of elbow room.
Let’s suppose that we want to keep the discrepancy on fly traps below something like 1000. Then we have a problem at $M$ only if there exists $m$ such that the number of factors of $M$ that are less
than $m$ outnumbers the number of non-factors of $M$ that are less than $m$ by at least 1000. Let’s suppose that $M$ and $M'$ are two such numbers, and that $j$ and $j'$ are factors of $M$ and $M'$
that are less than $m.$ Then $M$ and $M'$ have at least 1000 factors in common, which … well, ordinarily it might suggest that $M$ and $M'$ differ by the lowest common multiple of some pretty huge
number, so that $M/j$ and $M'/j'$ are not close to each other.
But as I write that, I see that it’s not obviously true, which is good news. So let’s try to think in more detail about how it could be false.
I’ll take an extreme example. First we insist that both $M$ and $M'$ are divisible by 1000!. Sorry, this example isn’t working. Back to my previous line of thought.
Let’s just look at $M$ for a bit. Let $p_1,\dots,p_r$ be some primes that do not divide $M.$ Then no number that is divisible by any of the $p_i$ can be a factor of $M.$ It follows (provided $r$ is
not too big) that the probability that a number less than $m$ is a factor of $M$ is at most $(1-1/p_1)\dots(1-1/p_r)$ or so. From that it follows that the sum of the reciprocals of the $p_i$ cannot
be too large. Assuming that $t$ is not too small, that tells us that almost all primes up to $t$ are prime factors of $M.$
Now I want to know whether it is possible to find some $s$ such that there exist $u$ and $v$ less than $m$ such that $su$ is one number like that and $(s+1)v$ is another. The difficulty is that $s$
and $s+1$ are coprime, but maybe we can deal with that by multiplying them by $u$ and $v$ to get the extra factors we need. Except that that doesn’t seem very easy: $u$ and $v$ are much much smaller
than $s,$ so they don’t seem to have enough smoothness to create all the extra divisibility that we need.
Let me try to say that more clearly. Here’s a number-theoretic question I don’t know the answer to. Fix a large positive integer $t.$ For how big a $k$ can we find sets $A$ and $B$ of integers
between 1 and $t,$ such that $|A|=|B|=k$ and every number in $A$ is coprime to every number in $B$? I suspect that $k$ has to be quite a lot smaller than $t.$ One possibility is to partition the
primes and take …
OK, that problem is easy. The best you can do is partition the primes into two and let $A$ be all numbers you can make out of one set of primes and $B$ be all numbers you can make out of the other. I
feel as though that should make the product of the sizes of $A$ and $B$ be less than $t,$ but I don’t yet see why I’m saying that. Yes I do. If we make $A$ and $B$ maximal like that, then every
number up to $t$ can be written uniquely as a product of something in $A$ and something in $B.$ (Just take the prime factorization and split it up in the way you have to.) But that just gives a lower
bound on the product of the sizes of $A$ and $B.$ I’m not sure how to get an upper bound. Actually, it’s false, since just by taking primes we can get $A$ and $B$ to have size $t/2\log t.$
OK, I can at least prove, but won’t bother with the details, that one or other of $A$ and $B$ must have size $o(t).$
I need to stop this rambling comment for a bit. But it’s looking quite hard to demonstrate that any major problems would arise if one decided to adjust the values on the very smooth fly traps to be
zero, and made any other changes that were implied by that. Of course, there would still be many more changes that had to be made.
gowers Says:
September 18, 2010 at 1:06 pm | Reply
I’m finding it very difficult to say anything precise, or to decide whether certain things are likely to be possible. The question I’ve been struggling with above — can we have a function that is 1
when x=y, that has discrepancy at most 2.999 on all 2-by-2 and 3-by-3 squares, and that has bounded discrepancy on all fly traps — still seems to be hard, even though it is so weak that an answer
either way would tell us little about EDP. But I still can’t solve it, so I want to make the question weaker still. Here, then, is a question that it really ought to be possible to answer.
The question is this. It is easy to create functions that are 1 on the main diagonal and that have discrepancy at most 2.999 on all squares $[r,s]^2$ when $|r-s|=1$ or $2.$ However, the obvious ones
have the property that they give rise directly to unbounded discrepancy on fly traps. What I want to do is say precisely what I mean by “give rise directly to” and then ask whether what I have
observed is necessary, or whether there exist cleverer examples that do not give rise directly to unbounded discrepancy on fly traps.
Here is the definition of “give rise directly to”. I just mean that once you’ve decided on the values of $f(x,y)$ when $|x-y|$ is 1 or 2, you then put in all other values that follow from the
condition $f(ax,ay)=f(x,y).$ You then see if for every $C$ there exists a fly trap such that it is impossible to fill in the remaining values on that fly trap so as to keep the discrepancy below $C.$
To be more precise, for each $N,$ you fill in all the values $f(N,j)$ that are forced by the constant-on-rays condition. Then you see whether there is an interval of $j$s such that the values are all
fixed and add up to more than $2C$ in modulus.
gowers Says:
September 18, 2010 at 2:51 pm | Reply
An obvious way of weakening that question yet further is to insist that the interval in question starts at 1. So now the question is this. Suppose $f$ is a function defined on the set of all $(x,y)$
such that $|x-y|\leq 2,$ and suppose that it is 1 when $x=y$ and that the discrepancy on 2-by-2 and 3-by-3 squares of the form $[r,s]^2$ never exceeds 2.999. (To make the question yet weaker we could
go for 2.1 instead, so our hypothesis is stronger. In fact, I think that is probably a good idea. Note also that I am assuming that $f(r,r+1)=f(2r,2r+2)$ for every $r.$) Does it follow that for every
$C$ there exist positive integers $k$ and $M$ such that $k!|2M$ and $|\sum_{j\leq k}f(2M/j,2M/j+2)|>C$?
I think I may already have established that this is not the case. If $k!|2M,$ then $k!/2|M.$ What can we say about $M/j$ mod $k!$ if it is an integer, and $2M/j$ otherwise? Answer: we know that it is
a multiple of $k!/2j$ or $k!/j.$ Now if we have two distinct such numbers, they must differ by a lot. Either they will be two distinct multiples of $k!/2j$ or $k!/j,$ or they will use different $j$s
but will in any case be distinct multiples of $k!/2jj'$ for some $j,j'\leq k.$
So now all one has to do is define $f$ as follows. If $x=y$ then $f(x,y)=1.$ If either $x$ or $y$ is divisible by $k!/2,$ then $f(x,y)=0.$ If $|x-y|=1$ or $2$ and we have not already chosen the value
of some $f(ax,ay)$ to be 0, then we set the value of $f(x,y)$ to be -1/2.
This guarantees immediately that the discrepancy on any 2-by-2 square is either 2 or 1 (depending on whether we put in 0 or -1/2 off the diagonal). As for a 3-by-3 square $[r-1,r+1]^2,$ I think the
argument above shows that all its off-diagonal entries will be $-1/2$ unless one of $r-1,$ $r$ or $r+1$ is $M/j$ or $2M/j$ for some $M$ such that $2M$ is a multiple of $k!.$ Now the only thing that
can stop the discrepancy on a 3-by-3 square being at most 2 is if all the off-diagonal entries are 0. Since the numbers $M/j$ and $2M/j$ are well-separated, the only way even the entries $f(r,r\pm 1)$ can both be zero is if $r=M/j$ or $2M/j.$ But even then, for the discrepancy to be more than 2, we need $f(r-1,r+1)$ to be zero as well. So we need $r-1$ or $r+1$ to be of the form $2M/j.$ And that
can’t happen.
If that argument is correct, then what it shows is this. We can’t find a proof that the discrepancy has to exceed 2 (that is, improve on $c=1/2$) by taking a bunch of 3-by-3 squares and one fly trap
and arguing that if the discrepancy is at most 2 on those squares then it forces us to choose values on some fly trap that add up to something large.
That wasn’t very well put, but I think I’ve now thought of a nice way of putting it. I think it is possible to find a function with the following properties.
1. $f(x,y)=1$ whenever $x=y.$
2. $f(ax,ay)=f(x,y)$ whenever $|x-y|\leq 2.$
3. The discrepancy of $f$ is at most 2 on all 2-by-2 or 3-by-3 squares.
4. The discrepancy of $f$ is bounded on all fly traps.
In other words, any proof that the discrepancy is unbounded on fly traps if it is bounded on small squares would have to use the fact that $f$ is constant not just on rays that go through points of
the form $(r,r\pm 1)$ or $(r,r\pm 2)$ but on more general rays, even if we are talking only about 2-by-2 and 3-by-3 squares. I think the bound in 4 could be made pretty small too, but that needs checking.
gowers Says:
September 18, 2010 at 5:36 pm | Reply
Let me dualize that last problem in an attempt to understand it better. I think that the existence of such a function is telling us that we cannot find a decomposition $g=\sum_i\lambda_iS_i+\sum_j\mu_jF_j$ with the following properties.
1. Each $S_i$ is a 2-by-2 or 3-by-3 square and each $F_j$ is a fly trap.
2. $g$ sums to 1 along the line $x=y$ and to 0 along all lines that pass through a point of the form $(r,r\pm 1)$ or $(r,r\pm 2).$
3. For all $(x,y)$ that do not belong to one of the above rays, $g(x,y)=0.$
4. $2\sum_i|\lambda_i|+C\sum_j|\mu_j|< 1.$
Let me check that that does follow from the existence of a function $f$ with the discrepancy property mentioned in the previous comment. I’ll do that by thinking about the sum $\sum_{x,y}f(x,y)g
(x,y).$ Let us split this sum up according to rays. The sum along the ray $x=y$ is 1, since $f$ is constantly 1 and $g$ sums to 1. Along any ray that goes through a point of the form $(r,r\pm 1)$ or
$(r,r\pm 2)$ the sum is zero, since $f$ is constant and $g$ sums to 0. Along any other ray, the sum is zero, since $g$ is identically zero. So the sum is 1.
Now let’s get a contradiction by summing instead over the squares and fly traps used to decompose $g.$ We have that the sum is at most
$\sum_i|\lambda_i||\langle f,S_i\rangle|+\sum_j|\mu_j||\langle f,F_j\rangle|.$
But by hypothesis $|\langle f,S_i\rangle|\leq 2$ for every $i$ and $|\langle f,F_j\rangle|\leq C.$ So the sum is at most $2\sum_i|\lambda_i|+C\sum_j|\mu_j|,$
which, by hypothesis 4, is less than $1,$ the desired contradiction. As I say, I haven’t carefully checked, but I basically know that Hahn-Banach will show that this is necessary.
I think the existence of $f$ therefore implies that there is no way of obtaining a bound better than $c=1/2$ if we just use 2-by-2 squares, 3-by-3 squares and fly traps. (I don’t claim that I’ve
definitively proved that — there are some important details that need checking, such as whether I can reduce $C$ and what happens with intervals that do not start at 1. In fact, that second point may
turn out to be particularly important.)
Alec Edgington Says:
September 19, 2010 at 9:29 pm | Reply
I’ve been searching for symmetric functions taking values in $\{\pm 1, 0\}$ with discrepancy 1 on rectangles of the form $[1,m] \times [1,n]$. (This implies a discrepancy of 4 on all rectangles). To
narrow down the search, and because it seems to be possible so far, I’m restricting the non-zero values $(a,b)$ to $\max(a,b) \leq 2 \min(a,b)$.
A depth-first search (favouring zero values) running in Python has so far reached 49:
It seems to have got rather stuck at 50 (not surprisingly, since it seemed to go a bit mad at 25).
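For anyone who wants a toy version to play with, here is a crude brute-force sketch (mine, not Alec's code, and only feasible for very small N). I am assuming the diagonal ray is normalised to 1, which is what makes the search non-trivial, and I have not imposed the $\max(a,b)\leq 2\min(a,b)$ restriction.

from math import gcd
from itertools import product

def tiny_search(N):
    """Enumerate symmetric, ray-constant functions that are 1 on the diagonal,
    take values in {0, +1, -1} elsewhere, and whose sums over every rectangle
    [1,m] x [1,n] with m, n <= N have absolute value at most 1.  Brute force
    over all 3^(#coprime pairs) assignments, so only for tiny N; a depth-first
    search with pruning is what makes much larger N reachable."""
    reps = [(a, b) for a in range(2, N + 1) for b in range(1, a)
            if gcd(a, b) == 1]                  # coprime pairs off the diagonal

    def val(assign, x, y):
        g = gcd(x, y)
        x, y = x // g, y // g
        if x == y:
            return 1                            # the diagonal ray is fixed to 1
        return assign[(x, y)] if x > y else assign[(y, x)]

    for choice in product((0, 1, -1), repeat=len(reps)):
        assign = dict(zip(reps, choice))
        if all(abs(sum(val(assign, i, j)
                       for i in range(1, m + 1) for j in range(1, n + 1))) <= 1
               for m in range(1, N + 1) for n in range(1, N + 1)):
            yield assign

print(next(tiny_search(4)))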
gowers Says:
September 19, 2010 at 10:39 pm | Reply
That’s a great picture. For my own part, given that I can’t by theoretical means find a decomposition that gets a bound better than $c=1/2,$ I’ve decided to concentrate on an easier task, which is to
look for low-discrepancy functions that prove that decompositions with certain additional restrictions cannot exist. For instance, I think I may be able to manage a proof that it can’t be done with
3-by-3 squares and fly traps if all the fly traps have coefficients of the same sign, and perhaps even in general. I’m not actually claiming that as a result yet, but it seems a realistic target to
determine whether it can be done. (I suppose I’d be happier if it could be done, but at the moment that is not what I believe.) I hope to have something a bit more definite to report over the next
couple of days.
If that works, then what I would hope to do is generalize it as much as possible, the aim being to narrow down considerably what a decomposition could look like. That could have two possible
benefits: making the computational problem of finding a decomposition much easier, and making the theoretical problem somewhat easier.
I’m interested in revisiting some of the experimental evidence from a few months ago from this point of view and seeing whether we can “dualize” it somehow. By that I mean that we could look at where
various searches backtrack and try to build decompositions based on those points and relationships. For instance, perhaps the program could tell us that certain HAP-constraints were important ones
and others were not. That would then give us a huge clue about how to find a decomposition.
One snag is that the decompositions tend to be equivalent to vector-valued problems, so it may be that the evidence we have collected so far is not in fact helpful for finding decompositions. And
it’s not completely obvious to me how one should search for best possible bounds in the vector-valued case.
gowers Says:
September 19, 2010 at 11:00 pm | Reply
Another question I want to think about is the hypergraphs question I was discussing earlier. We have a collection of sets (squares or rectangles) and we want to find a function with small discrepancy
on those sets, subject to the condition that certain values that that function takes have to be equal. To do this, we want to find a finite collection of the sets that we can use to form an efficient
decomposition. But that will imply a discrepancy result for the following hypergraph. The vertices are the rays, and the hyperedges are sets of rays that go through a set in the collection.
An important difference between this question and most discrepancy questions is that we are not assuming that the function takes values $\pm 1,$ but instead are assuming that there is one vertex that
belongs multiply to many hyperedges. It might be reasonable to assume that the ray $x=y$ is the only ray that intersects any square in our collection more than once. In that case, we are trying to
show that the effect is to give a positive kick to the sets in our collection that we cannot manage to cancel out.
What I’d like to investigate is the number-theoretic question of how to find squares that create a hypergraph that has even the remotest chance of achieving this. For instance, if any square contains
a point that is the only point in its ray that intersects a square in our collection, then we can give that point any value we like and cancel out the discrepancy on that square. So we may as well
get rid of that square.
I think I’ll write a new post soon and try to formulate various questions as precisely as I can.
charikar Says:
September 20, 2010 at 7:28 am | Reply
I haven’t followed the discussion in a while. I tried to catch up on the most recent comments in the last couple of days, but I am not up to speed on everything, so I may be asking naive questions
and repeating things that you already know. It seems to me that the question being considered currently is very similar to some versions of the diagonal representation question we considered way back
in EDP12 and EDP13. Is it the same as Problem 1 mentioned in Tim’s EDP13 post with the multiplicative constraint that Alec suggested in this comment ? The squares and fly-traps construction also
looks similar to a construction we had earlier that was inspired by looking at LP solutions here .
It may be worthwhile to look at LP solutions for inspiration in constructing functions $f(x,y)$ that have low discrepancy on squares. In fact, if the squares and fly-traps construction does arise as
an optimal solution to the LP, then the dual solution is a low discrepancy function with maximum discrepancy equal to the bound established by squares and fly-traps. One would hope that there should
be considerable structure in this dual solution. One problem with looking for structure in these LP solutions is that you need large values of $n$ since $n=lcm(2,3,\ldots,k)$. A trick to handle this
issue is to start at $0$ instead of $1$, so $0$ plays the role of the highly divisible number. But I’m not sure if starting at $0$ makes sense for the current discussion. In any case, it may be an
easier variant to think about because there was a very definite trend in the optimal values of the LP for the problem over $\{0,1,\ldots,n\}$ – in fact the optimal value seemed to be exactly $2-5/
I’m a little rusty, but I recall from looking at these LP solutions previously that it appeared that the optimal dual solution was probably not unique and there were some extra degrees of freedom.
Even so, in principle, one should be able to determine the form of these dual solutions if indeed the squares and fly-traps construction arises as an optimal solution to the LP. What this tells us is
that the squares used in the linear combination for the squares and fly-traps are the squares where the maximum discrepancy is achieved – all other discrepancy constraints are not tight. The tight
constraints (provided we also know the sign for each of them) should determine a family of dual solutions and there must exist some choice of free variables such that the dual solution obtained
satisfies all the discrepancy constraints.
• gowers Says:
September 20, 2010 at 12:08 pm
How amusing that we are revisiting an earlier discussion without (at least in my case) realizing it. But I think it is the right thing to do, and that we shouldn’t have forgotten about it the
first time round.
What you say in the last paragraph is what I was saying, but more vaguely, in my last comment but one. I think it might be interesting to look at some of the experimental evidence coming out of
the LP problem in order to get a feel for what the dual solutions are like — what I would be hoping is that it would be easier to spot patterns in the dual solutions.
I suppose another idea, though it might be rather too ambitious, would be to try to think of a strengthening of the discrepancy statement, insisting on lower discrepancy for some squares than for
others, so that all the constraints become tight. Equivalently, one would attach different weights to the squares in a decomposition, so that some of them were more expensive than others, in the
hope that the best decomposition then used the squares much more uniformly.
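To make the LP shape concrete, here is a minimal sketch (Python/SciPy) of a toy minimax programme of the kind being described. The collection of "squares", the grid size and the diagonal normalisation below are all invented for illustration, so this shows only the general shape of such a programme and its dual, not the actual EDP formulation.
import numpy as np
from scipy.optimize import linprog

n = 6
cells = [(x, y) for x in range(1, n + 1) for y in range(1, n + 1)]
index = {cell: i for i, cell in enumerate(cells)}

# toy "squares": points (i*d, j*d) with 1 <= i, j <= m, for each common difference d
squares = []
for d in range(1, n + 1):
    for m in range(1, n // d + 1):
        squares.append([(i * d, j * d) for i in range(1, m + 1) for j in range(1, m + 1)])

nvar = len(cells)                      # one variable per grid cell, plus one variable t
c = np.zeros(nvar + 1)
c[-1] = 1.0                            # minimise t = max over squares of |sum of f over the square|

A_ub, b_ub = [], []
for Q in squares:
    row = np.zeros(nvar + 1)
    for pt in Q:
        row[index[pt]] = 1.0
    row[-1] = -1.0
    A_ub.append(row)                   #  sum_Q f - t <= 0
    neg = -row
    neg[-1] = -1.0
    A_ub.append(neg)                   # -sum_Q f - t <= 0
    b_ub.extend([0.0, 0.0])

# pin the diagonal so that f = 0 is not feasible (an arbitrary normalisation for the toy model)
A_eq = np.zeros((n, nvar + 1))
b_eq = np.ones(n)
for k in range(1, n + 1):
    A_eq[k - 1, index[(k, k)]] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * nvar + [(0, None)])
print("toy minimax square-discrepancy:", res.fun)
The dual variables of such a programme attach a weight to each square constraint, which is exactly the kind of structure the comments above suggest examining.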
|
{"url":"http://gowers.wordpress.com/2010/09/10/edp20-squares-and-fly-traps/","timestamp":"2014-04-17T21:52:13Z","content_type":null,"content_length":"312341","record_id":"<urn:uuid:8fa9f599-bfb5-446a-8540-274a1cb34815>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
|
contraction mapping theorem
December 20th 2012, 08:15 AM #1
Junior Member
Dec 2012
contraction mapping theorem
I need to show f:[0,1]->[0,1] is a contraction mapping. $f(x)=sinx|x^{0.5}-g(x)|$ where g is a continuous map from [0,1] to itself
When I try |f(x)-f(y)|, using the triangle inequality and sinx<1, I get it is less than or equal to $|x^{0.5}-f(x)|-|y^{0.5}-f(y)|$, which I can't seem to manipulate (even aware that x, f(x) are in
[0,1]) to k|x-y| as required.
Re: contraction mapping theorem
I need to show that |f(x)-f(y)| is less than or equal to k|x-y| for some 0<k<1
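One quick numerical sanity check (only a sketch: it assumes one particular g, here g(x) = x^2, which is one continuous map from [0,1] to itself, and it proves nothing on its own) is to estimate the largest value of |f(x)-f(y)|/|x-y| on a fine grid:
import numpy as np

def g(x):
    return x**2                       # assumed example of a continuous map [0,1] -> [0,1]

def f(x):
    return np.sin(x) * np.abs(np.sqrt(x) - g(x))

xs = np.linspace(0.0, 1.0, 801)
vals = f(xs)
diffs = np.abs(vals[:, None] - vals[None, :])     # |f(x)-f(y)| for all grid pairs
dists = np.abs(xs[:, None] - xs[None, :])         # |x-y| for all grid pairs
mask = dists > 0
print("estimated Lipschitz constant:", (diffs[mask] / dists[mask]).max())
For this particular g the estimate comes out above 1 (roughly 1.26, driven by the behaviour near x = 1), which suggests that a constant k < 1 cannot hold for every choice of g without further assumptions.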
|
{"url":"http://mathhelpforum.com/differential-geometry/210171-contraction-mapping-theorem.html","timestamp":"2014-04-17T23:41:04Z","content_type":null,"content_length":"32016","record_id":"<urn:uuid:8278551f-341e-466c-b586-7152532cf1d9>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hypothesis Testing
We need to compare two distributions of data
The two samples will always be different
Sampling error and measurement error are unavoidable
Statistics are used to distinguish between trivial and meaningful differences
A statistical test works much like a jury in a criminal trial
Both situations operate without absolute proof
You may make wrong decision
A jury trial is a test for guilt, while a statistical test is a test for differences
You do not stand trial to show you are innocent
You do not design experiment unless you expect an effect
Both situations consider one of two mistakes more serious
Finding innocent man guilty in a jury trial
Concluding a difference was "real" when it was not
A statistical test
Is based on the data
Begins by assuming "no difference" (= null hypothesis)
It is difficult to reject the null hypothesis
Must be > 95% sure that the differences observed did not occur by chance
Now you accept the alternative hypothesis
Science progresses by rejecting null hypotheses
To carry out most kinds of statistical tests
Use data to calculate test statistic
Look up the critical value of the test statistic
Compare to see if the test statistic is in the rejection region
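For example, a minimal two-sample t-test sketch (the data below are made up purely for illustration):
from scipy import stats

sample_a = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7]
sample_b = [5.9, 6.1, 5.7, 6.0, 5.8, 6.2, 5.6, 6.3]

t_stat, p_value = stats.ttest_ind(sample_a, sample_b)   # test statistic from the data
df = len(sample_a) + len(sample_b) - 2
critical = stats.t.ppf(0.975, df)                        # critical value, two-sided 5% level

print("test statistic:", t_stat, " critical value:", critical, " p-value:", p_value)
if abs(t_stat) > critical:
    print("reject the null hypothesis of no difference")
else:
    print("fail to reject the null hypothesis")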
|
{"url":"http://cfcc.edu/faculty/bgillingham/b1HypothesisTesting.htm","timestamp":"2014-04-19T10:01:44Z","content_type":null,"content_length":"8496","record_id":"<urn:uuid:bbcc41a8-58be-4e3e-9ce3-2374285decbd>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Newton, MA Prealgebra Tutor
Find a Newton, MA Prealgebra Tutor
...I have been teaching physics as an adjunct faculty at several universities for the last few years and very much look forward to the opportunity of offering personalized support to those
seeking it, so please don't hesitate to contact me! My schedule is extremely flexible and am willing to meet y...
9 Subjects: including prealgebra, calculus, physics, geometry
...I have been teaching middle school math for the past 7 years and love my job. I love to also help students and tutor on the side. I want all students to feel confident in math.
6 Subjects: including prealgebra, geometry, algebra 1, elementary math
...My references will gladly provide details about their own experiences. I have a master's degree in computer engineering and run my own data analysis company. Before starting that company, I
developed software for large and small companies and was most recently the IT director at a large accounting firm.
11 Subjects: including prealgebra, geometry, algebra 1, precalculus
...I received nothing but positive feedback and recommendations. My schedule is flexible, but weeknights and weekends are my preference. I can tutor either at my home or will travel to your
location unless driving is more than 30 minutes.
8 Subjects: including prealgebra, calculus, geometry, algebra 1
...I have also worked as a tutor with low income youth through another learning center. My approach to teaching is to build a relationship with my students through trust and understanding their
point of view. While I do expect students to work hard, I never give them more than they can handle, and I aim to be mindful of their current state of mind.
15 Subjects: including prealgebra, reading, grammar, English
|
{"url":"http://www.purplemath.com/Newton_MA_prealgebra_tutors.php","timestamp":"2014-04-18T11:13:33Z","content_type":null,"content_length":"24041","record_id":"<urn:uuid:d8393f8c-7ecb-41cf-933b-0c50a7f1456d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Charlestown, MA Precalculus Tutor
Find a Charlestown, MA Precalculus Tutor
...For more then a decade I have been using calculus to solve a wide variety of complex problems. As a math phd student I have particularly excelled in the field of analysis which is largely just
a more rigorous and abstract formulation of traditional calculus concepts. In addition to my love for the subject, I also have a natural impulse to explain it to others.
14 Subjects: including precalculus, calculus, geometry, algebra 1
...I am patient, enthusiastic about learning, and will work very hard with you to achieve your academic goals. JoannaI have three years' experience tutoring high school students in biology. I
have extensive coursework and research experience in Biology and am passionate about the field.
10 Subjects: including precalculus, chemistry, geometry, biology
...My BA involved reading and interpreting writing of all sorts. I enjoy helping others to reason through their ideas about a given text. I received a perfect score on the GRE general test verbal
portion, and have extensive experience tutoring and working one-on-one with students.
29 Subjects: including precalculus, reading, calculus, English
...I teach a variety of levels of students from advanced to students with special needs. I show students a variety of ways to answer a problem because my view is that as long as student can
answer a question, understand how they got the answer and can explain how they do so, it doesn't matter the m...
5 Subjects: including precalculus, algebra 1, algebra 2, study skills
...Since entering graduate school, I have been a general chemistry lab teaching assistant, physical chemistry II (quantum) teaching assistant, and advanced physical chemistry (graduate level)
teaching assistant. I have been tutoring students since 2005 and have been able to help all levels, from el...
10 Subjects: including precalculus, chemistry, calculus, prealgebra
|
{"url":"http://www.purplemath.com/Charlestown_MA_Precalculus_tutors.php","timestamp":"2014-04-19T14:38:59Z","content_type":null,"content_length":"24488","record_id":"<urn:uuid:cc04a269-72d0-4811-9a4c-c3a1834aebed>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What characteristics of the graph of a function can we discuss by using the concept of differentiation (first and second derivatives).
(1) The first derivative can be used to tell the intervals where a function is increasing or decreasing, as the first derivative is positive when the function is increasing and negative where the
function is decreasing.
Also, extrema (local maximums or minimums) can be found using the first derivative. Extrema only occur at critical points, which are found when the first derivative is zero or fails to exist. If a
function's first derivative is positive for x<c, zero when x=c, and negative when x>c then the function has a local maximum at x=c, etc...
The first derivative gives the slope of the tangent line drawn to the curve at a given point.
(2) The second derivative can be used to find the intervals where the graph is concave up or concave down. This tells you how the rate of change is changing.
Along with the x and y intercepts, vertical asymptotes, and horizontal asymptotes you can get a good feel in order to draw the sketch of a graph.
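For instance, a short SymPy sketch (using an arbitrary example function, f(x) = x^3 - 3x) shows how the two derivatives are used in practice:
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x                        # arbitrary example function

f1 = sp.diff(f, x)                    # first derivative: slopes, increasing/decreasing, extrema
f2 = sp.diff(f, x, 2)                 # second derivative: concavity, inflection points

critical_points = sp.solve(sp.Eq(f1, 0), x)
for c in critical_points:
    curvature = f2.subs(x, c)
    kind = "local minimum" if curvature > 0 else "local maximum" if curvature < 0 else "inconclusive"
    print("critical point x =", c, "->", kind)

print("possible inflection points:", sp.solve(sp.Eq(f2, 0), x))
print("slope of the tangent line at x = 2:", f1.subs(x, 2))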
|
{"url":"http://www.enotes.com/homework-help/what-characteristics-graph-function-can-we-discuss-347861","timestamp":"2014-04-19T01:49:23Z","content_type":null,"content_length":"25813","record_id":"<urn:uuid:7718d6a9-eeb6-4269-821f-d42481b575d8>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Odd Numbers
Luck in odd numbers. A major chord consists of a fundamental or tonic, its major third, and its just fifth. According to the Pythagorean-system, “all nature is a harmony,” man is a full chord; and
all beyond is Deity, so that nine represents deity. As the odd numbers are the fundamental notes of nature, the last being deity, it will be easy to see how they came to be considered the great or
lucky numbers. In China, odd numbers belong to heaven, and v.v. (See Diapason, Number.)
“Good luck lies in odd numbers ... They say, there is divinity in odd numbers, either in nativity, chance, or death.” —Shakespeare: Merry Wives of Windsor. v. 1.
No doubt the odd numbers 1, 3, 5, 7, 9, play a far more important part than the even numbers. One is Deity, three the Trinity, five the chief division (see Five), seven is the sacred number, and nine
is three times three, the great climacteric.
Source: Dictionary of Phrase and Fable, E. Cobham Brewer, 1894
Odd and Even
More on Odd Numbers from Fact Monster:
|
{"url":"http://www.factmonster.com/dictionary/brewers/odd-numbers.html","timestamp":"2014-04-19T20:52:41Z","content_type":null,"content_length":"22355","record_id":"<urn:uuid:8ca9d117-615c-418c-a4f2-3d57d4aaa550>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Technical report Frank / Johann-Wolfgang-Goethe-Universität, Fachbereich Informatik und Mathematik, Institut für Informatik
We investigate methods and tools for analysing translations between programming languages with respect to observational semantics. The behaviour of programs is observed in terms of may- and
must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional
translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of
typing in implementations of language extension.
A finite simulation method in a non-deterministic call-by-need calculus with letrec, constructors and case (2008)
The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with
cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus' semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of
letrec bindings, as well as non-determinism, makes known approaches to prove that simulation implies contextual equivalence, such as Howe's proof technique, inapplicable in this setting. The
basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite
computation depth, then it is guaranteed to show contextual preorder of expressions.
Reasoning about the correctness of program transformations requires a notion of program equivalence. We present an observational semantics for the concurrent lambda calculus with futures Lambda
(fut), which formalizes the operational semantics of the programming language Alice ML. We show that natural program optimizations, as well as partial evaluation with respect to deterministic
rules, are correct for Lambda(fut). This relies on a number of fundamental properties that we establish for our observational semantics.
|
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/series/id/16122/start/0/rows/10/subjectfq/Operationale+Semantik+","timestamp":"2014-04-21T10:22:08Z","content_type":null,"content_length":"20112","record_id":"<urn:uuid:d1637eaf-894d-47ef-9462-ec8c0878c1e0>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Itasca, IL Calculus Tutor
Find a Itasca, IL Calculus Tutor
...I enjoy helping students understand the subject and realize that Math can be fun and not stressful.Algebra 1 is the basis of all other Math courses in the future and is used in many
professions. Topics include: simplifying expressions, algebraic notation, number systems, understanding and solvin...
11 Subjects: including calculus, geometry, algebra 1, algebra 2
...I consistently monitor progress and adjust lessons to meet the specific needs of each individual student. Thank you for considering my services. I look forward to helping you succeed in
mathematics.I have a teaching certificate in mathematics issued by the South Carolina State Department of Education.
12 Subjects: including calculus, geometry, algebra 1, algebra 2
...My passion for education comes through in my teaching methods, as I believe that all students have the ability to learn a subject as long as it is presented to them in a way in which they are
able to grasp. I use both analytical as well as graphical methods or a combination of the two as needed ...
34 Subjects: including calculus, reading, writing, statistics
...By my senior year, I was named captain of the Women's varsity team and the number 1 singles player. As the oldest member of the team, other girls looked to me as their leader and my coaches
expected me to lead practices and team warm-ups. Although, I no longer play competitively, I am always looking for opportunities to practice, keep up my skills, and play a friendly match.
13 Subjects: including calculus, chemistry, geometry, biology
...I can be a little flexible about the timing if given prior notice. In my current role I work in a problem solving environment where I pick up issues and deliver solutions by working with
different groups and communicating to management level. This type of work helped me to communicate better and to make people understand at all levels.
16 Subjects: including calculus, chemistry, physics, geometry
|
{"url":"http://www.purplemath.com/Itasca_IL_calculus_tutors.php","timestamp":"2014-04-19T06:56:13Z","content_type":null,"content_length":"24176","record_id":"<urn:uuid:582ded8d-1eb5-4c32-b9b6-0778710c0cab>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Calculate Unknown Resistance Using Meter Bridge
A meter bridge is an apparatus used to find the resistance of a coil; you will find it as part of the tools of a physics lab.
1. Check that the meter bridge wire is connected. If not, just connect both ends of the wire tightly. The apparatus has 5 places for the connections. All gaps for connections are found above the meter bridge wire; two on either side and one in the middle.
2. Imagine 5 coplanar dots plotted on a paper, and imagine a line below them which is a little longer than the line we would get by joining the points. The above connection is laid out in a similar way.
3. Connect a known resistance on one side and the unknown resistance on the other; this fills 4 of the 5 gaps left for connections. In the remaining gap, connect a galvanometer, a high resistance and a jockey, all in series.
4. Slide the jockey over the meter bridge wire and note down the reading for which you get zero deflection in the galvanometer.
5. Let it be R1; now put R2 = 100 - R1.
6. Similarly, calculate another reading by interchanging the known and unknown resistances in the above circuit.
7. Let the new readings be R3 and R4, where R3 is the length obtained for zero deflection and R4 = 100 - R3.
8. Add R1 and R3 and divide by two; note it down as L. Calculate the average of R2 and R4 too; let it be M.
9. Repeat the above for various known resistance values. Each time you can calculate the unknown resistance using the formula: unknown resistance (a) = known resistance (b) * L / M. The value is found to be the same for all values of known resistance.
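A small numeric sketch of the arithmetic in steps 5-9 (the known resistance and the balance lengths below are made up for illustration):
known_resistance = 10.0               # ohms, the known coil (made-up value)

r1 = 42.0                             # balance length in cm, first arrangement (step 5)
r2 = 100.0 - r1
r3 = 41.6                             # balance length after interchanging the coils (step 7)
r4 = 100.0 - r3

L = (r1 + r3) / 2.0                   # average, step 8
M = (r2 + r4) / 2.0

unknown_resistance = known_resistance * L / M    # formula from step 9
print("unknown resistance:", round(unknown_resistance, 2), "ohm")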
|
{"url":"http://www.wikihow.com/Calculate-Unknown-Resistance-Using-Meter-Bridge","timestamp":"2014-04-18T16:02:20Z","content_type":null,"content_length":"63275","record_id":"<urn:uuid:3497f0f4-68ab-4fb6-a38a-95c368c83127>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The time cone
Next: The expected number Up: A rederivation of Previous: A rederivation of
At time
If the magnitude of the grain radius
the point
The surface of the time cone, which is the event horizon are the nuclei points 2.3) is an equality. This allows the introduction of the method of characteristics applied to geometric growth [23][22]
[21] into the time cone theory. Such growth theories are based on the time [15]
If 2.3) becomes the equation of the points in a cone,
Compare this equation with one for the time horizon for the point
The time cone can be considered unbounded in the negative time direction, to times before the onset of nucleation, if a value is assigned to
Next: The expected number Up: A rederivation of Previous: A rederivation of
|
{"url":"http://www.ctcms.nist.gov/~cahn/cone/cone/subsection3_2_1.html","timestamp":"2014-04-18T10:43:01Z","content_type":null,"content_length":"7030","record_id":"<urn:uuid:27e4f6ec-6093-420b-a587-7747cebabcac>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
|
American Mathematical Society
Bulletin Notices
AMS Sectional Meeting Program by Day
Current as of Tuesday, April 12, 2005 15:10:09
Program | Deadlines | Registration/Housing/Etc. | Inquiries: meet@ams.org
2000 Western Section Meeting
Santa Barbara, CA, March 11-12, 2000
Meeting #951
Associate secretaries:
Bernard Russo
, AMS
Sunday March 12, 2000
• Sunday March 12, 2000, 7:45 a.m.-1:30 p.m.
Meetings Registration
Foyer, Girvetz Hall
• Sunday March 12, 2000, 7:45 a.m.-1:30 p.m.
Exhibit and Book Sale
Foyer, Girvetz Hall
• Sunday March 12, 2000, 8:00 a.m.-10:50 a.m.
Special Session on Geometric Methods in 3-manifolds, III
Room 2128, Girvetz Hall
Daryl Cooper, University of California, Santa Barbara cooper@math.ucsb.edu
Darren Long, University of California, Santa Barbara long@math.ucsb.edu
Martin Scharlemann, University of California, Santa Barbara mgscharl@math.ucsb.edu
• Sunday March 12, 2000, 8:00 a.m.-10:50 a.m.
Special Session on Schrodinger-Type Operators, III
Room 2115, Girvetz Hall
Abel Klein, University of California, Irvine aklein@math.uci.edu
Svetlana Jitomirskaya, University of California, Irvine szhitomi@math.uci.edu
• Sunday March 12, 2000, 8:30 a.m.-10:50 a.m.
Special Session on Subfactors and Free Probability Theory, III
Room 2112, Girvetz Hall
Dietmar Bisch, University of California, Santa Barbara bisch@math.ucsb.edu
Sorin Popa, University of California, Los Angeles popa@math.ucla.edu
Dan Voiculescu, University of California, Berkeley dvv@math.berkeley.edu
□ 8:30 a.m.
Invariant Subspaces for Voiculescu's Circular Operator (joint work with K. Dykema).
Uffe Haagerup*, Odense University, Denmark
□ 9:20 a.m.
The coalgebra of the free difference quotient in free probability theory.
Dan V Voiculescu*, U.C. Berkeley
□ 10:10 a.m.
Free diffusions and matrix models.
Philippe Biane*, CNRS, Ecole Normale Superieure
• Sunday March 12, 2000, 8:30 a.m.-10:50 a.m.
Special Session on Uniformly and Partially Hyperbolic Dynamical Systems, III
Room 2116, Girvetz Hall
Bjorn Birnir, University of California, Santa Barabara birnir@math.ucsb.edu
Nicolai T. A. Haydn, University of Southern California nhaydn@math.usc.edu
• Sunday March 12, 2000, 8:40 a.m.-10:50 a.m.
Special Session on Geometric Analysis, III
Room 2129, Girvetz Hall
Xian-Zhe Dai, University of California, Santa Barbara dai@henri.math.ucsb.edu
Doug Moore, University of California, Santa Barbara moore@math.ucsb.edu
Guofang Wei, University of California, Santa Barbara wei@henri.math.ucsb.edu
Rick Ye, University of California, Santa Barbara yer@henri.math.ucsb.edu
□ 8:40 a.m.
A Deformation of Noncompact Einstein Solvmanifolds.
Megan M Kerr*, Wellesley College
□ 9:25 a.m.
On Manifolds with Positive Curvature Almost Everywhere.
fred wilhelm*, Univ. of Calf., Riverside
Peter Petersen, UCLA
□ 10:10 a.m.
Orbits of SU(2) - representations and Minimal Isometric Immersions.
Christine M Escher*, Oregon State University
Gregor Weingart, Universitaet Bonn
• Sunday March 12, 2000, 9:00 a.m.-10:50 a.m.
Special Session on History of Mathematics, III
Room 2127, Girvetz Hall
James Tattersall, Providence College tat@providence.edu
• Sunday March 12, 2000, 9:00 a.m.-10:50 a.m.
Special Session on Automorphic Forms, III
Room 2119, Girvetz Hall
Ozlem Imamoglu, University of California, Santa Barbara ozlem@math.ucsb.edu
Jeffrey Stopple, University of California, Santa Barbara stopple@math.ucsb.edu
□ 9:00 a.m.
Hidden symmetries for a renormalized integral of Eisenstein series.
Jennifer Beineke*, Trinity College
Daniel Bump, Stanford University
□ 9:40 a.m.
A Rankin-Selberg integral using the automorphic minimal representation of $SO(7)$.
Daniel Bump, Stanford University
Solomon Friedberg*, Boston College
David Ginzburg, Tel-Aviv University
□ 10:20 a.m.
Sums of quadratic twists of L-functions.
Daniel W Bump*, Stanford University
• Sunday March 12, 2000, 9:00 a.m.-10:50 a.m.
Special Session on Representation Theory of Algebras, III
Room 2123, Girvetz Hall
F. W. Anderson, University of Oregon anderson@math.uoregon.edu
K. R. Fuller, University of Iowa kent-fuller@uiowa.edu
B. Huisgen-Zimmermann, University of California, Santa Barbara birge@math.ucsb.edu
□ 9:00 a.m.
A proof of the flat cover conjecture, and beyond.
Paul C. Eklof*, Univ. of California, Irvine
Jan Trlifaj, Charles University, Prague
□ 9:30 a.m.
Left and right approximations via endoproperties of modules.
Lidia Angeleri Huegel*, University of Munich, Germany
□ 10:00 a.m.
A Family of Noetherian Rings with Their Finite Length Modules under Control.
Markus Schmidmeier*, Florida Atlantic University
□ 10:30 a.m.
• Sunday March 12, 2000, 11:10 a.m.-12:00 p.m.
Invited Address
Zeta functions and Dirichlet series.
Room 1004, Girvetz Hall
Ram M Murty*, Queen's University
• Sunday March 12, 2000, 1:30 p.m.-2:20 p.m.
Invited Address
1D Quasiperiodic operators. Arithmetics and localization.
Room 1004, Girvetz Hall
Svetlana Jitomirskaya*, UCI
• Sunday March 12, 2000, 3:00 p.m.-5:50 p.m.
Special Session on Geometric Methods in 3-manifolds, IV
Room 2128, Girvetz Hall
Daryl Cooper, University of California, Santa Barbara cooper@math.ucsb.edu
Darren Long, University of California, Santa Barbara long@math.ucsb.edu
Martin Scharlemann, University of California, Santa Barbara mgscharl@math.ucsb.edu
• Sunday March 12, 2000, 3:00 p.m.-5:10 p.m.
Special Session on Geometric Analysis, IV
Room 2129, Girvetz Hall
Xian-Zhe Dai, University of California, Santa Barbara dai@henri.math.ucsb.edu
Doug Moore, University of California, Santa Barbara moore@math.ucsb.edu
Guofang Wei, University of California, Santa Barbara wei@henri.math.ucsb.edu
Rick Ye, University of California, Santa Barbara yer@henri.math.ucsb.edu
□ 3:00 p.m.
On locally conformally flat 4-manifolds.
Alice Chang, Princeton University
Jie Qing*, UCSC
Paul Yang, USC and Princeton University
□ 4:30 p.m.
Determinant of Dirac Operators and Global Anomalies.
Siye Wu*, University of California at Santa Barbara and University of Adelaide, Australia
• Sunday March 12, 2000, 3:00 p.m.-4:50 p.m.
Special Session on Automorphic Forms, IV
Room 2119, Girvetz Hall
Ozlem Imamoglu, University of California, Santa Barbara ozlem@math.ucsb.edu
Jeffrey Stopple, University of California, Santa Barbara stopple@math.ucsb.edu
□ 3:00 p.m.
The modular symbol in the Siegel upper half plane.
Arpad Toth*, University of Michigan
□ 3:40 p.m.
Applications of Hecke operators on Siegel modular forms.
Lynne H Walling*, University of Colorado at Boulder
□ 4:20 p.m.
Towards a GL(3) generalization of the Katok-Sarnak formula.
Daniel B Lieman*, University of Missouri and the American Institute of Mathematics
• Sunday March 12, 2000, 3:00 p.m.-6:10 p.m.
Special Session on Subfactors and Free Probability Theory, IV
Room 2112, Girvetz Hall
Dietmar Bisch, University of California, Santa Barbara bisch@math.ucsb.edu
Sorin Popa, University of California, Los Angeles popa@math.ucla.edu
Dan Voiculescu, University of California, Berkeley dvv@math.berkeley.edu
• Sunday March 12, 2000, 3:00 p.m.-4:50 p.m.
Special Session on Representation Theory of Algebras, IV
Room 2123, Girvetz Hall
F. W. Anderson, University of Oregon anderson@math.uoregon.edu
K. R. Fuller, University of Iowa kent-fuller@uiowa.edu
B. Huisgen-Zimmermann, University of California, Santa Barbara birge@math.ucsb.edu
□ 3:00 p.m.
Splitting and Dual Pairs.
Juan Jacobo Simon Pinero, University of Murcia
Victor P Camillo*, University of Iowa
□ 3:30 p.m.
Torsion theories of dual extension algebras.
Xianneng Du*, Anhui University
□ 4:00 p.m.
Weakly cotilting bimodules of finite injective dimension.
Alberto Tonolo*, Universita' di Padova
□ 4:30 p.m.
• Sunday March 12, 2000, 3:00 p.m.-5:50 p.m.
Special Session on Uniformly and Partially Hyperbolic Dynamical Systems, IV
Room 2116, Girvetz Hall
Bjorn Birnir, University of California, Santa Barabara birnir@math.ucsb.edu
Nicolai T. A. Haydn, University of Southern California nhaydn@math.usc.edu
• Sunday March 12, 2000, 3:00 p.m.-5:10 p.m.
Special Session on Schrodinger-Type Operators, IV
Room 2115, Girvetz Hall
Abel Klein, University of California, Irvine aklein@math.uci.edu
Svetlana Jitomirskaya, University of California, Irvine szhitomi@math.uci.edu
□ 3:00 p.m.
First KdV integrals and a.c. spectrum for 1-D Schrodinger operator.
S Molchanov, UNC at Charlotte
M Novitskii, UNC at Charlotte
B Vainberg*, UNC at Charlotte
□ 3:45 p.m.
On the Periodic Magnetic Schr\"{o}dinger Operator.
Yulia Karpeshina*, UAB
□ 4:30 p.m.
A problem in constrained minimization.
Alexander Y Gordon*, UNC-Charlotte
Inquiries: meet@ams.org
|
{"url":"http://ams.org/meetings/sectional/2064_program_sunday.html","timestamp":"2014-04-16T04:36:18Z","content_type":null,"content_length":"61381","record_id":"<urn:uuid:8435ab68-d353-4d57-a6b7-7799ca42879b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by Ben on Sunday, November 25, 2007 at 1:46pm.
Demand is q=-p^2+33p+9 copies of a book sold per week when price is p dollars. Can you help determine what price the company should charge to get the largest revenue?
I solved this as a Max Revenue problem and got x=0 and x=22 so the books should be sold for $22 each. IS THIS RIGHT?
Also, cost is C=9q+100 dollars to sell q copies of a book in a week. What price should the company charge to get the largest weekly profit? What is the max possible profit weekly profit and how can
you be certain that the profit is maximized?
I got the profit function to be P=p(-p^2+33p+9)-9(-p^2+33p+9)+100. I simplified this to P=(-p^2+33+9)(p-9+100). I then took the derivative using the product rule to get P'=-2p+33(p-9+100)+(1)(-p^
2+33p+9). I know I then need to set this equal to zero if it is right, I then have to factor and say which value maximizes profit. IS THIS RIGHT? Please help me to solve this part, I do not know how
to solve P' equal to zero and factor
• Calc - bobpursley, Sunday, November 25, 2007 at 2:22pm
largest revenue is q*p
dR/dp=d/dp (-p^3 + 33p^2 + 9p)=0
solve that equation.
I get (22+-22.3)/2= 22
Now, cost.
Profit= revenue-cost
Profit= qp -9q-100
dP/dp= p(dq/dp) + q -9 dq/dp
where dq/dp=(-2p+33) and q=-p^2+33p+9
setting to zero
check that, then use the quadratic equation to solve it.
• Calc - bobpursley, Sunday, November 25, 2007 at 2:50pm
Yes, p=22 for max R.
On the second, profit, just start with revenue-cost, both as a function of p, simplify it as best you can, then dProfit/dp=0 and solve for p. The factoring will take care of itself...you need
to end up with a quadratic in standard form.
□ Calc - Matt, Sunday, November 25, 2007 at 3:27pm
I did simplify P' to be P'=-2p+33(p-9+100)+1(-p^2+33p+9) now I have to set the derivative,P', equal to zero and I do not know how to solve it.
□ Calc - Ben, Sunday, November 25, 2007 at 3:31pm
We did what Matt said but we do not know what to do next, our teacher told us to set the derivative equal to zero and then factor to get more than one answer and say which one maximizes the
profit but we cannot solve P'=0
• Calc - bobpursley, Sunday, November 25, 2007 at 4:56pm
I didn't check that to make certain the derivative is correct. But if it is, gather terms, and
-p^2+ (-2+33+33)p + 3300 +9=0
check that.
Now use the quadratic equation to find the values of p that solve it.
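A quick symbolic check of both parts (a SymPy sketch using the same demand and cost functions given above):
import sympy as sp

p = sp.symbols('p', positive=True)
q = -p**2 + 33*p + 9                  # weekly demand at price p

revenue = p * q
profit = revenue - (9*q + 100)        # cost C = 9q + 100

p_rev = sp.solve(sp.Eq(sp.diff(revenue, p), 0), p)
p_prof = sp.solve(sp.Eq(sp.diff(profit, p), 0), p)

print("revenue-maximising price:", p_rev, "=", [float(r) for r in p_rev])
print("profit critical prices:", p_prof)
best = max(p_prof, key=lambda r: profit.subs(p, r))
print("profit-maximising price:", best, " maximum weekly profit:", profit.subs(p, best))
print("second derivative there:", sp.diff(profit, p, 2).subs(p, best), "(negative, so it is a maximum)")
This gives a revenue-maximising price of about 22.14 (so 22 after rounding) and a profit-maximising price of 24, with a maximum weekly profit of 3275; the negative second derivative at p = 24 confirms it is a maximum.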
|
{"url":"http://www.jiskha.com/display.cgi?id=1196016409","timestamp":"2014-04-23T18:06:37Z","content_type":null,"content_length":"11957","record_id":"<urn:uuid:ac489181-87e3-4464-9cd3-25bf6f362a22>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Uncorrelated, orthogonal and independent???
June 6th 2010, 01:37 PM #1
Jun 2010
Uncorrelated, orthogonal and independent???
Hi, i have a couple of questions...
I know that when two random vectors are independent, they are uncorrelated. The opposite isn't necessarily true.
What about orthogonal random vectors??
Orthogonality implies independence??
Independence implies orthogonality?
i know for example, that orthogonality exists when the correlation matrix is diagonal, and uncorrelation exists when covariance matrix is diagonal. But i don't understand clearly the relation
with independence...
It's a little confusing to me... Thanks....
Orthogonality implies independence. The opposite may not hold.
To prove it:
Take two vectors X and Y such that they are orthogonal.
i.e. x^Ty=0
let us assume that if possible they are linearly dependent.
i.e. there exists scalar c(not equal to 0)
such that,
this means, c Y^TY=0
implies, Y^TY=0
implies, each element of Y vector is 0.which is absurd.
hence x and y are independent.
The opposite can be proved taking any counter example
x=(1 0 2)
y=(0 1 1)
the above two vectors are independent but not orthogonal.
Dooti, I think the OP was referring to independence in the sense of random variables, not in the sense of "linear dependence."
I would think that what he means by orthogonal is either something along the lines of
$Ex^\top y = 0$
$E(x - Ex)^\top (y - Ey) = 0$.
In neither case does orthogonality imply independence (this is trivial to show if you take x, y to be real numbers). In the first case, independence does not imply orthogonality, while in the
second case it does.
Last edited by theodds; June 10th 2010 at 11:40 AM.
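A quick numerical illustration of that point (a sketch using one standard counterexample: x uniform on [-1,1] and y = x^2, which is uncorrelated with x but certainly not independent of it):
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1_000_000)
y = x**2                                            # y is a deterministic function of x

print("sample correlation:", np.corrcoef(x, y)[0, 1])      # close to 0 by symmetry
print("E[y | |x| > 0.9]:", y[np.abs(x) > 0.9].mean())      # about 0.9
print("E[y | |x| < 0.1]:", y[np.abs(x) < 0.1].mean())      # about 0.003
# The conditional means differ wildly, so x and y cannot be independent.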
|
{"url":"http://mathhelpforum.com/advanced-statistics/148000-uncorrelated-orthogonal-independent.html","timestamp":"2014-04-17T07:02:56Z","content_type":null,"content_length":"35709","record_id":"<urn:uuid:2c6de0c2-a511-4a02-b527-b8fe2b1bcc11>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Minimizing Total Flow Time and Total Completion Time with
Immediate Dispatching
Nir Avrahami, Yossi Azar
February 5, 2003
We consider the problem of scheduling jobs arriving over time in a multiprocessor
setting, with immediate dispatching, disallowing job migration. The goal is to minimize
both the total flow time (total time in the system) and the total completion time.
Previous studies have shown that while preemption (interrupt a job and later continue
its execution) is inherent to make a scheduling algorithm efficient, migration (continue the
execution on a different machine) is not. Still, the current non-migratory online algorithms
suffer from a need for a central queue of unassigned jobs which is a "no option" in large
computing systems, such as the Web.
We introduce a simple online non-migratory algorithm IMD, which employs immediate
dispatching, i.e., it immediately assigns released jobs to one of the machines. We show that
the performance of this algorithm is within a logarithmic factor of the optimal migratory
offline algorithm, with respect to the total flow time, and within a small constant factor of
the optimal migratory offline algorithm, with respect to the total completion time. This
solves an open problem suggested by Awerbuch et al [STOC99].
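To make the notion of immediate dispatching concrete, here is a rough sketch: it uses a generic greedy rule (send each arriving job to the machine that frees up earliest), which is only a stand-in and not the IMD algorithm itself, and the job data are made up.
# Each job is assigned to a machine the moment it arrives and the choice is never revised.
jobs = [(0.0, 3.0), (0.5, 1.0), (0.6, 4.0), (2.0, 0.5), (2.1, 2.0)]   # (arrival time, size)
m = 2                                    # number of machines

free_at = [0.0] * m                      # time at which each machine next becomes idle
total_flow = 0.0
total_completion = 0.0

for arrival, size in jobs:
    i = min(range(m), key=lambda k: free_at[k])       # immediate, irrevocable dispatch
    start = max(arrival, free_at[i])
    finish = start + size
    free_at[i] = finish
    total_flow += finish - arrival                    # flow time = completion - arrival
    total_completion += finish

print("total flow time:", total_flow)
print("total completion time:", total_completion)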
1 Introduction
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/930/3863920.html","timestamp":"2014-04-20T14:08:35Z","content_type":null,"content_length":"8397","record_id":"<urn:uuid:3fe77a9c-4406-43a6-89f6-f485ad7b7396>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lucien Alexandre Charles René de Possel
Born: 7 February 1905 in Marseilles, France
Died: 1974 in Paris, France
René de Possel's parents were Marthe Seignon and Raoul de Possel, who was an examining magistrate. René attended the Institution Mélizan and the Lycée at Marseille before going to Paris to study at
the École Normale Supérieure. After completing his Agrégé de mathematique, de Possel continued to undertake research for his doctorate. He spent time studying at Munich, Göttingen and Berlin which
had a major effect on broadening his education. In 1931 he published a result on the conformal mapping of a simply connected domain in a Göttingen journal. He introduced the notion of sets of maximum
type in a paper published in 1932, and this concept was also studied in his doctoral thesis which was published in 1933.
His studies were helped financially by the award of a Rockefeller Fellowship in 1930-32 before he gained his first appointment as Maître de Conférences at the Faculty of Science at Marseille in 1933.
He then held the same position in the Faculty of Science at Clermont-Ferrand from 1934 where he joined Szolem Mandelbrojt. They would go to Paris to attend the Séminaire de mathématiques in the
Institut Henri-Poincaré every second Monday. This gave them the chance to visit bookstores and libraries as well as meet up with other friends whom they had made during their studies in Paris. They
met regularly for lunch with their friends at the Café Capoulade in boulevard Saint-Michel near the Luxembourg Gardens and it was here that he became one of the founder members of the Bourbaki group.
In the early stages of Bourbaki, de Possel was involved on a subcommittee which considered how they would present integration theory. His views on integration are to be found in the expository
article Les théories modernes de l'intégration which he published in 1946 and in which he sketched an abstract form of the Riemann-Stieltjes integral. On many issues de Possel and André Weil were on
opposite sides in the arguments. At the first Bourbaki congress in July 1935 de Possel was still an active member of the group and much involved with contributing but, largely due to differences with
Weil, he dropped out of the project. De Possel married Yvonne Liberati on 12 August 1935; they had three children, Yann, Maya, and Daphné. In 1936 he published an important book on game theory,
namely Sur la théorie mathématique des jeux de hasard et de réflexion. He published an article in 1935 which was the first to present a general theory of differentiation in abstract measure spaces
and in the following year he published another article in which he gave complete proofs.
De Possel was appointed as professor in the Faculty of Science at Besançon, then in 1938 professor in the Faculty of Science at Clermont. In 1941 he was appointed to a chair of mathematics in
Algiers. He held the chairs of rational mechanics, then differential and integral calculus. In 1951 he was appointed to the specially created chair of higher analysis at the Faculty of Science at
Algiers. From 1959 he was professor of numerical analysis at the Faculty of Science in Paris. We look briefly at the mathematics papers which de Possel published during these years.
In 1939 he published Sur la représentation conforme d'un domaine à connexion infinie sur un domaine à fentes parallèles in which he developed further the ideas on conformal mapping of a simply
connected domain which he had published in 1931. In the second half of the 1940s he published a considerable number of papers. We mention Sur l'indétermination de la puissance d'un torseur réparti in
which he gave proofs of some formulas of use in the mechanics of continuous media, where the differential elements are subjected to couples as well as forces per unit volume; Les principes
mathématiques de la mécanique classique which was based in ideas due to Brelot; Sur la définition d'un torseur réparti et sur l'évaluation de sa puissance which examines when external forces on part
of a body are equivalent to couples alone; Initiation à la topologie resulting from work carried out in Portugal; Sur les systèmes dérivants et l'extension du théorème de Lebesgue relatif à la
dérivation d'une fonction à variation bornée extending the classical theorem for linear Lebesgue measure; and La notion physique d'énergie vis-à-vis des définitions du travail et de la force which
considers the formulation of classical mechanics given by Brelot.
De Possel worked on computer science at the Institut Blaise Pascal of the National Centre for Scientific Research, in particular working on optical character recognition. His pioneering work in this
field means that for many people this is the contribution for which he is best known. He became director of the Institut Blaise Pascal in 1962 and held this post until it was reorganised in 1968. He
continued to publish mathematics papers during the years when he worked at the Institut Blaise Pascal: Deux lemmes sur les variétés à deux dimensions, et leur application au polygone fondamental
d'une variété conforme prolongeable (1962); Sur les variétés à deux dimensions, les ensembles de capacité maximale et le prolongement des variétés conformes (1964); and Sur l'existence d'ensembles
d'entiers qui interviennent en théorie ergodique (1966). We mentioned that one of his early publications was a book on game theory and one of his last publications was also on game theory. In 1968 he
published Graphes et jeux de "prise" (nimspiele) in which he:-
... announces the solution of several variants of the game of Nim with the following rules: Two players alternate in removing matches from p piles T[1],... ,T[p]. The matches removed by a player
in a turn must come from a single pile T[n] and the number removed must belong to a set A[n] of positive integers, fixed by the rules, but always containing 1. The player to remove the last match
wins. (In another version, the player to remove the last match loses.)
Among his hobbies were watersports, and in particular he loved motor boats and underwater fishing.
Article by: J J O'Connor and E F Robertson
JOC/EFR © December 2005 School of Mathematics and Statistics
Copyright information University of St Andrews, Scotland
|
{"url":"http://www-history.mcs.st-andrews.ac.uk/Biographies/Possel.html","timestamp":"2014-04-16T22:20:26Z","content_type":null,"content_length":"14835","record_id":"<urn:uuid:eafba5f3-4a44-4d89-9778-58969f2c94cb>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mplus Discussion >> Labeling variance of an observed variable
Xu, Man posted on Friday, October 24, 2008 - 5:15 am
Hello, I am trying to calculate an effect size, but apparently I couldn't get it to run. I think it's the labeling that I used that caused the problem. Could you help out please? Thank you very much!
ZST37Q01 ON ZPVEPS(b1)
ES = b1*(2*sqrt(var1)/sqrt(var1*b1**2+Theta));
Xu, Man posted on Friday, October 24, 2008 - 5:29 am
Sorry, it's a labeling and typing mistake indeed. I have corrected it to be
ES1 = b1*(sqrt(var1)/sqrt(var1*b1**2+Theta))
But the estimate for ES1 is 0.476, whereas the STDYX standardization output is 0.529, and their t values are 17.190 and 13.814.
Could you advise me why this is, please? Thanks!
Linda K. Muthen posted on Friday, October 24, 2008 - 1:08 pm
I think your problem is that theta is a residual variance not a variance.
Xu, Man posted on Friday, October 24, 2008 - 1:17 pm
Oh i see. thank you. i forgot to add explained variances of the other two variables. i will take another try.
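For reference, a tiny numeric sketch of the STDYX idea (all numbers below are made up): the standardised coefficient is b * SD(x) / SD(y), and Var(y) has to be the full model-implied variance of y — the contribution of every predictor, including covariance terms, plus the residual — not just b1^2 * Var(x1) plus the residual.
import math

b1 = 0.40                       # unstandardised slope (made up)
var_x1 = 1.20                   # variance of the predictor of interest (made up)
var_y_model_implied = 0.95      # full model-implied variance of y (made up)

beta_stdyx = b1 * math.sqrt(var_x1) / math.sqrt(var_y_model_implied)
print("STDYX-style standardised coefficient:", round(beta_stdyx, 3))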
|
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=12&page=3679","timestamp":"2014-04-21T11:02:49Z","content_type":null,"content_length":"20098","record_id":"<urn:uuid:a05312e9-0d09-406b-ae8d-2d83029e6458>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jigsaw puzzle - what is the possible number of combinations?
November 16th 2011, 04:44 AM #1
Nov 2011
Jigsaw puzzle - what is the possible number of combinations?
I'm new here, I do hope I've chosen the right sub-forum to post this question. Anyway - I'm a graphic designer, and recently I've 'designed' a sort of a jigsaw puzzle made out of 16 cubes which
you have to use to create one character (it's for a typography project) in a 4 x 4 grid (you have to use every single cube in a grid specified before).
The question is: what is the approx. number of combinations if you have 16 cubes, with each side having a different geometric shape (6 in total), out of which 2 shapes can be used in 4 different ways.
So I have: 1 circle, 1 hollow circle, 1 right angle triangle, 1 quarter of a circle (forming sort of an arch), 1 square and 1 'empty' square.
Both right-angle triangle and quarter-of-a-circle can be placed in 4 different ways, therefore I suppose increasing the possible number of combinations.
You can see the typographic puzzle by clicking here.
I know you can have 10,000 combinations with a 4 pin number, so I wonder approx. how many combinations you can have with this puzzle?
I did A level maths but I'm afraid I can't think of formula to work it out myself.
Thank you,
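One way to count it, as a sketch only, is to assume the 16 cubes are identical and that any face can be shown in any of its allowed orientations in any cell, so the only choice per cell is which of the visible face-orientations shows:
# 4 rotationally symmetric faces (circle, hollow circle, square, 'empty' square) show 1 way each;
# 2 faces (right-angle triangle, quarter circle) show 4 ways each.
options_per_cell = 4 * 1 + 2 * 4          # = 12 visible states per cube
total = options_per_cell ** 16            # 16 independent cells in the 4 x 4 grid
print(options_per_cell, "options per cell")
print("total arrangements:", total)       # 184,884,258,895,036,416 (about 1.8e17)
That is roughly 1.8 x 10^17, against the 10,000 combinations of a 4-digit PIN; if the cubes are distinguishable from one another, or some orientations are disallowed in some cells, the count changes accordingly.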
|
{"url":"http://mathhelpforum.com/math-puzzles/192032-jigsaw-puzzle-what-possible-number-combinations.html","timestamp":"2014-04-17T13:34:14Z","content_type":null,"content_length":"30148","record_id":"<urn:uuid:900edb1a-64f7-4c21-a204-8ecc7199b146>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Logic, Sin, and Love
I came across this in Ben Myers'
Faith and Theology
blog. It's a quote from Carl Schmitt,
Roman Catholicism and Political Form
“The Catholic Church is a complex of opposites, a
complexio oppositorum
. There appears to be no antithesis it does not embrace…. Ultimately, most important is that this limitless ambiguity combines with the most precise dogmatism and a will to decision as it culminates
in the doctrine of papal infallibility."
Ignoring what Schmitt's book and the blog post are about (politics and the Church), what I find interesting is to compare this quote with one from a book I read recently,
How Mathematicians Think: Using Ambiguity, Contradiction, and Paradox to Create Mathematics
by William Byers.
"...the power and profundity of mathematics is a consequence of having deep ambiguity under the strictest logical control." [p. 77]
It is a pity that, in theology, decision tends to come down to something on the order of papal infallibility, but lacking the means for applying the strictest logical control, what else is there?
Well, one can at least step back from the 19th century, when the doctrine of papal infallibility was made explicit, and consider control by the magisterium. As I've mentioned previously, I have to
admire what the magisterium did with respect to central Christian doctrines, namely that of the Trinity, and of Christ's nature, as described in part 4 of Robert Magliola's
Derrida on the Mend
. What it did was to preserve the ambiguity. The doctrines should not be taken as saying
how it is
that God is One, yet Three, or
how it is
that Christ is 100% human and 100% divine. Rather, they should be taken as saying that in denying or fudging or in any way lessening either side of the contradiction one is being led into heresy.
And that, according to Byers, is how mathematics operates. Take the concept of 'zero', (or better for those acquainted with elementary set theory, the concept of the empty set which is used to define
'zero'). There was mathematics without 'zero' for a long time, but with it, it could do much, much more. Yet 'zero' is inherently ambiguous. It is the presence of absence. True, within ordinary
language one can say "I don't have any apples", so in that sense, there was always an implicit concept of 'zero', but what happened in mathematics is that that concept became reified in such a way
that it could be used with "strictest logical control".
Now I do not expect that this ability can be carried over to theology. Spinoza's dream remains unfulfilled. But what I do expect is that the principle of "preserving the ambiguity" can be more
heavily emphasized, and of course the Path of Reason that I've been describing has that as its method. Its method
is its content:
"Whereas the attempt to make a
object out of a "deep" concept cannot succeed, the drive to do so is itself an expression of the Absolute. Mathematics, we could say, is driven by the need to express the "Infinite" in finite terms,
and this very drive is the "Infinite" in action." [Byers, p. 296]
Can this not be just as true for theology?
Although I have mentioned Owen Barfield a couple of times, I have not so far in this blog said how important he is in shaping my thinking about religion. I wrote a short essay (here) but that
presupposes some acquaintance with contemporary theology. For a longer and more in-depth account, by Caryl Johnston, see here. But there is no substitute for reading his book Saving the Appearances:
A Study in Idolatry, for there is where you will find all the reasoning behind his thesis. Here's a much too short overview of that thesis that I wrote in a comment on Sam Norton's blog:
"Briefly, then, Barfield argues -- based on changes in language, art, and ideas -- that human consciousness has evolved in historical time, that the consciousness common in the pre-Axial Age (prior
to c. 500 BCE) experienced nature differently from after, and that another major change happened c. 1500 CE. (Julian Jaynes, in The Origin of Consciousness from the Breakdown of the Bicameral Mind
makes a similar argument about the earlier change, though Jaynes argues from a materialist perspective). Barfield calls the earlier stage "original participation" which he describes as perceiving
"spirit" on "the other side" (from the observer) of the phenomena. Hence, the corresponding religion was shamanistic, or pagan, and it worked. For we moderns, on the other hand, this participation
has become unconscious, which made the rise of Cartesian dualism, eventually descending into materialism, possible. The intervening period was one of gradually losing this sense of participation, but
the metaphysics of the time remained "participatory", that is, it was a given that the phenomenal was a representation of the spiritual. Mystics are those who have glimpses of a further stage, "final
participation", in which the spiritual aspect of nature is rediscovered, but now experienced as "within" us, rather than, as it was for original participation, "without". The key point is that, as
consciousness changes, so does nature and our relationship with it. Though now we need two words, "breath" and "spirit", the Greeks just had one ("pneuma"), or we have two words "word" and "thing",
biblical Hebrew had one ("daber"). It was not the case (as the modernist filter would have it), that those "primitive" Greeks used "pneuma" in a strictly metaphorical sense for "spirit", but that for
the Greeks, they were perceived as the same, occurring both in nature (as breath) and inside us (as mentality). What must be understood, though, is that the flip side of this sundering is the rise of
intellect, which requires a stronger distinction between "inside" and "outside", so that we can think about the phenomena, rather than, as in pre-Axial times, be thought by the phenomena. "
In my opinion, religious philosophy/theology that does not take this change in human consciousness into account is going nowhere. It will either replace the wisdom of the ancients and medieval
thinkers with modernist idolatry (what I call "materialism plus God"), or replace it with postmodern negativity, or attempt to reconstitute that ancient/medieval wisdom without acknowledging that it
no longer works as it used to.
In the previous post on this topic, the logic of contrafactory identity was introduced, but not much was said on what to do with it. Here I want to continue on with that discussion.
In monotheist religions, God is revealed as unknowable, because God is not an object (cannot be perceived, cannot be captured in discursive reason, etc.). Not being an object, one cannot apply
Aristotelian logic to questions about God. If one does ascribe some attribute to God, that ascription must immediately be put "under erasure", as deconstructionists like to put it, that is,
accompanied by loud warnings that the attribute is being used analogically. To do otherwise is to fall into idolatry. All this, of course, is familiar ground to theologians. Now the problem with
this, for one without faith, is that it is all kicked off by the assumption that God is real yet unknowable, and so one might ask: who cares?
The reason to care is that bit about falling into idolatry. Another place this pops up is in Buddhism, where the problem is self-idolatry, that is, believing in the "inherent existence" of one's
self, which causes attachment, which causes suffering. But here, the lack of inherent existence of the self is not just something revealed (though it is), but something one can work out on one's own,
and indeed in one strain of Buddhism (the Gelukba sect of Tibetan Buddhism), this thinking is the primary practice.
The common sense way of thinking about the self is that it persists as one observes objects. I was not seeing the tree, and now I am. But in stating this, I have also stated that in seeing the tree
the self has changed. Are there two 'I''s involved? If so, what lets me connect them? Nor can one say that most of the self has stayed the same, while just part of it has changed in observing the
tree, for that would imply that the part that has stayed the same is totally unaware of the part that observed the tree (to be aware, it would have to have changed). And so, in order to persist when
it observes objects, the self must not persist when it observes objects. Hence, one concludes: one cannot say that the self persists.
Can one just say that the self is an illusion, that all there really is is change? Then where does the illusion of the self come from? Or to put it another way: how is experiencing an illusory self
different from experiencing a non-illusory self? In either case, there is a sense of persistence, which is to say consciousness. Yet if all there is is change, then there can be no spanning of the
state before a change to after, for that which spans cannot be the change. Hence, one cannot say that the self does not persist.
Now this sort of word-wrangling has been going on since there has been philosophy, East or West. Why shouldn't one, as Wittgenstein and contemporary pragmatists urge, just stop wrangling? There are a
couple of reasons. The first is that by asking "does the self persist", or more generally, "what is self", and as one goes through arguments like the foregoing, one is in effect deconstructing the
self, and that is, according to Buddhists, at least, a good thing to do. It is a way of breaking one's attachment toward self-idolatry. But there is another reason, which I think is more important,
though I admit there is a touch of speculation involved. In an earlier post, I quoted Denys Turner in regard to Aquinas' view of intellectus with respect to ratio. Here is another quote from the same
book (Faith, Reason and the Existence of God, p. 87):
For Thomas, ... reason's powers, pushed to their limit, open up into the territory of intellectus: and they do so, as I shall argue, precisely in the proofs of the existence of God. In those
proofs, we could say, reason self-transcends, and by its self-transcendence, becomes 'intellect'".
As before, I have yet to see whether he can make the case that this applies particularly to Aquinas' proofs of the existence of God, but even if not, I would say they do apply to any exercise of the
logic of contrafactory identity (LCI). Now I do not mean to imply that just by rattling off the above arguments over the persistence of the self one has transcended static intellect. All that that
does is take one to the limit, but not beyond. On the other hand, what better place is there to contemplate what might be beyond?
At this point, consider again the quote from Merrell-Wolff: "It [the experienced flow of contradiction] seemed to be the real underlying fact of all consciousness of all creatures." What I
hypothesize is that dynamic intellect just is the LCI, but seen from its own perspective. It is the LCI that creates the self, with its changing by not changing, etc., or one can say that the self
(or consciousness) is LCI doing its thing. From our (fallen) perspective, we imagine our self as separate from that reasoning, which is to say we only perceive it as a nest of contradictions, and not
as creative contrafactions. And that arises from our belief that we are an inherently existing self that is doing the reasoning, which is to say, is a result of self-idolatry.
Now I mentioned that, for me, this line of thought is speculation, though given Wolff's statement (which I assume is not speculative, but experiential), I have some confidence in it. What is missing,
for me, is what Wolff calls a "shift in the basis of consciousness", which is one way he describes mystical states. From the discussion in the previous paragraph, I would say that the shift is from a
basis of self-idolatry, to a basis of LCI. In seeing the LCI-type reasoning from this (self-idolatrous) side all one sees is limit. By "becoming" the LCI, one would then "see" the self being created
by the LCI. But, of course, there is no button to push to accomplish that shift -- it just happens. Perhaps by living at the limit (i.e., engaging in LCI exercises) one makes the shift more likely,
but that is definitely speculative.
Sam Norton has started a series of posts on "Reasonable Atheism", the intent being to see if the conversation between theists and atheists can be raised above that of Dawkins and company. The problem
largely lies in the difference between how many atheists understand what is meant by 'God' and how many theists understand the word. The problem is compounded in that I had to use the word 'many' in
both cases: the reasonable theist usually means something different from what most atheists mean, but also different from many of those who say they believe in God. Primarily, this disjunct lies in
that (most) atheists think that theists believe in the existence of a being, while the reasonable theist says that God is not a being, and so the phrase "God exists" is problematic. (Note: by
'atheists' I am referring to secular atheists, not, say, Buddhists. Given the common usage in Western countries of the word 'atheist' as meaning someone who rejects all religion, a Buddhist might be
better referred to as a 'religious non-theist' than as an atheist.)
What I want to do here is define some vocabulary that both reasonable theists and reasonable atheists can agree on. Or rather, suggest these definitions, and see what needs to be done in addition to
either get to a common vocabulary, or to see where this attempt fails.
Firstly, I suggest making a distinction between the words 'real' and 'exists'. As I shall use the terms, something exists if (as its etymology suggests) it can "stand out", that is, can be an object
of perception, in that it can be discerned from a background (where 'perception' is to be taken here more generally than sense perception, for example a mathematical object exists because it can be
thought of, an emotion exists because it can be felt). The word 'real', on the other hand (admittedly despite its etymology) is to be taken as that which one takes into account in making decisions,
in the way one lives one's life, and so forth. Needless to say, these definitions are rough around the edges, but I think they are enough to go on with.
I define God as at least the eternal (i.e., non-spatiotemporal), loving intellect that grounds all that exists. Thus, to believe in God (i.e., to be a theist) is at a minimum to claim that the ground
of all that exists is to be characterized with these words (eternal, loving, intellective), while not believing in God is to claim that the ground of all that exists does not have these
characteristics. And, given my distinction between 'real' and 'exists', while one runs into difficulties saying "God exists" (can something be its own ground? can the ultimate background "stand
out"?), one can say without much difficulty, "God is real (or not real)".
Some comments:
I put in that "at least" because I recognize that many theists will demand that something more be put into that definition. In particular, the word "personal" might be thought to be required. But
such objections, I think, are only relevant within the theological community. Similarly if words like 'omnipotent' are added -- additional argumentation is required to discern what is meant by such terms.
On the other hand, I consider the three words used to be a minimum needed to distinguish a theist's God from, say, the God of Einstein. Also, just saying that God is "the ground of all that exists"
does not make a difference that makes a difference.
Many theists may also object in that they take as given that God cannot be defined at all. To this I make two responses. The first, is that I am under no illusion that the definition I gave is final.
It is only intended to provide a common ground for conversations between theists and atheists -- that is, to see if with this definition, the atheist can understand what the reasonable theist roughly
means by 'God'. The second is to point out that all three words used to characterize God are themselves undefined. Again, my intention is to indicate that the difference between the theist and the
atheist is in what each party considers the ground of all that exists is "like", and the three words serve as a minimum to understand roughly what is being claimed in saying that God is or is not real.
The effort that the atheist must bring to this is to understand that the words "loving" and "intellect" are being used analogically. Which is to say, we only know what these words mean insofar as
they are applied to humans. Applying them to the ground of all that exists is, no doubt, problematic. And, of course, we are unable to imagine what non-spatiotemporal reality is like. But these
limitations in our ability to think about God do not in themselves invalidate the possibility that God is real. They just indicate that the concept of God cannot be "thought through", which is to be
expected concerning the ground of all that exists -- the same problem exists in trying to imagine a context in which a Big Bang might occur.
I've just come across the following in Denys Turner's Faith, Reason, and the Existence of God:
...for Thomas, as for the long tradition which he inherits, you begin to occupy the place of intellect when reason asks the sorts of question the answers to which you know are beyond the power of
reason to comprehend. They are questions, therefore, which have a double character: for they arise, as questions, out of our human experience of the world; but the answers, we know, must lie
beyond our comprehension, and therefore beyond the experience out of which they arise. And that sense that reason, at the end of its tether, becomes an intellectus, and that just where it does,
it meets the God who is beyond its grasp, is, I argue, the structuring principle of the 'five ways' of the Summa Theologiae [p. xv].
This is the same idea as in the Goethe quote in my last post: "one is only truly thinking when that which one thinks cannot be thought through", that is, describing what I am calling 'dynamic
intellect'. But what also intrigues me is the idea that this is what Aquinas was getting at with the 'five ways'. Like any modernist thinker (which I am trying to cease to be), I had pretty much
dismissed them as "proofs", but apparently, like Barthians, I have misinterpreted what is going on. But I haven't read Turner's book yet, just the preface, so it remains to be seen if his argument holds up.
Anyway, that such questions do arise out of experience is what happened to me with respect to consciousness, as all-too-briefly discussed here. I conclude that consciousness could not exist unless the
eternal is real, but that, of course, does not resolve the mystery, which is how the eternal and spatiotemporal relate, and that cannot be "thought through". Nevertheless, I can say that by reason
alone, reflecting on normal, everyday experience, one can grasp that there are real mysteries to which "the answers...must lie beyond our comprehension, and therefore beyond the experience out of
which they arise."
My favorite mystic, Franklin Merrell-Wolff, says that there are two ways to salvation, that of Love and that of Reason. But, he emphasizes, they both lead to the same place, where (in theistic
language) God is Known as Loving Intellect, or Intellective Love. (Please excuse the neologism 'intellective' -- just that 'intellectual' and 'intelligent' have connotations I want to avoid. Also,
the excessive use of capital letters can, I realize, be annoying. But it serves a purpose: to remind one that God's Love must be distinguished from human love, and the same for Intellect/intellect).
Christianity has emphasized the path of Love, while Buddhism has put more emphasis on the path of Reason. But, of course, neither downplays the other (except for certain Christian Protestant
strains). The Catholic Catechism, for example, says that "sin is an offense against reason", while the Buddhist should never let Compassion get under the radar.
Some notable names who have taken the path of Reason are the Buddhist Nagarjuna, the Vedantist Shankara, the pagan Plotinus, and the Christian Nicholas of Cusa. And, of course, Wolff, who has, I
think, provided the contemporary reader with the most articulate expression of this path, in his two books Pathways Through to Space and The Philosophy of Consciousness Without an Object, both
reprinted in a single volume: Experience and Philosophy. Another philosophic overview, not of the path, but of the resulting worldview, can be found in Owen Barfield's What Coleridge Thought.
Both paths have dangers. Love can be misdirected, and reason can go astray, which is to say that both are susceptible to idolatry. For this reason, one might say that the first commandment of the
path of Reason is to learn to think with the danger of idolatry in mind. Buddhism has a tool for that, which I will get to shortly. First, though, there is a need to make a distinction within the
general category of reason. Unfortunately, different authors have used different words, or the same words in opposite ways. In particular, Cusa distinguished between intellectus and ratio to mean
what Coleridge meant by reason and understanding respectively. Hence I will borrow some terminology from Pirsig's Metaphysics of Quality (MoQ -- see his book Lila) and call the first 'dynamic
intellect' and the second 'static intellect'. I should mention that the concept of 'dynamic intellect' does not occur in the MoQ, which in my opinion is a major reason that it fails as a metaphysics,
but that's another story. Pirsig applied the words to Quality: dynamic quality and static quality, with the idea that dynamic quality is the undefinable creative whatever (force? power?) that leaves
in its wake static quality. Hence, I am using the words in the same way: dynamic intellect is creative, while our understanding of the created is static intellect. But they cannot really be divided,
in the sense that they are mutually dependent, but that is getting ahead of the story.
The story begins, and never gets past, the intellectual confrontation with mystery. Mystery is that which is real, but can never be captured by static intellect, which is to say, can never be
understood. Idolatry and heresy consist largely of replacing the mystery with something understandable (a practice that is not restricted to religious mysteries). But if there is no possibility of
understanding, why pursue the matter with intellect? In fact, many mystics, and writers on mysticism, enjoin just that -- better to drop it and take the path of Love, for example. Well, that is an
option, no question, and I make no claim that one should pursue it intellectually. But for those who choose to do so...
Goethe said: "One is only truly thinking when that which one thinks cannot be thought through", and that's the kind of thinking we are talking about. The path of Reason is one of purifying one's
intellect. Hence, both Plotinus and Wolff recommend as therapy the study of philosophy and mathematics. It is not that one will find in either discipline the answers to mysteries, but that both serve
as discipline in purifying one's intellect. One thing to note in philosophy, though, is that certain issues constantly recur without resolution (which is to say that resolving them leads to bad
philosophy): free will and determinism, the one and the many, being and becoming, and so on. The path of Reason, however, has a tool that consists of holding these oppositions in tension without
resolving them. It has various names. Cusa called it the 'coincidence of opposites', which I think as a name does not work well. Coleridge called it 'polar logic'. Nishida Kitaro called it the logic
of contradictory identity, along with various variations like 'self-contradictory identity'. I shall adopt this but with one change, thanks to a remark of Barfield's in his discussion of Coleridge,
and that is to substitute the word 'contrafactory' for 'contradictory'. Hence: the logic of contrafactory identity, or LCI. The reason for the substitution is that the logic does not just apply to a
situation where two words are needed that contradict each other, but that in emphasizing one word, the need for the other is made. This situation has been admirably expressed by Wolff:
While in the State [of High Indifference, as he called it], I was particularly impressed with the fact that the logical principle of contradiction had no relevancy. It would not be correct to say
that this principle was violated, but rather, that it had no application. For to isolate any phase of the State was to be immediately aware of the opposite phase as the necessary complementary
part of the first. Thus the attempt of self-conscious thought to isolate anything resulted in the immediate initiation of a sort of flow in the very essence of consciousness itself, so that the
nascent isolation was transformed into its opposite as co-partner in a timeless reality....It seemed to be the real underlying fact of all consciousness of all creatures. [Franklin Merrell-Wolff,
Experience and Philosophy, p.286]
It is that "sort of flow" and "transform[ing] into its opposite" that I mean to get at with the word 'contrafactory'. What this says about consciousness in particular is also to be looked into,
briefly below, but I hope in more detail in another post.
Nishida's logic of contradictory identity was a contemporary version of something that has long existed in Buddhist philosophy, stemming from Nagarjuna, and codified into what is called the
tetralemma. It is invoked when one is confronted with one of these perennial philosophical puzzles, like "does the self exist". What the Buddhist logician does is refute the following four
possibilities: (1) the self exists, (2) the self does not exist, (3) the self both exists and does not exist, and (4) the self neither exists nor does not exist.
Now the actual refutations are carried out using familiar Aristotelian logic: one assumes (1) and deduces a contradiction, then one assumes (2) which also produces a contradiction. Now at this point,
I believe I am veering off from the original Nagarjunic approach (though perhaps not Nishida's -- I'm just not sure). For as far as I can tell, (3) is considered to be refuted simply because it
violates the law of contradiction, and (4) because it violates the law of the excluded middle. One reason for veering off is that nowadays we have quantum mechanics to consider: is an object in a
superposition of states an existing somewhat that violates the principle of contradiction and/or the law of the excluded middle? I think there is an interesting question here, but tangential. But
sticking to questions like the existence of the self, I see (3) and (4) as having more to them. Now (3) may just be an acknowledgment that we cannot think what it means to both exist and not exist,
but (4) I see as a commandment: thou shalt not stop asking the question. This, by the way, is in defiance of the pragmatist, who says: it is a question that is not worth pursuing. Or more accurately,
it is in defiance of the nominalist pragmatist, who believes that words are simply human inventions, and if there are areas where the application of a particular word (like 'self') gets problematic,
just back off from those areas. In contrast, the path of Reason says "don't stop the question", because the mystery is real and can't be ignored -- in this case, without the word 'self' how can one
say that it must -- in some mysterious sense -- die, as most mystics are wont to say?
There is more to say, of course, but for now I want to conclude by referring back to the Wolff quote, in which he is talking about noetic experience while in a mystical state of consciousness. It
ends with "It [the flow from one state to its opposite] seemed to be the real underlying fact of all consciousness of all creatures." What this indicates to me -- and I'll grant that I am speculating
here -- is that the LCI is not just a way for fallible human intellects to confront mystery, but is "in fact" what consciousness is. (The scare quotes are -- as usual -- needed because this is of
course not some fact lying around to be observed.) That is, what Wolff seems to be saying is that consciousness "works" by contrafactory identity. And it is possible that one can say the same about
dynamic/static intellect.
In response to a student who complained that he didn't understand quantum mechanics, Von Neumann is supposed to have answered: nobody understands it. You just get used to it. Of course, quantum
physicists have a set of mathematical tools that -- though they do not allow one to visualize what is going on in the subatomic world, at least allow one to make predictions that coincide with
measurements -- and so they have something to work with which helps in "getting used to it".
Believers, in response to arguments against religion from non-believers, claim that such arguments do not take mystical reality into account -- that God, or more generally, the transcendent, is real,
but ordinary language is incapable of dealing with it. The non-believer responds with charges of obscurantism, that the believer is evading the issue by taking refuge in nonsense.
But there are two issues here. The first is whether or not there is anything that is real but where all attempts at description -- staying within the confines of common sense language and
Aristotelian logic -- fail. The second is what we do about it. I would argue, first, that the subatomic world is such a reality, though that alone does not grant license to the believer to believe
(in God or whatever). To the objection that there is a mathematical language (which of course obeys Aristotelian logic -- the laws of identity, contradiction, and the excluded middle) for quantum
physics, I repeat that this language does not describe the reality -- it just allows the physicist to make predictions. Hence there are a multitude of interpretations of that world, all of which are
metaphysical positions, not scientific.
But, secondly, I would argue that there is an even more obvious reality that qualifies, namely plain, ordinary, everyday consciousness. The reason it qualifies is that the "now" is not an instant --
a point on time's continuum, but instead is extended over a small stretch of time (and space). Because the now is extended, I don't see any way that it could emerge from a strictly spatiotemporal
process. In a spatiotemporal process every event is separated in time and/or space from every other event. Consciousness, on the other hand, puts together zillions of these separated events to form
the "now". Within the "now" is the experience of time passing, but how is that possible? Consciousness somehow connects those zillions of events into one flowing whole, while within a strictly
spatiotemporal process there is no way for events to aggregate as experience of anything larger than a single event. As I see it, this means that consciousness transcends time (and space), and so
cannot itself be a consequence of a spatiotemporal process.
Granted, the argument in the preceding paragraph is no more than arm-waving. But there is enough of a mystery to consciousness that it leads a diehard materialist like Colin McGinn to assert that he
doesn't expect there ever to be an explanation of consciousness, and another (David Chalmers) to hypothesize what he calls "naturalist dualism" to account for consciousness. What I propose instead is
to assert the reality of the non-spatiotemporal (which in theological language is called the eternal -- not to be confused with time everlasting). What if the reason that quantum reality defies
comprehension is that it too is non-spatiotemporal? That would "explain" how an unobserved electron could be in a superposition of states, that the position/momentum uncertainty is there simply
because -- unobserved -- quantum particles are simply not at definite spatiotemporal locations, because at that level there is no space and time. And, of course, it would "account for" the
non-locality observed in the Aspect experiments. But note that I put the words "explain" and "account for" in scare quotes, because appealing to non-spatiotemporal reality is not an understandable
answer. But the point is that if one buys into this line of argumentation, then one should not expect one. Yet something definite has been argued for: that there is a reality for which our ordinary
language fails.
What clinches the argument for me -- and is the reason I became religious -- is that mystics have been saying for millennia that fundamental reality is not spatiotemporal. And they have said so, or so
they claim, by virtue of knowledge (of "experiencing" non-spatiotemporal reality), not by metaphysical guesswork. Should we believe them? Given the argumentation above I have no problem believing
them. But it should be pointed out that mystics also say something else, that just arguing from consciousness and/or quantum physics does not, and that is that the eternal is not merely real, but
also Good, and that it is possible to realize that Goodness. It is that addition that turns all this from metaphysical speculation to religion.
This, then, is my answer to the first issue: there is a reality that defies common sense language, and why it must be dealt with. Still to come: how to forge a language to deal with it.
Pragmatists will point out that they consider many things to be true, but deny that there is likely to be one sufficient theory of truth. Instead, there are varieties of scientific truth, artistic
truth, historical truth, etc., and each has its own ways of establishing themselves. There is, then, no overarching method or criterion for all truths, but if one is a scientist, say, then there are
ways of working that are better than others in leading one to new scientific truths.
I add to that list of varieties of truth what I call salvific truth, and it has its own criterion: the truth will make you free of sin and death. (I'll set aside for now what being "free of death"
might mean.) Thus, a point of doctrine X is salvifically true if by believing in it, one is helped on one's way to salvation. Some things that follow:
- First, I am being more flexible than the Athanasian Creed, which held that to be saved one must believe the items in the creed. I would counter that claim by saying that I think the Buddha became
free of sin and death, yet couldn't have believed in the Resurrection.
- Second, the logic of the criterion does not require X to be itself true in some objective sense. Suppose, for example, that the Virgin Birth was not historically true. Yet, in believing it, one
might have been helped in believing in the power of Jesus to save. On the other hand, if one has no need of believing in the VB in order to be convinced of Jesus' power to save, then the question of
the truth of the VB becomes irrelevant.
What these two points indicate is that salvific truth varies from culture to culture, epoch to epoch, and person to person. Yet that does not mean that anything goes. For there is a test to be made,
and that is whether believing in X or Y has actually helped anyone on their way to becoming free of sin. Unfortunately, that doesn't help, since -- from the non-saved perspective -- how can we know?
Of course, one way to approach this thorny issue is to think that one is saved simply by declaring that, say, Jesus is Lord. And I think this is how early official Christianity had to deal with this
issue, due to its denial of reincarnation and universalism. But nowadays that sounds too arbitrary -- one is not inclined to believe in a God that orders things in that way. Another way is to claim
that everyone is always already saved, and we just don't know it. This might be true in some ultimate sense, but seems to me to be irrelevant. For I can't see any practical difference between "not
saved" and "not knowing that one is safe". Just change the criterion to "knowing the truth will make you free of sin", since the problem is one's propensity to sin. So, given that we have no
mechanical way of applying the criterion, what do we do?
What, I think, we need to do, and what seems to me to be missing in most discussions of Christian doctrine, from a pragmatic standpoint, is ask the question of how a salvific truth works. I don't
mean by this that one should expect to find some formula, the following of which leads to salvation. It is rather a matter of thinking about what revelation has to tell us about ourselves and reality
in general (metaphysics), to get some idea of why we are obstructed by false beliefs. That is, I think that salvific truths work more through negating than through positing. As various people have
pointed out (I learned it from Robert Magliola's Derrida on the Mend) a heresy is substituting something understandable for that which must be maintained as a mystery. We can't understand how Jesus
could be fully divine and fully human, but we can understand that he could be not fully divine, or not fully human, and in so understanding we fall into error. Hence, to believe in a salvific truth
-- and to think about it -- is to face mystery on its own terms, not on what we as sinners substitute for it. What we are doing, then, is learning to think in a new way. As Coleridge put it, it is
making a distinction between Reason and Understanding. Only the former works creatively.
(I'll have more to say about this in future posts.)
"Religious philosophy" is what I put down as my "interests" in my profile. It is not "philosophy of religion". The term is a generalization of "Christian philosophy" (as used in the opening chapter
of Gilson's The Spirit of Medieval Philosophy), or "Buddhist philosophy", etc. That is, it is philosophizing from a religious perspective, or "faith seeking understanding". But, it may be objected,
there can be no such thing as a general religious philosophy, since the faith of one religion is different from that of another. To this I respond that I think of myself as religious, but since I am
not a practitioner of one or the other of these more or less established faiths, I have no choice but to, in effect, define my own religion, though of course drawing on the wisdom of the established
faiths. As I see it, this is one way of responding to the problem raised, for example, in Peter Berger's The Heretical Imperative. We live in pluralist society, and if I am unable to bring myself to,
say, choosing some Christian denomination, or some Oriental tradition, then this is the only way forward for me -- given that I reject the agnostic option.
William James characterized religion (in Varieties of Religious Experience) as (my paraphrase)
1. Acknowledging that there is something fundamentally wrong with me, and
2. Seeing the fix for that wrongness in reconnecting to something transcendent.
What I shall be doing in these posts is taking that characterization as a definition of religion. (By "definition", by the way, I only mean to indicate that when I define X, that this is how I shall
be using the word 'X'. Others may use it in other ways, which is ok as long as one is aware of the different usages.) And what I see myself as doing is carrying out a religious experiment. Could it
be that -- now that modernism is failing, yet there are good reasons not to fall back into a pre-modern monocultural form of religious practice -- the time for "being a Christian (or whatever)" is
passing? To be clear, I do not think it is wrong to belong to a religious tradition (on the other hand, I do think it is wrong to be without religion altogether), so the experiment is to see if being
religious without signing up for a particular tradition works out. Not that I expect anything conclusive to come of it.
I don't know what the author of the song "Church of Logic Sin and Love" (by The Men) meant by it. I've just borrowed the three words, since they serve as central figures around which I can discuss
what I think religion is about.
Love: There won't be much if anything on this topic, not because it isn't supremely important -- just that I have nothing new or unusual to say about it.
Sin: In my view, religion isn't religion unless it acknowledges Original Sin, or something similar. In Hinduism there is Maya, in Buddhism, Ignorance. Granted, I don't think the Christian tradition
of Original Sin as being "disobedience" plays well any more, nevertheless, I accept the notion that there is something fundamentally wrong with us, and it has to do with being out of touch with the
transcendent. They are most out of touch who deny the reality of the transcendent.
Logic: In this blog, the word "logic" should be treated as the adjectival form of the word "Logos", as in: By [the Logos] was everything made that was made. Logic, then, becomes a means to Logos. The
usual meaning of the term, which I shall identify as Aristotelian logic, is what you get when you restrict one's thinking to the sense perceptible (the spatiotemporal). It is of little value when
directed at the mental, or the transcendent. But (see Sin), attempts at creating a logic that is of value in these areas are fraught with peril. Which is why the apophatic tradition exists. I think,
though, that something can be done that is more positive.
2.11: Estimation to Check Decimal Multiplication
Have you ever wondered about high school track records? Travis has been doing some research.
In his preparation for running the 440, Travis began to research national track records. On July 24, 1982, Darrell Robinson from Wilson High School in Tacoma Washington ran the 440 in 44.69. Travis
is amazed by the time and ready to work on his own personal best.
If Darrell ran this 1.5 times, what would the total time be?
You can solve this problem by multiplying leading digits. Pay attention and you will understand how to do this by the end of the Concept.
With addition and subtraction of decimals, you have seen how estimation works to approximate a solution. It is a good idea to get in the habit of estimating either before or after solving a problem.
Estimation helps to confirm that your solution is in the right ballpark.
With multiplication, rounding decimals before multiplying is one way to find an estimate. You can also simply multiply the leading digits.
Remember how we used front-end estimation to approximate decimal sums and differences? Multiplying leading digits works the same way. The leading digits are the first two values in a decimal. To
estimate a product, multiply the leading digits exactly as you have been—align decimals to the right, then insert the decimal point into the solution based on the sum of decimal places in the original factors.
Estimate the product, $6.42 \times 0.383$
First we need to identify the leading digits, being careful to preserve the placement of the decimal point in each number. In 6.42, the leading digits are 6.4. In 0.383, the leading digits are .38.
Note that 0 is not one of the leading digits in the second decimal. Because zero is the only number on the left side of the decimal point, we can disregard it. Now that we have the leading digits, we
align the decimals to the right and multiply.
$\begin{aligned}& \quad \ \ 6.4\\& \ \ \underline{\times \; .38\ }\\& \quad \ \ 512\\& \underline{+ \; 1920 \ }\\& \quad 2.432\end{aligned}$
Find the product. Then estimate to confirm your solution. $22.17 \times 4.45$
This problem asks us to perform two operations—straight decimal multiplication followed by estimation. Let’s start multiplying just as we’ve learned: aligning the decimals to the right and
multiplying as if they were whole numbers.
$\begin{aligned}& \quad \ \ \ 22.17\\& \quad \ \underline{\times \; 4.45 \ }\\& \quad \ \ 11085\\& \quad \ \ 88680\\& \underline{+ \; 886800 \ }\\& \ \ 98.6565\end{aligned}$
Note the placement of the decimal point in the answer. The original factors both have two decimal places, so once we have our answer, we count over four decimal places from the right, and place the
decimal point between the 8 and the 6.
Now that we have multiplied to find the answer, we can use estimation to check our product to be sure that it is accurate. We can use the method multiplying leading digits. First, we reduce the
numbers to their leading digits, remaining vigilant as to the placement of the decimal point. The leading digits of 22.17 are 22. The leading digits of 4.45 are 4.4. Now we can multiply.
$\begin{aligned}& \quad \ 22\\& \underline{\times \; 4.4 \ }\\& \quad \ 88\\& \underline{+ \; 880 \ }\\& \ \ 96.8\end{aligned}$
If you look at the two solutions, 98.6 and 96.8, you can see that they are actually very close. Our estimate is very close to the actual answer. We can trust that our answer is accurate.
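For readers who want to check the method mechanically, here is a small Python sketch (not part of the CK-12 lesson; the function names are mine). It truncates each factor to its leading digits, preserving the position of the decimal point, and then multiplies:

import math

def leading_digits(x, keep=2):
    # Keep only the first `keep` significant digits of x, preserving the
    # decimal point's position: 22.17 -> 22, 4.45 -> 4.4, 0.383 -> 0.38.
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))   # position of the first digit
    scale = 10.0 ** (exponent - keep + 1)
    return math.trunc(x / scale) * scale

def estimate_product(a, b):
    return leading_digits(a) * leading_digits(b)

print(estimate_product(6.42, 0.383))   # about 2.432 (exact product: 2.45886)
print(estimate_product(22.17, 4.45))   # about 96.8  (exact product: 98.6565)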
Find each product. Use multiplying leading digits to check your answer.
Example A
$67.9 \times 1.2$
Example B
$5.321 \times 2.301$
Solution: $12.19$
Example C
$8.713 \times 9.1204$
Solution: $79.17$
Here is the original problem once again.
In his preparation for running the 440, Travis began to research national track records. On July 24, 1982, Darrell Robinson from Wilson High School in Tacoma Washington ran the 440 in 44.69. Travis
is amazed by the time and ready to work on his own personal best.
If Darrell ran this 1.5 times, what would the total time be?
To figure this out, we can first write a problem showing only the leading digits.
$44 \times 1.5$
Next, we multiply: $44 \times 1.5 = 66$.
If Darrell had run this distance 1.5 times, his total time would be about 66 seconds.
Here are the vocabulary words in this Concept.
Product: the answer in a multiplication problem.
Estimation: finding an approximate answer through rounding or multiplying leading digits.
Guided Practice
Here is one for you to try on your own.
$.4561 \times .32109$
To multiply by using leading digits, we need to identify the leading digits first.
Now we multiply.
$.45 \times .32 = .144$
This is our answer.
Video Review
Here is a video for review.
- This is a Khan Academy video on multiplying decimals.
Directions: Estimate the products by multiplying the leading digits.
1. $7.502 \times 0.9281$
2. $46.14 \times 2.726$
3. $0.39828 \times 0.16701$
4. $83.243 \times 6.517$
5. $5.67 \times .987$
6. $7.342 \times 1.325$
7. $17.342 \times .325$
8. $.34291 \times 1.525$
9. $.5342 \times .87325$
10. $.38942 \times .9825$
11. $7.567 \times 3.325$
12. $12.342 \times 11.325$
13. $21.342 \times 14.555$
14. $.110342 \times .098325$
15. $37.1342 \times 1.97325$
Directory tex-archive/macros/latex/contrib/mathexam
mathexam, a LaTeX package for typesetting exams.
Copyright (C) 2007 Jan Hlavacek
This package can help you typeset exams (mostly in mathematics and related
disciplines where students are required to show their calculations followed by
one or more short answers). It provides commands for inclusion of space for
calculations, as well as commands for automatic creation of ``answer spaces''.
In addition, the package will automatically create page headers and footers,
and will let you include instructions and space for students to put their name.
README This file
mathexam.ins Batch file, run through LaTeX to generate mathexam.sty
mathexam.dtx Docstrip archive, run through LaTeX to generate documentation
sample.tex A sample exam
sample.pdf A pdf version of the sample exam
mathexam.pdf A pdf version of the documentation
This program may be distributed and/or modified under the
conditions of the LaTeX Project Public License, either version 1.2
of this license or (at your option) any later version.
The latest version of this license is in
and version 1.2 or later is part of all distributions of LaTeX
version 1999/12/01 or later.
Download the complete contents of this directory in one zip archive (155.5k).
mathexam – Package for typesetting exams
The package can help you typeset exams (mostly in mathematics and related disciplines where students are required to show their calculations followed by one or more short answers). It
provides commands for inclusion of space for calculations, as well as commands for automatic creation of “answer spaces”. In addition, the package will automatically create page
headers and footers, and will let you include instructions and space for students to put their name.
Documentation Sample exam
Version 1.00
License The LaTeX Project Public License
Copyright 2007 Jan Hlavacek
Maintainer Jan Hlavacek
Contained in TeXLive as mathexam
MiKTeX as mathexam
Topics support for typesetting mathematics
exams, quizzes, etc.
ML analysis on PAUP
Mary K.Kuhner mkkuhner at kingman.genetics.washington.edu
Fri Jan 28 01:28:09 EST 2000
In article <86qkp2$4ae$1 at mercury.hgmp.mrc.ac.uk>,
Brice Quenoville <quenovib at si.edu> wrote:
[If you could keep your lines to 70 characters it would be
much easier to respond to your messages.]
>1/ I am analyzing nuclear data sequences for which I have some sites
>coded as ambiguities (following a IUB code). I am wondering how
>the last Paup version exactly treats such positions during a ML search.
I only know in detail how PHYLIP treats them, but I suspect PAUP*
is the same.
Normally the likelihood of an observed site is 1.0 for the base
observed, and 0.0 for the three bases not observed. An ambigous
site has a likelihood of 1.0 for each of the bases it could be.
If you'd like to think of this in a parsimony framework, it's
analogous to saying that we'll assume whichever of A or G allows
the more parsimonious solution; if we have several taxa who
are ambiguous for this site, we'll assume A or G for each
one separately to get the most parsimonious solution.
There is still information in such a site, and it is worth including
in the analysis.
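To make this concrete, here is a minimal sketch of how an ambiguous tip enters a site likelihood calculation (this illustrates the treatment described above but is not PHYLIP or PAUP* code; the two-taxon setup, the Jukes-Cantor model, and the function names are my own):

import numpy as np

def jc69_matrix(d):
    # Jukes-Cantor transition probabilities for a path of length d
    # (expected substitutions per site).
    same = 0.25 + 0.75 * np.exp(-4.0 * d / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * d / 3.0)
    P = np.full((4, 4), diff)
    np.fill_diagonal(P, same)
    return P

# Tip likelihood vectors over the states (A, C, G, T):
# an observed A is (1,0,0,0); the IUB ambiguity R ("A or G") is (1,0,1,0).
tip_A = np.array([1.0, 0.0, 0.0, 0.0])
tip_R = np.array([1.0, 0.0, 1.0, 0.0])

pi = np.full(4, 0.25)   # JC69 equilibrium base frequencies
P = jc69_matrix(0.1)    # total path length between the two tips

# Site likelihood for a two-taxon tree: sum over the states at both tips.
site_lik = sum(pi[a] * tip_A[a] * sum(P[a, b] * tip_R[b] for b in range(4))
               for a in range(4))
print(site_lik)

The only change an ambiguous base makes is the extra 1.0 in the tip vector; the rest of the likelihood machinery is untouched, which is why such sites can simply be kept in the analysis.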
>2/The last updated version of Paup provides a table including
>branch lengths, their standard errors and a LRT test under the null
>hypothesis that a branch has zero length. For some branches I have a
>standard error that is weakly higher or equal to the branch length
>itself. However, the LRT tests still tells me that these branches
>are significantly different from zero and thus statistically do exist.
These are two quite different statistical tests, making different
assumptions and approximations. They frequently disagree on
borderline cases. As far as I know there is no clear rule for
preferring one test over the other. I'd say that a branch on which
the tests disagree is weakly supported at best.
PHYLIP shows the same behavior--I inadvertently used such a case
as a teaching example once, and the students really grilled me on it.
Can't help with your other two questions, sorry.
Mary Kuhner mkkuhner at eskimo.com
Increasing probability
May 19th 2010, 03:49 AM #1
Junior Member
Apr 2010
Increasing probability
Hello all,
suppose there is a probability x that an event happens (and clearly 1-x it doesn't). The catch is, x is not constant; it depends on how many times the event has happened before. In other words, with a
probability x, the event happens and x increases. With a probability 1-x, the event doesn't happen, and x remains the same. Hence the probability of the event happening increases with how many
times the event has happened.
I know this isn't a lot of detail. I am just looking for a suggestion of how to begin modelling something like this. I don't know where to even start. Does anybody have any quick comments about
where I should look?
Thank you very much,
May 19th 2010, 06:17 AM #2
MHF Contributor
Aug 2008
Paris, France
This is called a reinforced process. Simplest example is Polya's urn: Consider an urn with balls of two different colors. Each time we pick a ball, we put it back together with a new ball of the
same color (hence this color becomes more likely).
What you're asking for is quite vague. For any sequence of numbers $(N_k)_{k\geq 0}$ in $[0,1]$, you can define a process $(X_n)_{n\geq 1}$ with values in $\{0,1\}$ such that, given $X_1,\
ldots,X_n$, the probability of $X_{n+1}=1$ is $N_{K(n)}$ where $K(n)=\#\{1\leq i\leq n: X_i=1\}$. If $(N_k)_k$ is increasing, this is a general reinforced process. The choice of this sequence is
the only data you need.
May 21st 2010, 04:36 AM #3
Junior Member
Apr 2010
Thanks for your response,
Basically every time the event occurs, the probability of it happening again increases by x%. For example, suppose the probability of drawing a red ball is .5; then if you draw a red ball today,
the probability you draw a red ball tomorrow is .55 (if x = .1), and so on. I want to express, as a function of time t, the expected number of times the event has happened at any given time. Is
this possible to do? Thanks in advance,
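There is no tidy closed form for the expected count in general, but it is easy to estimate numerically. Here is a minimal Monte Carlo sketch (assuming the multiplicative reinforcement described in the follow-up post -- on each occurrence p is multiplied by (1 + x), capped at 1; the function and parameter names are mine):

import random

def expected_counts(p0=0.5, x=0.1, steps=20, trials=100000):
    # Estimates E[number of occurrences by step t] for t = 0..steps.
    totals = [0.0] * (steps + 1)
    for _ in range(trials):
        p, count = p0, 0
        for t in range(1, steps + 1):
            if random.random() < p:
                count += 1
                p = min(1.0, p * (1 + x))   # reinforcement on success
            totals[t] += count
    return [s / trials for s in totals]

counts = expected_counts()
for t in (1, 5, 10, 20):
    print(t, round(counts[t], 3))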
A. Gyrokinetics with velocity shear
B. Numerical setup
A. Heat flux
B. Momentum flux
C. Turbulent Prandtl number
A. Possibility of bifurcation
B. Inverting the problem
C. Interpolation
D. Neoclassical transport
E. Bifurcation
A. Modelling Q[t]
1. Dependence of Q[t] on κ
2. Parameterisation of Q[t]
B. Modelling Π[t]
C. The modelled bifurcation
A. Heat flux at constant Π/Q
B. Temperature gradient jump
C. Neoclassical heat flux, turbulent momentum flux
D. Bifurcation by lowering Q/Q[gB]
E. Optimum Q
F. Transition region
air conditioning intensity
The ratio of air-conditioning consumption or expenditures to square footage of cooled floor space and cooling degree-days (base 65°F). This intensity provides a way of comparing different types of
housing units and households by controlling for differences in housing unit size and weather conditions.
The square footage of cooled floor space is equal to the product of the total square footage times the ratio of the number of rooms that could be cooled to the total number of rooms. If the entire
housing unit is cooled, the cooled floorspace is the same as the total floorspace. The ratio is calculated on a weighted, aggregate basis according to this formula:
air-conditioning intensity = Btu for air conditioning/(cooled square feet × cooling degree days)
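As a quick illustration (the numbers below are invented for the example, not taken from the source):

# Hypothetical household: 6,000,000 Btu used for air conditioning,
# 1,200 square feet of cooled floor space, 1,500 cooling degree-days (base 65 F).
btu_for_air_conditioning = 6_000_000
cooled_square_feet = 1_200
cooling_degree_days = 1_500

intensity = btu_for_air_conditioning / (cooled_square_feet * cooling_degree_days)
print(intensity)   # Btu per cooled square foot per cooling degree-day (about 3.33 here)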
Why use Quantitative Writing
Quantitative writing (QW) promotes quantitative literacy as well as writing proficiency. QW assignments link "writing-across-the-curriculum" with "mathematics-across-the-curriculum". At the heart of
both movements is the importance of critical thinking. A good QW assignment engages students with an open-ended, ambiguous, data-rich problem requiring the thinker to understand principles and
concepts rather than simply to apply formulae. Assignments ask students to produce a claim with supporting reasons and evidence rather than a reductive "right answer." By asking students to find
meaning in data and to use numbers in argument, QW assignments promote growth in critical thinking and real world problem-solving.
Reasons Why QW Assignments Are Valuable
In the increasingly complex, data-rich global environments of the 21st Century, successful students need to be equipped with flexible, adaptive analytical higher-order strategies.
1. Quantitative writing addresses the need for these higher-order thinking skills.
• Typically, students regard mathematics as the pursuit of right answers through the algorithmic application of increasingly complex formulae and calculations. Doing math means to do the problem
sets at the back of a textbook chapter. Math teachers' attempts to create a real-world context for mathematics often result in contrived story problems posing questions that no one would really
ask: "Train A leaves St. Louis at 8:00 traveling at 60 miles per hour, while Train B . . ." Such story problems are really well-structured, algorithmic problems in disguise.
• By contrast, a QW assignment poses an ill-structured problem requiring students to construct complex, nuanced arguments with supporting reasons and evidence. QW assignments immerse students in a
rich critical thinking environment very different from that of a math story problem. In other words, QW asks and encourages students to acquire expertise by throwing away "the cookbook" and
thinking the way experts do as they analyze and explain real world problems.
• Such modes of learning require students to develop not only domain expertise, but also metacognitive "critical thinking" skills.
2. Quantitative writing is a powerful tool for enhancing student learning.
• The Writing to Learn movement has demonstrated that composition is a productive process. Students don't simply write down their finished thoughts; rather, they use writing to discover what they
• Writing promotes greater engagement with disciplinary concepts and theories.
3. QW also models and encourages domain transfer; for example, assignments could easily link "writing-across-the-curriculum" with "mathematics-across-the curriculum," or with a variety of social
science and humanities curricular emphases.
• QW assignments follow "best practices" recommended by the Mathematical Association of America to teach quantitative reasoning across-the-curriculum to enable students in all disciplines to solve
real world problems.
In its 1998 white paper Quantitative Reasoning for College Graduates: A Complement to the Standards the Mathematical Association of America argues that "colleges and universities should expect
every college graduate to be able to apply simple mathematical methods to the solution of real-world problems." The white paper calls for a "mathematics-across-the-curriculum" program analogous
to writing-across-the-curriculum. This proposal would fit particularly well in disciplines, such as economics or physics, where math is commonly used as a tool of discovery.
• Many theorists believe that writing for different audiences and purposes not only helps students learn to transfer writing skills from one context to another but also deepens engagement with
subject matter concepts.
Dennis J. Palmini in "Using Rhetorical Cases to Teach Writing Skills and Enhance Economic Learning" argues that:
"[S]tudents who practice writing for various audiences with differing needs for economic information evoke a complex cognitive process that requires them to think more intensively about
economics. . . . . They must do more than regurgitate their economic learning; they must, instead, actively transform their knowledge into a form useful to the reader". (
Palmini, 1996 , p. 208)
4. QW assignments require students to analyze and use quantitative data in rhetorical contexts,
skills not typically addressed in math courses. As Rutz & Grawe (2009) observe:
Numbers serve rhetorical functions: providing context, making evidence specific, showing change over time, imparting precision in language, and authorizing confidence in writers and respect on
the part of readers. Even well-prepared student writers need practice with these uses of numbers, because much of their experience with numbers is limited to formal situations that require them
to solve problems with correct answers. Using numbers to reason and persuade, in contrast, draws on skills that are less mathematical and more a function of logic.
• QW assignments require students to analyze and use quantitative data in rhetorical contexts in which writers aim to influence readers' views of a topic. Such contexts activate critical thinking
in ways often not required in a math class.
• To take a simple example, a math class teaches students the difference between a mean and a median. But in making a rhetorical argument with numbers, the thinker must decide when to use means
versus medians-for example, whether to report the median income of a certain population segment or the mean income-and to understand both the conceptual and ethical significance of the
• Likewise math classes teach students how to read tables and graphs but often not how to construct tables and graphs for a rhetorical purpose. In a research project with finance students at
Seattle University, a research team discovered that students were rarely taught to think of graphs rhetorically as arguments; many thought that including graphics in a paper meant attaching
spread sheets (Bean, Earenfight, and Carrithers, 2005).
• QW assignments can give students practice at creating rhetorically effective graphics, including composing the kinds of explicit titles, legends, and labels that readers need. These examples
suggest how QW assignments teach students to think about numbers in ways not typically addressed in math classes.
5. QW assignments promote quantitative skills needed for 21st Century citizenship and careers.
• Most issues of public policy have a significant quantitative dimension. Whether deliberating about health care, energy usage, or immigration policy, effective citizens must be able to interpret
and analyze numbers, read graphs, understand simple statistics, and recognize the ways that numerical data can be manipulated for rhetorical effect. QW assignments thus help develop students for
responsible citizenship.
• Many careers require extensive use of quantitative skills. Persons most likely to advance in their careers need not only to analyze quantitative data but also to argue with numbers. Likewise
career success requires effective writing skills. QW assignments simultaneously promote the quantitative and verbal literacy needed for career success. As Colander and McGoldrick (2009, 4) note:
"Employers are looking for inquisitive students who have a passion for learning, not ones who have learned specific skills. They prefer general skills such as critical thinking,
quantitative, and communication skills."
• Writing ability is also important for graduate study and lifelong learning.
I have a loan of $50,000 at an interest rate of 9%, compounded monthly, for 7 years. What is the monthly payment? Please show the formula and the variables used to calculate the payment. I have
been told the answer is 804.45, but I don't know how to come up with that answer.
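One standard way to get that figure is the fixed-payment (amortized loan) formula M = P*i / (1 - (1 + i)^(-n)), where P is the principal, i the periodic (monthly) interest rate, and n the number of payments. A quick check in Python:

P = 50_000          # loan amount
i = 0.09 / 12       # monthly rate: 9% nominal, compounded monthly
n = 7 * 12          # 84 monthly payments

M = P * i / (1 - (1 + i) ** (-n))
print(round(M, 2))  # 804.45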
Four-Color Map Problem: Some History
Date: 12/10/97 at 19:57:56
From: Inoka Perera
Subject: The Four colour Problem
I need a description about the above topic. Could you please help me
out? Thanks.
Date: 12/11/97 at 05:50:23
From: Doctor Anthony
Subject: Re: The Four colour Problem
The subject was first raised by a part-time mathematician, Francis
Guthrie, in 1852. He was colouring in a map of the counties of Britain
and was intrigued by the fact that four colours appeared to be
sufficient regardless of the complexity of boundary shapes or how many
regions had a common border. He passed the problem on to University
College London, to the eminent De Morgan, who in turn passed it to the
great William Hamilton. Hamilton was unable to invent a map that
required five colours, but neither could he prove that no such map existed.
Like Fermat's Last Theorem, this apparently trivial problem generated
a great deal of interest and activity. In 1879 a British mathematician
Alfred Kempe published a 'proof' that was accepted by the mathematics
establishment until in 1890 Percy Heawood of Durham University showed
that the so-called proof was fundamentally flawed. The search
continued, and like Fermat's theorem led to great advances in number
theory, so the four-colour problem gave a stimulus to the new and
increasingly important topic of topology.
The first breakthrough in the four-colour problem came in 1922 when
Philip Franklin ignored the general problem and settled for a proof
which showed that any map containing 25 or fewer regions required only
four colours. This was extended in 1926 to 27 regions, in 1940 to 35
regions, and in 1970 to 39 regions. Then in 1976 two mathematicians at
the University of Illinois, Haken and Appel, came up with a new
technique which would revolutionise the concept of mathematical proof.
Haken and Appel used the ideas of Heinrich Heesch that the infinity of
infinitely variable maps could be constructed from a finite number of
finite maps. They reasoned that by studying these building-block maps
it would be possible to attack the general problem.
This proved very difficult in practice to achieve, because the number
of building block configurations could not be reduced below 1482. To
crank through all the permutations that might occur with this number
of configurations would take a lifetime. Enter the age of the
computer. In 1975, after five years of working on the problem, they
turned to the new number-cruncher and in 1976 after 1200 hours of
computer time were able to announce that all 1482 maps had been
analysed and none of them required more than four colours.
The problem with this type of proof is that only another computer can
carry out the customary check on its validity. Some mathematicians are
most reluctant to accept it because no one, however patient, could
work through the exposition, line by line, and verify that it is
correct. It has been disparagingly referred to as a 'silicon proof'.
The fact is that mathematicians will increasingly have to rely on such
methods. The age of the purist of pure logic as the only acceptable
technique in mathematics has probably passed.
-Doctor Anthony, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
|
{"url":"http://mathforum.org/library/drmath/view/52535.html","timestamp":"2014-04-18T15:50:57Z","content_type":null,"content_length":"8115","record_id":"<urn:uuid:2164103d-323b-4628-b2c5-1defde5b4ce9>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by algebrajunkie on Tuesday, April 26, 2011 at 8:40pm.
Shellie pays $4.00 for a square piece of wood, which she makes into a stop sign by cutting the corners off. what is the cost of the wasted part?
I don't even know where to begin?
do I take $4.00*1/8....which equals half and subtract that from 4.00...getting $2.50
• college algebra - drwls, Tuesday, April 26, 2011 at 9:10pm
Check this out:
It looks like 2/9 of the original square gets removed.
• college algebra - Reiny, Tuesday, April 26, 2011 at 9:11pm
Each of the cut-off corners would be an isosceles right-angled triangle.
The stop sign will be an octagon with each side equal to x.
Consider one of the triangles, the hypotenuse will be x.
by Pythagoras you can show that each of the equal sides must be x/√2
each side of the original square is then equal to
x/√2 + x + x/√2 = x(2+√2)/2
and the original area = x^2(2+√2)^2/4 = x^2(3+2√2)/2
area of one triangle = (1/2)(x/√2)(x/√2) = x^2/4
or the total wasted part is x^2
so the part wasted is x^2/(x^2(3+2√2)/2))($4)
= 8/(3+2√2) dollars or appr. $1.37
• correction: college algebra - Reiny, Tuesday, April 26, 2011 at 9:19pm
last part should be .....
each side of the original square is then equal to
x/√2 + x + x/√2 = x(2+√2)/√2
and the original area = x^2(2+√2)^2/2 = x^2(3+2√2)
area of one triangle = (1/2)(x/√2)(x/√2) = x^2/4
or the total wasted part is x^2
so the part wasted is x^2/(x^2(3+2√2))($4)
= 4/(3+2√2) dollars or appr. $0.69
• college algebra - drwls, Tuesday, April 26, 2011 at 9:23pm
Reiny is correct for a regular octagon. In looking at the figure of
I erroneously assumed that each pair of cut-off corners amounted to 1/9 of the square. That would amount to $0.89 wasted.
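A quick numeric check of Reiny's corrected figure (my own sketch, not part of the original thread): for a regular octagon of side x cut from a square, the square's side is x(1 + √2), the four cut-off triangles have total area x², so the wasted fraction is 1/(3 + 2√2) ≈ 0.172, and 0.172 × $4 ≈ $0.69.

from math import sqrt

x = 1.0                        # octagon side length; any value cancels out
square_side = x * (1 + sqrt(2))
wasted_area = x ** 2           # four corner triangles, each (1/2) * (x/sqrt(2))**2
fraction = wasted_area / square_side ** 2
print(round(4 * fraction, 2))  # cost of the wasted wood, about 0.69 dollars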
{"url":"http://www.jiskha.com/display.cgi?id=1303864822","timestamp":"2014-04-20T18:40:13Z","content_type":null,"content_length":"10509","record_id":"<urn:uuid:cd89c28a-b887-4d89-abb5-76bd9eac92c5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Simple Application of Graph Theory
A graph is simply a set of vertices (or endpoints) and a set of edges that connect some or all of the vertices.
Figure 1 illustrates a basic graph.
The vertices in Figure 1 are the numbered circles, and the edges are the lines joining the circles. A vertex can represent any entity that you want to model; for example, a vertex might represent a
geographical location, and the edges might represent routes between those locations. Vertices that are joined are said to have an adjacency with each other.
So how does all this relate to the real world?
Imagine that the graph in Figure 1 represents geographical locations or nodes in a network. Further, let’s assign each edge a cost—the higher the cost the more expensive the route.
I’ve added some costs (or weights) in Figure 2.
Now, to get the best route from one node to another, you add up the weights of the edges traversed. The journey with the lowest overall weight is the cheapest; to get from 0 to 4 via vertices 5 and 3
incurs a cost of 3 + 5 + 1 or 9.
To get from 0 to 4 via vertices 1 and 6 incurs a cost of 2 + 4 + 6, or 12. So we can say that to get from 0 to 4, the cheapest route is 0-5-3-4.
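A tiny sketch of that bookkeeping (mine, written in Python for brevity; the article itself moves on to Java): store the edge weights quoted above and sum them along each route.

# Edge weights taken from the costs quoted in the text; other edges omitted.
weights = {(0, 5): 3, (5, 3): 5, (3, 4): 1,
           (0, 1): 2, (1, 6): 4, (6, 4): 6}

def route_cost(path):
    # Sum the weights of the edges traversed along a path of vertices.
    return sum(weights[(a, b)] for a, b in zip(path, path[1:]))

print(route_cost([0, 5, 3, 4]))   # 9  -> the cheaper way from 0 to 4
print(route_cost([0, 1, 6, 4]))   # 12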
Clearly, this is a very small example, and you can imagine the problems faced using graphs to represent very large data sets: traversal becomes expensive; storage requirements shoot up, and so on.
But, you get the idea.
Once you start to understand these simple concepts you’re beginning to get the basics of graphs. Bear in mind that there is an enormous amount of theory behind this area, but it is possible to gain
an overview.
Moving up the value chain is all about being able to tackle complex problems and subjects (see references [3] and [4] for more information).
Let’s now look at some simple Java to help us to program some concrete examples.
|
{"url":"http://www.informit.com/articles/article.aspx?p=1155868&seqNum=2","timestamp":"2014-04-20T11:33:57Z","content_type":null,"content_length":"30188","record_id":"<urn:uuid:1776152c-eb1a-407b-a900-52e944b52fcc>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Binning a continuous independent variable for flexible nonparametric models [in Stata]
Sometimes we want to fit a flexible statistical model to allow for non-linearities (or to test whether an observed relationship is actually linear). It's easy to run a model containing a high-degree polynomial (or something similar), but these can become complicated to interpret if the model contains many controls, such as location-specific fixed effects. Fully non-parametric models can be nice, but they require partialling out the data, and standard errors can become awkward if the sample is large or something sophisticated (like accounting for spatial correlation) is required.
An alternative that is easy to interpret, and handles large samples and complex standard errors well, is to convert the independent variable into discrete bins and to regress the outcome variable on dummy variables that represent each bin.
For example, in a paper with Jesse we take the typhoon exposure of Filipino households (a continuous variable) and make dummy variables for each 5 m/s bin of exposure. So there is a dummy variable that is zero for all households except for those households whose exposure was between 10 and 15 m/s, and there is a different dummy for exposure between 15 and 20 m/s, etc. When we regress our outcomes on all these dummy variables (and controls) at the same time, we recover their respective coefficients -- which together describe the nonlinear response of the outcome. In this case, the response turned out to be basically linear.
This approach coarsens the data somewhat, so there is some efficiency loss and we should be wary of Type 2 error if we compare bins to one another. But as an approach to determine the functional form of a model, this is a great approach so long as you have enough data.
I found myself rewriting Stata code to bin variables like this in many different contexts, so I wrote bin_parameter to do it for me quickly. Running these models can now be done in two lines of code (one of which is the regression command).
bin_parameter allows you to specify a bin width, a top bin, a bottom bin and a dropped bin (for your comparison group). It spits out a bunch of dummy variables that represent all the bins which cover the range of the specified variable. It also has options for naming the dummy variables so you can use the wildcard notation in regression commands. Here's a short example of how it can be used:
set obs 1000
gen x = 10*runiform()-5
gen y = x^2
bin_parameter x, s(1) t(4) b(-4) drop(-1.6) pref(x_dummy)
reg y x_dummy*
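For readers who do not use Stata, here is a rough numpy sketch of the same construction (my own, not part of the bin_parameter package): fixed-width dummies with catch-all top and bottom bins, and one bin dropped as the comparison group.

import numpy as np

rng = np.random.default_rng(0)
x = 10 * rng.random(1000) - 5
y = x ** 2

edges = np.arange(-4, 5, 1)              # interior bin edges, width 1
bins = np.digitize(x, edges)             # 0 = below -4, len(edges) = 4 and above
dummies = np.column_stack([(bins == b).astype(float)
                           for b in range(len(edges) + 1)])
drop = np.digitize(-1.6, edges)          # the bin containing -1.6 is omitted
X = np.column_stack([np.ones(len(y)), np.delete(dummies, drop, axis=1)])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS of y on the bin dummies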
Help file below the fold.
/* ====================================================
S. HSIANG SHSIANG@PRINCETON.EDU 9/2012
bin_parameter X [if], Size(integer) Top_bin_lower_bound(integer) Bottom_bin_upper_bound(integer) DROPped_bin(real) [PREF(string) NONAME NODROP]
BIN_PARAMETER takes variable X and generates a sequence of dummy variables for binned values of X. The edges of the bins are determined by supplied arguments.
The new dummy variables have a common prefix that may contain the variable name as well as the range of each bin. The prefix and inclusion of the variable name are options that can be changed.
If the edges of the bin land on values of X that are negative, the new dummy vars are labeled with an "M" instead of a minus sign, since the minus sign is not allowed in variable names.
A specified bin is dropped as the comparison group, unless the option NODROP is specified.
Required arguments: Size - width of bins (default = 1) Top_bin_lower_bound - lower cutoff for maximum bin, all values above this number are binned Bottom_bin_upper_bound - top cutoff for minimum bin,
all values below this number are binned
DROPped_bin - value of X that denotes which bin is dropped. The bin that contains this value will be dropped. Example: if DROP(1) is specified and there is a bin for values of 0 < x < 3, than this
bin will be dropped.
PREF() - a string that changes the prefix on the variable names for the generated dummy vars (default prefix is "_bin")
NONAME - if specified, the new dummy vars will not have the variable name for X used after the prefix in their names NODROP - prevents any bin from being dropped. If NODROP is specified, then no bin
will be dropped and all coefficients in later estimates will represent comparisons with zero, rather than the dropped bin. In these cases, DROP() must still be specified with a real argument, however
the value of the argument is irrelevant.
The functions PLOT_RESPONSE and PARM_BIN_CENTER are designed to be used along with BIN_PARAMETER, following regression estimations.
Hsiang, Lobell, Roberts and Schlenker, 2012: "Climate and the Location of Crops"
bin_parameter X, size(2) t(20) b(4) drop(5) pref(_dummy_var)
|
{"url":"http://www.fight-entropy.com/2013/01/binning-continuous-independent-variable.html","timestamp":"2014-04-17T07:17:29Z","content_type":null,"content_length":"150501","record_id":"<urn:uuid:a87533e4-e390-485e-9eea-7c5df8d577d0>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
|
System of Equations Word Problems
What People Are Saying...
"I'd like to start off by relaying my sincerest gratitude for your dedication in teaching algebra. Your methodology is by far the simplest to follow, primarily because you take the time to include
the small steps in between that most other teachers leave out.
It helps to know why you are doing something. I am 45 and heading to college to get my BS in Business. I need to brush up, hence the visit to your site. Great Job!"
Jimmy - United States
"I stumbled onto your site after I found out that I needed to use some fundamental algebra for an assignment. Turns out I had forgotten some things and your great site helped me remember them like
"that" (snap of fingers). The organization of the site let me find exactly what I was looking for so easily. Kudos to you for maintaining such a great resource for students of all ages!"
Tom - United States
"I just wanted to write and basically thank you for making such a wonderful website! I'm 20 years old and about to take a basic placement test for college. I wanted to brush up on my Algebra skills
and I stumbled upon your site. I'm amazed at how simple you make it and how fast I'm remembering Algebra! I don't remember getting most of the answers right when I had an actual teacher in front of
me teaching this. Thanks a lot!"
Elizabeth - United States
"I am a pensioner living in South Africa. I stumbled on your website, the best thing that could ever happen to me! Your course in Algebra has helped me a lot to better understand the different
concepts. Thank you very much for sharing your skills for teaching math to even people like me. Please do not stop, as I am sure that your teachings have helped thousands of people like me all over
the world."
Noel - South Africa
This is an amazing program. In one weekend I used it to teach my Grade 9 daughter most of the introductory topics in Linear Relations. I took her up to Rate of Change and now she can do her homework
by herself.
Reg - United States
|
{"url":"http://www.algebra-class.com/system-of-equations-word-problems.html","timestamp":"2014-04-20T13:19:34Z","content_type":null,"content_length":"34887","record_id":"<urn:uuid:f906e436-cc19-41ab-a37d-ba53c2cf6149>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by Alianne
Total # Posts: 11
Trigonometry (Law of Sine)
Fire towers A and B are located 10 miles apart. They use the direction of the other tower as 0°. Rangers at fire tower A spots a fire at 42°, and rangers at fire tower B spot the same fire at 64°.
How far from tower A is the fire to the nearest tenth of a mile? Wit...
Trigonometry (Law of Sine)
Triangulation can be used to find the location of an object by measuring the angles to the object from two points at the end of a baseline. Two lookouts 20 miles apart on the coast spot a ship at
sea. Using the figure below find the distance, d, the ship is from shore to the n...
Trigonometry (Law of Sine)
Triangulation can be used to find the location of an object by measuring the angles to the object from two points at the end of a baseline. Two lookouts 20 miles apart on the coast spot a ship at
sea. Using the figure below find the distance, d, the ship is from shore to the n...
Trigonometry (Law of Sine)
Fire towers A and B are located 10 miles apart. They use the direction of the other tower as 0°. Rangers at fire tower A spots a fire at 42°, and rangers at fire tower B spot the same fire at 64°.
How far from tower A is the fire to the nearest tenth of a mile?
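No worked answer appears in these posts, so here is a quick law-of-sines check of the fire-tower problem (my own sketch): the angle at the fire is 180 - 42 - 64 = 74 degrees, and the distance from tower A is 10 * sin(64°) / sin(74°) ≈ 9.35 miles, or 9.4 to the nearest tenth.

from math import sin, radians

A, B = 42.0, 64.0        # angles sighted at towers A and B, measured from the line joining them
AB = 10.0                # distance between the towers (miles)
F = 180.0 - A - B        # angle at the fire

# Law of sines: the side opposite angle B over sin(B) equals AB over sin(F).
dist_from_A = AB * sin(radians(B)) / sin(radians(F))
print(round(dist_from_A, 1))   # 9.4 miles from tower A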
Just divide it on what do you want to removed and left what formula do you want to get. if you want to get the x then it must be 7x/7=t/7 then you must cancel all the same in 7x/7 so cancel '7'
because 7 divided by 7 is equal to 1 so it is just the same. After that all...
Trigonometry (Math)
Thank you! :D Please don't be tired of helping others! :D God Bless you. :D
Trigonometry (Math)
From the top of a building 70 ft. high, two objects in a straight line from the building are observed at angles of depression of 28° and 41°6' respectively. Find the distance between the two objects.
-Complete Solution please. :'(
Trigonometry (Math)
woah! it doesn't look the way i made it. :( it's just look like letter 'N'. if it's fix.
Trigonometry (Math)
l i\ l i \ l i \ l i \ l i____\l
Trigonometry (Math)
Thank you!! I trust you. :D I hope this is right. :D *This is my illustration l i\ l i \ l i \ l i \ l i____\l -the 'l' is the 1st building. -the 'i' is the 2nd building. -the "backslash(\)" is the
ladder. -the "underscore(_)" is the street....
Trigonometry (Math)
A ladder 42 feet long is place so that it will reach a window 30 feet high (first building) on one side of a street; if it is turned over, its foot being held in position, it will reach a window 2o5
feet high (second building) on the other side of the street. How wide is the s...
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Alianne","timestamp":"2014-04-19T02:15:17Z","content_type":null,"content_length":"9053","record_id":"<urn:uuid:7494732d-553e-48d9-8197-1824b8e8eb3b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
|
If a divides b, then a's Fib number divides b's Fib number
November 11th 2013, 11:59 PM #1
Nov 2013
If a divides b, then a's Fib number divides b's Fib number
The hint is to use F[a+b] = F[a]F[b+1] + F[a-1]F[b].
What i keep trying:
a divides b => there is a q in Z such that aq = b.
so aq + a = b + a, and then i use the hint.
But i never get anywhere.
I want to use that if a divides b and a divides c then a divides mb + nc for m,n in Z, but i can't figure out how/ I tried to let c = a, but I got nowhere.
Re: If a divides b, then a's Fib number divides b's Fib number
Re: If a divides b, then a's Fib number divides b's Fib number
ok so, to use induction on q:
assume a divides b
by def of division, there is a q in Z such that aq = b
now if q = 0, then F[b ]= 0, so b = 1 and if b = 1, a must equal 1
and if q = 1, then a = b and of course F[a] = F[b]
assume, as an inductive hypothesis, that since there is a q such that aq = b there is an r such that F[a]r = F[b]
now i just need to show that there is such an r for a(q+1) = b?
Re: If a divides b, then a's Fib number divides b's Fib number
Hmm, don't you think it is strange that aq = b and q = 0 imply that b = 1?
If q = 0, then b = 0 and every number a divides 0. But hopefully F[0] = 0, and so every number divides F[0] as well.
Re: If a divides b, then a's Fib number divides b's Fib number
Edit: Post deleted
Last edited by SlipEternal; November 15th 2013 at 08:46 AM.
Re: If a divides b, then a's Fib number divides b's Fib number
i think i have made a bit of an error in saying 'now if q = 0, then Fb = 0, so b = 1 and if b = 1, a must equal 1'
its more like if r = 0, then Fb = 0, so b =1, since the first fib number is 0. and of course Fa can be whatever, since anything times 0 is 0. this case is a bit werid because if b = 1, then a = q
= 1. generally, q and r are not the same; i guess that is for q > 1. a pattern is that as q increases by one, b increases by a factor of a.
but if q is 0, then b is 0, like you said, and a can be anything. actually, tho, this case shouldn't arise right? since there is no zeroth fib number.
i want to know about this hint. when i try to do the inductive step:
aq = b
aq + a = b + a
a(q+1) = a + b
then F[a(q+1)] = F[b]
i expand then right side, F[a(q+1)] = F[a]F[b+1] + F[a-1]F[b]
then i play aroudn and look for a way to apply the inductive hypothesis.
is that the right way, or are you clueing me in on a way without the hint?
i noticed that if q is 2, then r is the sum of the two previous r's, like 2*2=4, 3*2=6, 4*2=8 and on the fib side of things 1*3=3, 2*4=8, 3*7=21 where seven is r. that struck me but i quickly
lost interest in looking for a pattern for higher q's in the sort of incremental increase of a and b approach.
Re: If a divides b, then a's Fib number divides b's Fib number
Let's agree on the definition of Fibonacci numbers. The indices (e.g., a, b) may start from 0 or from 1, and the first Fibonacci number may be 0 or 1, so there are at least four variants. From
what you wrote, the definition seems to be
F[1] = 0; F[2] = 1; F[n+2] = F[n] + F[n+1] for n ≥ 1.
Is this correct?
There is no reason to consider the case r = 0 because induction is on q.
There is nothing strange about a = b and q = 1. Induction has to start somewhere. If the first index is 1, then induction on q starts from q = 1, and the base case is trivial.
Yes. The inductive hypothesis is that F[a] divides F[b], so the conclusion is now obvious. The only technical remark is that F[a-1] does not exist if a = 1 according to the definition above, so
the case a = 1 has to be treated separately.
Re: If a divides b, then a's Fib number divides b's Fib number
After a brief search, i conclude that it isn't against the forum rules to post proofs. still, since much of the work has already been posted, i will not go into every detail.
So i am trying to prove that a divides b implies F[a] divides F[b]
i assume that it does, to show that that implication holds larger q.
aq = b implies there is an r such that F[a]r = F[b]
comment: a previous result is that if a > 1, F[a+b] = F[a]F[b+1] + F[a-1]F[b] and we assume the first Fibonacci number, F[1], is 1, contrary to other of my posts. (i checked and this is the
definition of the book i am using). if q is 0, then a*0=b implies b is zero, but there is no zeroth fib number according to my book, so this might be a problem. if so, i want to say that the F[0]
=0, so that r = 0.
but if q is 1, then a = b and F[a]r = F[b] where r = 1 and a,b are >= 1
*** now we take a,b > 1 such that a(q+1) = b ***
then F[a(q+1)] = F[b]
or F[aq + a], which is F[b + a]
so, by the equality mentioned in the comment, F[b]F[a+1] + F[b-1]F[a] = F[b]
but I am assuming that F[a]r = F[b], so F[a]rF[a+1] + F[b-1]F[a] = F[b]
or F[a](rF[a+1] + F[b-1]) = F[b]
and there we have it because that is an expression of F[b] as a product of F[a] and an integer.
two issues:
what about the base case, is it OK to take the base case as one? there are are no negative Fib numbers, and no Fib numbers of negative numbers, so i think q must b in Z^+ or we must state the
theorem with the Fibonacci numbers of the absolute values of a and b.
i dont really understand is the starred step; is it OK?
thanks for your patience on this.
Last edited by jtoem; November 18th 2013 at 10:50 AM. Reason: typo
Re: If a divides b, then a's Fib number divides b's Fib number
This is a good base case.
No need to require a > 1; we may allow a = 1.
Here and below, you are using b in two senses: as aq and as a(q+1).
Re: If a divides b, then a's Fib number divides b's Fib number
You are right, I used b = aq and b = a(q+1). That is not OK.
To fix it, can't I just introduce a new number, c?
assume: aq = b implies there is an r such that F[a](r) = F[b]
base case: if q is 1, then a = b and Far = Fb where r = 1 and a,b are >= 1
then, a(q+1) = c for c>1 and a>=1
F[a(q+1)] = F[c]
F[aq + a] = F[c]
F[aq]F[a+1] + F[aq-1]F[a] = F[c]
since b = aq, F[b] = F[aq] and F[b] = F[a](r)
F[a](r)F[a+1] + F[aq-1]F[a] = F[c]
F[a](rF[a+1] + F[aq-1]) = F[c]
The proposition is true because if you assume it is true for some q, it is true for the next q, even if b changes.
Re: If a divides b, then a's Fib number divides b's Fib number
The proof is OK now, but there are a couple of smaller remarks.
The first line in the quote you need to prove, not assume. You do assume it as the induction hypothesis, but this happens during the proof of the induction step, after the base case.
It is still not clear why F[aq-1] makes sense, i.e., why aq > 1. You assumed that a(q+1) > 1, but if a = 1 and you are making the induction step from q = 1 to q = 2, then aq = 1. Fortunately, F
[a] = 1 and so F[a] | F[b] for any b.
Here is how I would write it.
Prove: If a | b, then F[a] | F[b].
If a = 1, then F[a] = 1 and the statement is obvious. For the rest of the proof, assume that a > 1.
Let P(q) be the following statement: F[a] | F[aq]. We prove P(q) by induction on q.
Base case: q = 1. Then P(q) is "Fa | Fa", which is obviously true.
Induction step. Assume the induction hypothesis P(q), i.e., F[a] | F[aq]. Need to prove: P(q+1), i.e., F[a] | F[a(q+1)]. But F[a(q+1)] = F[aq+a] = F[aq]F[a+1] + F[aq-1]F[a]. Here aq - 1 >= 1
because a > 1 and q >= 1, so F[aq-1] is defined. Since F[a] | F[aq] by the induction hypothesis, we have F[a] | F[aq]F[a+1] + F[aq-1]F[a], which concludes the induction step and the proof.
Note that I used only variables a and q instead of a, b, c, q and r, which makes it easier to grasp the proof.
Re: If a divides b, then a's Fib number divides b's Fib number
I think you are right that it is better to make clear the statement to be proved and the inductive hypothesis. Also, my proof does not make clear that the fib number of aq-1 exists. It is
remedied by saying that a is strictly greater than one after the base case, you did.
In your proof, mentioning these points helps. I noticed that I used the definition of division while you used the fact that if a divides b and a divides c then a divides mb + nc for m,n in Z,
which in my book, is called something like the linear combination lemma. It is much nicer with just the two variables. Thanks for your guidance.
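As a sanity check on the statement itself (a few lines of mine, not from the thread), brute force confirms that F[a] divides F[b] whenever a divides b, using the book's indexing F[1] = F[2] = 1:

# Check that a | b implies F[a] | F[b] for all a, b up to N.
N = 60
F = [0, 1, 1]                     # F[0] is a placeholder; F[1] = F[2] = 1
for n in range(3, N + 1):
    F.append(F[-1] + F[-2])

for a in range(1, N + 1):
    for b in range(a, N + 1, a):  # every b that a divides
        assert F[b] % F[a] == 0
print("verified for all a | b up to", N)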
{"url":"http://mathhelpforum.com/number-theory/224159-if-divides-b-then-s-fib-number-divides-b-s-fib-number.html","timestamp":"2014-04-17T20:58:27Z","content_type":null,"content_length":"79492","record_id":"<urn:uuid:92ed6fd3-1936-41b8-9c47-9122707e590a>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] "Hidden" contradictions
Prof. Ranjit Nair director at cpfs.res.in
Sat Aug 17 06:01:16 EDT 2013
Let's not forget that Hilbert's 6th problem was the mathematical treatment of the axioms of physics. Also that the question of completeness of physics in 20 years has been the subject of bets by Stephen Hawking. He conceded his first such bet in his Dirac lecture 2002, using Gödel's theorem to argue that if mathematics is 'incomplete', physics has to be so too. The same bet offered in Delhi in 2001 was taken up by me and, since the earlier concession was generic in nature, it follows that the later bet stands conceded too. However it appears that Hawking has reneged, after recovering his optimism about the possible completeness of physics, in work with Hertog, Hartle and others. It is true that physics often employs theories that are literally false, such as recovering the classical limit by setting Planck's constant equal to zero (the correspondence principle of Niels Bohr), when in fact the constant is what it is, a non-zero quantity. The use of counterfactual approximations is evident in e.g. the "effective field theory" approach to quantum field theory. No further elaboration is required.

Professor Ranjit Nair
Centre for Philosophy & Foundations of Science
P.B. No. 4017, Malaviya Nagar P.O.
New Delhi
Tel. +91 11 65951738 / 46170795
Cell +91 9810332846 / 8826585999
director at cpfs.res.in
director.cpfs at gmail.com
president at wias.in
From: "Timothy Y. Chow"
To: "Foundations of Mathematics"
Date: Thu, August 15, 2013 8:05 am
Subject: Re: [FOM] "Hidden" contradictions

> On Wed, 14 Aug 2013, Mark Steiner wrote:
>> I appreciate this response. However, my physicist friends tell me that
>> the theory known as QED is thought to be inconsistent, but people use it
>> anyway, with great success in predictions. I think what this means is
>> the claim that there is no way to formalize QED in a consistent
>> axiomatic system. If this is right, then there is a sense in which
>> formal systems do play some kind of role in physics.
>
> The alleged inconsistency of QED is a complicated topic that has been
> discussed in great detail before on FOM and I don't think we want to
> rehash it all here, but I'll just say that even if we grant the
> (debatable) propositions that (1) "QED is thought to be inconsistent" and
> (2) this means that "there is no way to formalize QED in a consistent
> axiomatic system", then really all this shows is the exact opposite:
> namely, that formal systems *do not* play an important role in physics.
> If they did, then the physicists would be compelled to abandon QED. The
> only role formal systems are playing here is in framing philosophical
> discussions *about* physics.
>
> FOM mailing list
> FOM at cs.nyu.edu
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2013-August/017488.html","timestamp":"2014-04-20T01:12:19Z","content_type":null,"content_length":"7574","record_id":"<urn:uuid:3931d6b6-7114-405e-b563-2236d4f87686>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
|
edHelper.com logic puzzle Logic Worksheet
1. One box has a length of 42 cm and a height of 55 cm 1 mm.
2. One box has a width of 8 cm 4 mm and a height of 94 cm 7 mm.
3. The length of the white box is 0.234 meters.
4. The yellow box has the smallest width.
5. The white box has the largest height.
6. If the length of purple box was increased by 2 cm, the volume of purple box would increase by 553,140 cubic millimeters.
7. The volume of the yellow box is 9,548,910 cubic millimeters.
|
{"url":"http://www.edhelper.com/logic/Logic63.htm","timestamp":"2014-04-19T20:03:40Z","content_type":null,"content_length":"4223","record_id":"<urn:uuid:93b5d402-481e-47fa-aedd-0a5d1d8a692f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: More Results on r-Inflated Graphs: Arboricity, Thickness,
Chromatic Number, and Fractional Chromatic Number
Michael O. Albertson
L. Seelye Clark Professor of Mathematics
Department of Mathematics and Statistics
Smith College, Northampton MA 01063
Debra L. Boutin
Department of Mathematics
Hamilton College, Clinton, NY 13323
Ellen Gethner
Department of Computer Science
University of Colorado Denver, Denver, CO 80217
September 8, 2010
1 Introduction
In 1890, P. J. Heawood published his famous article Map-colour theorem in the Quarterly Journal of Mathematics [Hea90] that illustrated the flaw in A.B. Kempe's "proof" of the Four Colour Theorem [Kem79]. Heawood's main intention was to investigate generalizations of the Four Colour Problem "of which strangely the rigorous proof is much
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/624/1149268.html","timestamp":"2014-04-20T12:32:27Z","content_type":null,"content_length":"8048","record_id":"<urn:uuid:62e2599c-6156-4d92-b952-80dba90d52ad>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
|
• Includes pentagon, hexagon, octagon, trapezoid cone,and right triangle. Common Core: Geometry K.3.1, 1.G.1 2.G.1, 3.G.1
• Describes (with pictures) two dimensional figures, and then provides worksheets that can be used as quizzes or review pages.
• Graphic chart to help students study volumes and areas of geometric shapes. Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3
• Includes circle, oval, square, rectangle, triangle, star and diamond Common Core: Geometry K.3.1, 1.G.1 2.G.1, 3.G.1
• Includes circle, oval, square, rectangle, triangle, star, and diamond. Common Core: Geometry K.3.1, 1.G.1 2.G.1, 3.G.1
• Spin the wheel and advance to the corresponding shape. This rocket-themed game is a fun way to master basic shapes.
• Spin the wheel and advance to the corresponding shape. This colorful rocket-themed game is a fun way to master basic shapes.
• Includes pentagon, hexagon, octagon, trapezoid, cone, and right triangle. Common Core: Geometry K.3.1, 1.G.1 2.G.1, 3.G.1
This great mini-unit combines graphing and geometry to review and build these important math skills.
"Put a hexagon on the right side of the large circle..."
A simple game to review the names of various polygons. Print several copies and have students race in pairs.
Curved shapes: circle, oval, crescent; one per page, each with a picture and a definition.
Cut out the two dimensional shape and fold it to make a three dimensional cylinder.
Describes rays and angles, and then provides worksheets that can be used as quizzes or review pages.
Describes (with pictures) two dimensional figures, from line segments to angles to polygons, and then provides worksheets that can be used as quizzes or review pages.
Triangle shapes: equilateral, isoceles, scalene, right, obtuse, acute; one per page, each with a picture and a definition.
Describes lines (line segments, rays, points, and more) and then provides worksheets that can be used as quizzes or review pages.
Describes line relationships (line segments, and rays; intersecting, parallel, perpendicular) and then provides worksheets that can be used as quizzes or review pages. Common Core: Geometry:
4.G.A.1, 5.GA.1
Describes (with pictures) congruent figures, and then provides several worksheets that can be used as quizzes or review pages.
Answer questions about area, perimeter, line of symmetry, triangles and polygons while making a simple origami. Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3
Cut out the shapes and arrange them into this shape of a Christmas ornament.
Cut out the shapes and arrange them into this shape of a school bus.
Spin the wheel and advance to the corresponding shape. This castle-themed game is a fun way to master basic shapes.
Spin the wheel and advance to the corresponding shape. This colorful castle-themed game is a fun way to master basic shapes.
Includes line, line segment, points, end points, ray, intersecting, parallel and perpendicular line posters. Common Core: Geometry: 4.G.A.1, 5.GA.1
Cut out the shapes and arrange them into this shape of a Christmas tree.
Five pages of colorful shapes -square, rectangle, circle, and triangle- each in five different colors, unlabeled.
Five pages of colorful shapes -trapezoid, oval, pentagon, hexagon- each in five different colors, unlabeled.
Circle the item that does not belong in each group (circle, ball, orange, soap).
Quadrilateral shapes: square, rectangle, rhombus, parallelogram; one per page, each with a picture and a definition.
Assemble the shapes correctly to make a buckle hat.
Eight colorful math posters that help teach the concepts of area, perimeter and dimensional figures. Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3
Circle the item that does not belong in each group (square, orange, soap, cards).
Assemble the shapes correctly to make a buckle hat.
Cut out the two dimensional shape and fold it to make a three dimensional rectangular prism.
Cut out the two dimensional shape and fold it to make a three dimensional triangular prism.
Students practice basic shapes with five pages of activities, including tracing, coloring, and cut and paste.
Cut out the shapes and arrange them into this shape of a Christmas ornament.
Cut out the two dimensional shape and fold it to make a three dimensional "pyramid" with a triangular base (a tetrahedron).
Cut out the shapes and arrange them into this shape of a hot air balloon.
Cut out the two dimensional shape and fold it to make a three dimensional pyramid with a square base.
Polygon shapes: pentagon, hexagon, octagon; one per page, each with a picture and a definition.
An explanation of Scale Factors with examples, plus a page of transformation problems and a prompt for creating scale model. Common Core: 6.RP.1, 6.G.A
Turned Shapes Game: A game to help students learn shapes, regardless of position or orientation in space.
Match the pictures of the circle, triangle, square, rectangle and hexagon. Common Core Math: K.G.2
Match the pictures of the sphere, cone, cube and cylinder. Common Core Math: K.G.2
six pages with answer sheet Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3
|
{"url":"http://www.abcteach.com/directory/prek-early-childhood-mathematics-geometry-3001-2-0","timestamp":"2014-04-18T18:36:46Z","content_type":null,"content_length":"321327","record_id":"<urn:uuid:83b3a38e-e4b0-464e-b85a-c6d82ca005f6>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
August 12th 2010, 02:36 AM #1
Junior Member
Sep 2009
Johannesburg, South Africa
Let C be the binary code of length 12 and dimension 4 of the form (abcd abcd abcd). Thus every message of length 4 is repeated 3 times.
Then C has minimum distance 3.
Let D be the dual code.
$D = \{x \in \mathbb{F}^n_q : \forall c \in C <x,c> = 0\}$ with $<x,c> = \sum_i x_ic_i$
I want to show that the minimum distance of D is 2.
And determine the number of codewords with weight 2 in D.
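The thread shows no replies, but both claims are easy to check by brute force (a sketch of mine, not from the post): enumerate all 2^12 binary vectors, keep those orthogonal to the four generators of C, and tally the weights. The minimum nonzero weight in D comes out to 2, and there are exactly 12 codewords of weight 2, one for each pair of positions that carry the same message symbol.

from itertools import product

# Generators of C: message bit i sits in positions i, i+4 and i+8.
gens = [[1 if j % 4 == i else 0 for j in range(12)] for i in range(4)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y)) % 2

dual = [x for x in product([0, 1], repeat=12)
        if all(dot(x, g) == 0 for g in gens)]

weights = [sum(x) for x in dual if any(x)]
print(min(weights))                    # 2  -> minimum distance of D
print(sum(w == 2 for w in weights))    # 12 -> weight-2 codewords in D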
|
{"url":"http://mathhelpforum.com/discrete-math/153485-codes.html","timestamp":"2014-04-20T05:05:36Z","content_type":null,"content_length":"28770","record_id":"<urn:uuid:26e66314-f1a3-4bb0-8b4c-b7859d0fd101>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
|
timing accuracy?
07-15-2003 #1
Registered User
Join Date
Jul 2003
timing accuracy?
This is my first post and any help would be appreciated. I'm writing a program that reads a file of integers of unknown length, then allows the user to choose 1 of 5 kinds of sorts. At the end,
the time it took to sort the list will be displayed. The problem has been finding something to time it in ms. What I have below is the closest I've gotten but at times still shows 0 seconds,
other times .06, or .11 (all my testing so far has been with a selection sort and bubble sort of 200 integers).
void SelectionSort( vector <int>& listOfData,
double& duration)
int temp;
int counter;
int index;
int minIndex;
clock_t start;
clock_t finish;
start = clock(); // grabs beginning count
for (counter = 0; counter < listOfData.size(); counter++)
minIndex = counter;
for (index = counter + 1; index < listOfData.size(); index++)
if (listOfData[index] < listOfData[minIndex])
minIndex = index;
temp = listOfData[counter];
listOfData[counter] = listOfData[minIndex];
listOfData[minIndex] = temp;
finish = clock(); // grabs ending count
duration = double (finish - start) / CLOCKS_PER_SEC;
I hope I got the code tags right. If there's any other methods of timing anyone knows that are more consistent and accurate I would appreciate it. Thanks!
Also on a somewhat unrelated topic. With vectors, is there an upper range limit of elements that a vector can hold? Is it necessary to do checks for out of bounds? And if so, how might I do that
without knowing the number of elements I'll be adding? I'll appreciate any help. Thanks!
Last edited by mfknitz; 07-15-2003 at 01:34 PM.
Thanks, I'll give those a shot. Appreciate the info on vectors.
With your guess of the initial size, use the reserve(size_type n) member function (where n is the number of objects to hold). Then, continue to use push_back(). The vector will not reallocate its
array until you've filled up the n spots available, and you call push_back() once more.
The word rap as it applies to music is the result of a peculiar phonological rule which has stripped the word of its initial voiceless velar stop.
What I've done so far is created a zero length vector and called push_back() as I read from the file. Is it better to create a vector of say 2000 (my estimate) up front, even though there could
be significantly more or less elements than that? I guess creating the spaces up front would be more efficient, but wouldn't it waste memory if they aren't all used? Thanks
Whats the default number of elements an vector holds if you don't use reserve() but immediately do push_back, before it allocates another block of memory?
It would be a waste of memory, but it all depends on which is more important: time efficiency, or memory efficiency.
If time is an issue, allocate a lot, and use however much you need. If memory is an issue, allocate on demand only.
There are also some strategies for allocating memory, such as the following: allocate memory based on an initial guess - once you run out, allocate twice as much memory, and repeat as necessary.
This way, no more than half of the memory is wasted (assuming that at least one reallocation did occur).
Here is some (untested) code which illustrates what I am talking about:
const int elements_to_add = 700;
int elements added = 0;
void foo()
vector<int> array;
while(elements_added < elements_to_add)
array.reserve(2 * array.capacity());
void add_elements(vector<int>& vec)
while(vec.size() < vec.capacity() && elements_added < elements_to_add)
As far as I can tell it's just an empty container with zero length and memory is allocated every time push_back() is called (not very efficient as I see now).
Zach L.--
That makes a lot of sense. I'm going try something like you showed and see how it works out. It looks like resize() would be helpful.
This is my first try at dynamic allocation so I appreciate all your guys' help.
Glad to be of help.
Just some more info on resizing vectors. resize() and reserve() do very different (externally), yet very similar (internally) things (yes, I know that makes no sense... hang on a second). reserve
() sets the capacity of the vector, but you don't want to access elements until its size (number of indexable elements) reaches that particular index. The size() returns the size (essentially,
the number of indexable elements), while capacity() returns how many can 'fit' into the currently allocated space. Granted, you can write to that memory using the subscript operator, but your vector
will not know about it (at() would fail, and end() will point to the wrong location).
For example:
vector<int> vec;
vec.reserve(10);          // capacity is now 10; size is still 0
cout << vec.size();       // Print "0".
cout << vec.capacity();   // Print "10".
vec.push_back(10);        // size() == 1, capacity() == 10
I'll bet you're running Windows.
With Windows, clock() does NOT get updated every millisecond. It has something to do with the interrupt clock rate. I don't recall what the interrupt rate is, but once every ~50ms might be right.
So, clock() still returns milliseconds, but the resolution is about 50ms.
There is a Windows function that works. I think it's GetTickCount().
Re: timing accuracy?
Originally posted by mfknitz
What I have below is the closest I've gotten but at times still shows 0 seconds, other times .06, or .11
That's because you are using the default system timer that ticks approximately 18.2 times per second (about every 55 msec). In fact, it is the original 4.772720 MHz clock of the original IBM PC
divided by 2^18. You should use the QueryPerformanceCounter function, instead, if you are programming under Windows.
Last edited by JasonD; 07-15-2003 at 05:26 PM.
About vector memory:
The exact allocation strategy is implementation-defined, but the most common is this:
initial size: 0
first addition: allocate 1
whenever not enough memory: allocate twice as much as there already is
I know that SGItech STL and STLport use this. I'm quite sure the the MS and GNU STL use it too.
All the buzzt!
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
Thanks everyone for your great replies. The GetTickCount() worked but still wasn't accurate enough for really small lists. The QueryPerformanceCounter works great though.
I've allocated more memory initially in my vector now and it's working pretty well. I think it'll take me a little bit to learn all the ins and outs of dynamic memory but this is a pretty good
start. Thanks
There's also a
clock_t clock( void );
function in <time.h>
On my WinXP system, it has a resolution of 1/100th of a second, even though it returns values in milliseconds, as CLOCKS_PER_SEC on my machine is 1000.
I guess this is something you could use if your system doesn't have a High-Resolution Timer.
In a program I've written that talks to an external device on the serial port, the device requires millisecond accuracy to "wake up". I've implemented this as follows which works perfectly as the
resulting serial trace on an oscilloscope was measured and found to be at exactly 1ms intervals.
timeBeginPeriod (1); // request windows to be accurate to within 1ms instead of the normal 5ms. This function will return TIMERR_NOCANDO if the system does not support 1ms accuracy (works fine
under 2000 and XP, which are my main OS's).
timeGetTime(); // Returns the system time, in milliseconds. This is very efficient and low-overhead function. It returns a DWORD result, and so wraps around to 0 every 2^32 milliseconds, which is
about 49.71 days.
timeEndPeriod(1); // Call this function immediately after you are finished using timer services. Parameter is the value you used in timeBeginPeriod.
|
{"url":"http://cboard.cprogramming.com/cplusplus-programming/42055-timing-accuracy.html","timestamp":"2014-04-20T20:22:40Z","content_type":null,"content_length":"95767","record_id":"<urn:uuid:e3e7ce18-55d6-4a6c-a2a1-0cfd04b293aa>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
|
OFDM Uncovered Part 1: The Architecture | EE Times
Over the past several years, orthogonal frequency division multiplexing (OFDM) has received considerable attention from the general wireless community and in particular from the wireless LAN (WLAN)
standards groups. Groups such as IEEE802.11a and ETSI BRAN have selected OFDM as the best waveform for providing reliable high data rates for WLANs. This popularity is further highlighted by the
recent selection of OFDM by the IEEE 802.11g committee as the modulation for extending the data rates of the very successful IEEE 802.11b or Wi-Fi WLAN standard.
What makes OFDM such a popular choice? The primary reason is that OFDM is intrinsically able to handle the most common distortions found in the wireless environment without requiring complex receiver
algorithms. As it turns out, the wireless environment and, in particular, the WLAN environment presents a harsh channel for communications. Conventional modulation methods suffer from multipath in
both the frequency domain and the time domain. In the frequency domain, multipath causes groups of frequencies to be attenuated and shifted in phase relative to each other which severely distorts the
symbol. In the time domain, multipath basically smears adjacent symbols into each other. Many typical systems overcome these problems with expensive adaptive filters.
OFDM, on the other hand, uses groups of narrowband signals to pierce through this environment and employs a guard interval between symbols in order to counter the inherent time domain smearing. This
allows OFDM systems to use lower complexity receivers and still maintain robust performance. In short, OFDM is a popular choice because it delivers robust performance in multipath without the need
for complex receiver algorithms.
As with any waveform, OFDM has both advantages and disadvantages, but in many of the modern wireless applications, the disadvantages of OFDM can be overcome with careful design choices. Consequently,
OFDM is frequently the best fit when optimizing cost and performance for wireless environments like WLAN's where multipath is the primary impairment to reliable communications.
In Part 1 of this series, we'll describe OFDM and detail the characteristics that make it well suited for WLAN and other wireless communication systems. In part 2, which will appear next week, we'll
highlight some of the design issues required to implement OFDM like control of phase noise, peak-to-average ratio, and frequency offsets.
The OFDM Bundle
An OFDM signal is basically a bundle of narrowband carriers transmitted in parallel at different frequencies from the same source. In fact, this modulation scheme is often termed "multicarrier" as
opposed to conventional "single carrier" schemes.
Each individual carrier, commonly called a subcarrier, transmits information by modulating the phase and possible the amplitude of the subcarrier over the symbol duration. That is, each subcarrier
uses either phase-shift-keying (PSK) or quadrature-amplitude-modulation (QAM) to convey information just as conventional single carrier systems.
However, OFDM or multi-carrier systems use a large number of low symbol rate subcarriers. The spacing between these subcarriers is selected to be the inverse of the symbol duration so that each
subcarrier is orthogonal or non-interfering. This is the smallest frequency spacing that can be used without creating interference.
At first glance it might appear that OFDM systems must modulate and demodulate each subcarrier individually. Fortunately, the well-known Fast Fourier transform (FFT) provides designers with a highly
efficient method for modulating and demodulating these parallel subcarriers as a group rather than individually.
As shown in Figure 1a, an efficient OFDM implementation converts a serial symbol stream of PSK or QAM data into a size M parallel stream. These M streams are then modulated onto M subcarriers via the
use of a size N (M ≤ N) inverse FFT. The N outputs of the inverse FFT are then serialized to form a data stream that can then be modulated by a single carrier. Note that the N-point inverse FFT could
modulate up to N subcarriers. When M is less than N, the remaining N - M subcarriers are not in the output stream. Essentially, these have been modulated with an amplitude of zero. The IEEE802.11a
standard for example specifies that 52 (M = 52) out of 64 (N = 64) possible subcarriers are modulated by the transmitter.
Figure 1a: Block diagram of a simple OFDM transmitter.
Although it would seem that combining the inverse FFT outputs at the transmitter would create interference between subcarriers, the orthogonal spacing allows the receiver to perfectly separate out
each subcarrier. Figure 1b illustrates the process at the receiver. The received data is split into N parallel streams that are processed with a size N FFT. The size N FFT efficiently implements a
bank of filters, each matched to one of the N possible subcarriers. The FFT output is then serialized into a single stream of data for decoding. Note that when M is less than N (in other words, when
fewer than N subcarriers are used at the transmitter), the receiver only serializes the M subcarriers that carry data.
Figure 1b: Block diagram of a simple OFDM receiver.
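As a concrete illustration of the chain just described, here is a small numpy sketch of my own (not taken from the article; the 802.11a-style numbers N = 64 and M = 52 are used purely as an example, and the subcarrier mapping is simplified):

import numpy as np

N, M = 64, 52                          # FFT size and number of active subcarriers
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
symbols = qpsk[np.random.randint(0, 4, M)]     # M QPSK channel symbols

# Transmitter: place the M symbols on M of the N subcarriers, leave the rest at zero.
freq = np.zeros(N, dtype=complex)
freq[1:M + 1] = symbols
tx_time = np.fft.ifft(freq)                    # one OFDM symbol in the time domain

# Receiver: the FFT acts as a bank of filters matched to the subcarriers.
rx_freq = np.fft.fft(tx_time)
print(np.allclose(rx_freq[1:M + 1], symbols))  # True: the subcarriers stay orthogonal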
Multipath Challenges
In an OFDM-based WLAN architecture, as well as many other wireless systems, multipath distortion is a key challenge. This distortion occurs at a receiver when objects in the environment reflect a
part of the transmitted signal energy. Figure 2 illustrates one such multipath scenario from a WLAN environment.
Figure 2: Multipath reflections, such as those shown here, create ISI problems in OFDM receiver designs.
Multipath reflected signals arrive at the receiver with different amplitudes, different phases, and different time delays. Depending on the relative phase change between reflected paths, individual
frequency components will add constructively and destructively. Consequently, a filter representing the multipath channel shapes the frequency domain of the received signal. In other words, the
receiver may see some frequencies in the transmitted signal that are attenuated and others that have a relative gain.
In the time domain, the receiver sees multiple copies of the signal with different time delays. The time difference between two paths often means that different symbols will overlap or smear into
each other and create inter-symbol interference (ISI). Thus, designers building WLAN architectures must deal with distortion in the demodulator.
Recall that OFDM relies on multiple narrowband subcarriers. In multipath environments, the subcarriers located at frequencies attenuated by multipath will be received with lower signal strength. The
lower signal strength leads to an increased error rate for the bits transmitted on these weakened subcarriers.
Fortunately for most multipath environments, this only affects a small number of subcarriers and therefore only increases the error rate on a portion of the transmitted data stream. Furthermore, the
robustness of OFDM in multipath can be dramatically improved with interleaving and error correction coding. Let's look at error correction and interleaving in more detail.
Error Correction and Interleaving
Error correcting coding builds redundancy into the transmitted data stream. This redundancy allows bits that are in error or even missing to be corrected.
The simplest example would be to simply repeat the information bits. This is known as a repetition code and, while the repetition code is simple in structure, more sophisticated forms of redundancy
are typically used since they can achieve a higher level of error correction. For OFDM, error correction coding means that a portion of each information bit is carried on a number of subcarriers;
thus, if any of these subcarriers has been weakened, the information bit can still arrive intact.
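A toy sketch of the repetition-code idea (3x repetition with majority-vote decoding; the data and the single injected error are arbitrary, and real OFDM systems use much stronger codes, so this only illustrates the redundancy principle):

import numpy as np

# Each information bit is transmitted three times; the decoder takes a majority
# vote, so one corrupted copy (say, a copy carried on a weakened subcarrier) is corrected.
def encode(bits, r=3):
    return np.repeat(bits, r)

def decode(coded, r=3):
    return (coded.reshape(-1, r).sum(axis=1) > r // 2).astype(int)

data = np.array([1, 0, 1, 1])
coded = encode(data)
coded[4] ^= 1                                # corrupt one transmitted copy
assert np.array_equal(decode(coded), data)   # the original bits are still recovered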
Interleaving is the other mechanism used in OFDM systems to combat the increased error rate on the weakened subcarriers. Interleaving is a deterministic process that changes the order of transmitted
bits. For OFDM systems, this means that bits that were adjacent in time are transmitted on subcarriers that are spaced out in frequency. Thus errors generated on weakened subcarriers are spread out
in time, i.e. a few long bursts of errors are converted into many short bursts. Error correcting codes then correct the resulting short bursts of errors.
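One simple way to realize this is a block interleaver that writes bits row-wise and reads them column-wise, as sketched below; the 4 x 12 dimensions are arbitrary illustrative values, not taken from any standard.

import numpy as np

# Bits that were adjacent in time end up 4 positions apart after interleaving, so a
# burst of errors on neighboring weakened subcarriers is broken into short pieces
# that the error correcting code can handle.
def interleave(bits, rows=4, cols=12):
    return bits.reshape(rows, cols).T.ravel()

def deinterleave(bits, rows=4, cols=12):
    return bits.reshape(cols, rows).T.ravel()

data = np.arange(48)                                      # stand-in for 48 coded bits
assert np.array_equal(deinterleave(interleave(data)), data)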
Handling ISI
The time-domain counterpart of multipath is ISI, the smearing of one symbol into the next. OFDM gracefully handles this type of multipath distortion by adding a "guard interval" to each
symbol. This guard interval is typically a cyclic or periodic extension of the basic OFDM symbol. In other words, it looks like the rest of the symbol, but conveys no 'new' information.
Since no new information is conveyed, the receiver can ignore the guard interval and still be able to separate and decode the subcarriers. When the guard interval is designed to be longer than any
smearing due to the multipath channel, the receiver is able to eliminate ISI distortion by discarding the unneeded guard interval. Hence, ISI is removed with virtually no added receiver complexity.
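A sketch of the guard interval implemented as a cyclic prefix; the symbol contents are random, and the choice of G = 16 guard samples for an N = 64 sample symbol is an assumed, 802.11a-like ratio rather than a general rule.

import numpy as np

# The last G samples of the useful symbol are copied to its front at the transmitter
# (a periodic extension that carries no new information) and simply discarded at the receiver.
N, G = 64, 16
rng = np.random.default_rng(1)
symbol = np.fft.ifft(rng.normal(size=N) + 1j * rng.normal(size=N))

def add_guard(sym, g=G):
    return np.concatenate([sym[-g:], sym])

def strip_guard(rx, g=G):
    return rx[g:]

assert np.allclose(strip_guard(add_guard(symbol)), symbol)   # nothing useful is lost by discarding it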
It is important to note that discarding the guard interval does have an impact on the noise performance since it reduces the amount of energy available at the receiver for channel symbol decoding. In
addition, it reduces the data rate since no new information is contained in the added guard interval. Thus a good system design will make the guard interval as short as possible while maintaining
sufficient multipath protection.
Why don't single carrier systems also use a guard interval? Single carrier systems could remove ISI by adding a guard interval between each symbol. However, this has a much more severe impact on the
data rate for single carrier systems than it does for OFDM. Since OFDM uses a bundle of narrowband subcarriers, it obtains high data rates with a relatively long symbol period because the frequency
width of the subcarrier is inversely proportional to the symbol duration. Consequently, adding a short guard interval has little impact on the data rate.
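To put rough numbers on this (an added illustration; the figures are 802.11a-style values recalled here, not quoted from the article): with the subcarrier spacing set to the inverse of the useful symbol duration, Δf = 1/Tu, the fraction of each transmitted symbol spent on a guard interval of length Tg is Tg/(Tu + Tg). For Tu = 3.2 µs and Tg = 0.8 µs that overhead is 20%.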
Single carrier systems with bandwidths equivalent to OFDM must use much shorter duration symbols. Hence adding a guard interval equal to the channel smearing has a much greater impact on data rate.
Wrap up on Part 1
In conclusion, OFDM is extremely well suited for wireless communication in environments where multipath is a major source of distortion such as that found in typical WLAN deployments. The combination
of multiple narrow subcarriers with interleaving and error correction coding allows OFDM to perform well in multipath while the guard interval gives the receiver an extremely simple method for
eliminating ISI. These built in waveform features allow for the design of reliable, high-rate digital wireless communications systems without the complexity that would be required by conventional
single carrier systems.
That wraps up Part 1 of this series. In part 2, which will appear on CommsDesign.com next week, we'll look at some of the design challenges for implementing OFDM in a wireless system architecture.
About the Authors
Steve Halford is currently a systems engineer for Intersil's Prism Wireless Products group. Steve received B.S. and M.S. degrees in electrical engineering from the Georgia Institute of Technology and
a Ph.D. degree in electrical engineering from the University of Virginia. He can be reached at shalford@intersil.com.
Karen Halford is a stay-at-home mom who sometimes doubles as a consultant in the design and analysis of communications systems. Karen received B.S. and M.S. degrees from the Georgia Institute of
Technology and a Ph.D. degree from the University of Virginia in the field of electrical engineering. Karen can be reached at khalford.ee88@gtalumni.org.
|
{"url":"http://www.eetimes.com/document.asp?doc_id=1277600","timestamp":"2014-04-18T18:37:06Z","content_type":null,"content_length":"137739","record_id":"<urn:uuid:baf16c0a-3afa-4897-9336-86978eb848c4>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maximum number of path for simple acyclic directed graph with start and end node
January 22nd 2013, 05:54 PM #1
Jan 2013
Maximum number of path for simple acyclic directed graph with start and end node
Say we are given a simple acyclic directed graph with n nodes, which includes a starting node s0 and an ending node e0 (i.e., a Kripke structure without loops).
What is the maximum number of paths from s0 to e0?
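A brute-force sketch (added here for illustration; it is not part of the original thread, and the construction and the pattern it suggests are mine, not the poster's): fix a topological order with s0 first and e0 last, include every forward edge to get the densest possible simple DAG, and count s0-to-e0 paths with a small dynamic program.

from functools import lru_cache

# Count s0 -> e0 paths in the complete DAG on n nodes (every edge i -> j with i < j),
# taking s0 = node 0 and e0 = node n - 1. This explores the question empirically;
# it is not a proof that this graph attains the maximum.
def paths_in_complete_dag(n):
    @lru_cache(maxsize=None)
    def count(v):
        if v == n - 1:
            return 1
        return sum(count(w) for w in range(v + 1, n))
    return count(0)

print([paths_in_complete_dag(n) for n in range(2, 9)])  # [1, 2, 4, 8, 16, 32, 64] -> suggests 2**(n-2)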
|
{"url":"http://mathhelpforum.com/discrete-math/211887-maximum-number-path-simple-acyclic-directed-graph-start-end-node.html","timestamp":"2014-04-19T19:10:27Z","content_type":null,"content_length":"30687","record_id":"<urn:uuid:5deb8541-d964-4ec0-a607-3baae3ca0756>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: RE: dmexogxt questions
st: RE: dmexogxt questions
From "Steve Stillman" <stillman@motu.org.nz>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: dmexogxt questions
Date Tue, 14 Sep 2004 00:31:36 +1200
Hi Jean. The answers to your questions are below. Cheers, Steve
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu]On Behalf Of Salvati, Jean
Sent: Saturday, September 11, 2004 8:49 AM
To: statalist@hsphsun2.harvard.edu
Subject: st: dmexogxt questions
I have two questions about dmexogxt:
1) The joint test clearly rejects the null hypothesis that all
regressors are exogenous, but the tests on individual regressors don't
reject the null for any of the regressors (not even close).
More precisely, let's say I estimate my model with the following
xtivreg y x1 (x2 x3 = z2 z3), fe
When I do "dmexogxt", the null hypothesis that all regressors are
exogenous is clearly rejected. However, when I do "dmexogxt x2" and
"dmexogxt x3", I definitely can't reject the null for either x2 or x3 at
the same level.
How can I interpret these results?
*** When you run the command dmexogxt x2, you are assuming that x3 is
definitely endogenous and are only testing that x2 is exogenous given
this assumption. For whatever reason, in your example, you cannot
clearly distinguish between (x2 endog, x3 exog), (x2 exog, x3 endog), or
(both endog). Since you do not seem to have a reason to assume either
one is definitely endogenous (thus, leading to the reduced test), my
instinct would be that you are best off treating both as being endogenous.
2) After "xtivreg y x1 (x2 = z2 ), fe", both "dmexogxt" and "dmexogxt
x2" yield F-statistics.
*** with only one possible endogenous variable, "dmexogxt" and "dmexogxt
x2" are identical tests and thus give identical results
After "xtivreg y x1 (x2 x3 = z2 z3), fe", both "dmexogxt" still gives an
F-statistic, but "dmexogxt x2" yields a chi2(1). Why is that? Is a Wald
test used in the second case, and if so why?
*** more generally, if "dmexogxt" is only run on a subset of endogenous
variables you will end up with a chi2(number tested variables) instead
of an f-test. This occurs because the auxiliary regression being run
for the test is now an IV regression (we still need to instrument for
the variables left out of the test) as opposed to an OLS regression (the
case when all possible endogenous variables are being tested).
Thanks a lot.
Jean Salvati
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2004-09/msg00336.html","timestamp":"2014-04-16T04:37:22Z","content_type":null,"content_length":"7369","record_id":"<urn:uuid:04c6a3fa-f688-4120-a53f-e9bcbcc32269>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Upper Marlboro SAT Math Tutor
Find an Upper Marlboro SAT Math Tutor
...Finally, as an instructor at the U.S. Naval Academy, I conducted class with discussion and lecture styles for three years teaching over 350 students. I have worked with teachers, therapists,
and psychiatrists to adjust medication, train reactions, teach different approaches and create successful environments for both my own son and for my tutoring students.
47 Subjects: including SAT math, Spanish, reading, English
...I have been teaching elementary and secondary math for 21 years. In addition, I have been tutoring areas in basic math through college level math for 25 years. I enjoy math and I really enjoy
helping students gain a better understanding of mathematics.
15 Subjects: including SAT math, geometry, GRE, ASVAB
...I soon discovered that many students were losing ground in advancing because of gaps in the very fundamentals of their education. I chose to work at the 1st grade to work on the basic reading,
phonics, and math concepts that ensure each student is ready to move ahead with confidence to the next ...
20 Subjects: including SAT math, reading, English, study skills
...Early success in math is essential for so many fields of study. I want my students to feel like math is their strength and not something holding them back. I love teaching test preparation
because it helps my students achieve their dreams.
12 Subjects: including SAT math, geometry, ASVAB, GRE
...I can also guide your child in becoming more organized and making their study time more efficient. I have tutored students in elementary, middle, and high grades. I have developed fun
activities for students to actually have fun while they are learning.
18 Subjects: including SAT math, reading, writing, calculus
|
{"url":"http://www.purplemath.com/upper_marlboro_md_sat_math_tutors.php","timestamp":"2014-04-17T13:21:23Z","content_type":null,"content_length":"24158","record_id":"<urn:uuid:11437943-bb26-4280-8677-68753975630c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nonantum Calculus Tutor
Find a Nonantum Calculus Tutor
...I currently hold a master's degree in math and have used it to tutor a wide array of math courses. In addition to these subjects, for the last several years, I have been successfully tutoring
for standardized tests, including the SAT and ACT.I have taken a and passed a number of Praxis exams. I even earned a perfect score on the Math Subject Test.
36 Subjects: including calculus, English, reading, chemistry
...My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students understand the core concepts and building blocks necessary to succeed not only in
their current class but in the future as well. I am a second year graduate student at MIT, and bilingual in French and English.
16 Subjects: including calculus, French, elementary math, algebra 1
...I also took 3D calculus at MIT. While tutoring in my junior and senior year of college, I tutored freshman in calculus. I took geometry in high school.
10 Subjects: including calculus, physics, geometry, algebra 2
...At the end of this course, a student should be able to make and critique logical arguments and calculate missing parts of a geometric diagram. Pre-calculus is the study of function families and
their behavior. It includes transformations, trig and lots of modeling of real world behavior.
23 Subjects: including calculus, physics, geometry, statistics
...I am a second year graduate student at Brandeis University in the International Business School. I tutor students in high school, college, and at home with various math, and macroeconomic/micro
economic problems. Each person should understand the steps and strategies towards different processes to approach the possible solutions.
25 Subjects: including calculus, statistics, algebra 1, geometry
|
{"url":"http://www.purplemath.com/nonantum_ma_calculus_tutors.php","timestamp":"2014-04-20T04:36:20Z","content_type":null,"content_length":"24002","record_id":"<urn:uuid:db1c230c-b214-4c30-af82-b68d3bf9f56e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
|
September 2
« August 2005 | Main | October 2005 »
September 30, 2005
Correct all mistakes from Thursday’s homework-worksheets. On a clean, new sheet of graph paper or notebook paper, redo each missed problem, showing your work. Circle or highlight your answers.
Parents-please sign the Vocabulary Quiz and return on Monday. Thank you!! Students who lost 5 points because they still forget their name, date, and period may regain the 5 points by remembering
their name, date, and period each day this next week. I look forward to adding 5 points back!!
DON'T LET THE 'FRENZY' GET YOU. (see Fraction Frenzy post)
Posted by Beals at 04:34 PM
September 29, 2005
Complete Vocabulary study guide for tomorrow's quiz.
Complete worksheets (6) converting improper fractions/mixed numbers. Show your work; label division problems with W=whole number, N=numerator, and D=denominator.
Posted by Beals at 03:49 PM
September 28, 2005
Math workbook, lesson 5-5, practice page, all problems. Show your work -- converting mixed numbers to improper fractions, show x and + for each problem; converting improper fractions to mixed
numbers, show division problem and label with W=whole number, N=numerator, and D=denominator. If you write carefully, you can show your work on the workbook page, if not use notebook or graph paper.
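A quick worked example of that labeling (added here as an illustration; it is not part of the assignment): for 7/3, divide 7 ÷ 3 = 2 remainder 1, so W = 2, N = 1, and D = 3, giving 7/3 = 2 1/3. Going the other way, 2 1/3 = (2 × 3 + 1)/3 = 7/3, which is where the "x and +" come from.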
VOCABULARY QUIZ on Friday. Applying vocabulary words to problem solving.
Posted by Beals at 12:19 PM
September 23, 2005
Pick ONLY 2 word problems from Worksheet packets #21, 22, OR 23 that you missed. Re-work the 2 problems on the "Solving Word Problems" worksheet you received in class today. Write how you solved the
problems in complete sentences.
Posted by Beals at 12:01 PM
September 22, 2005
Complete worksheet packet #23, show your work on graph or notebook paper.
Posted by Beals at 03:59 PM
September 21, 2005
Complete worksheet packet, lesson 22. Use notebook or graph paper to show your work.
Remember: Study to beat the "Frenzy"!! Try the web sites for fraction fun.
Posted by Beals at 02:15 PM
September 20, 2005
Complete worksheet packet #21, show your work on graph or notebook paper.
Posted by Beals at 04:47 PM
September 19, 2005
Math textbook, page 570, lesson 5-4, problems 1-20. SYW. This is review and you can use your notes from Lesson 5-4 and 5-4 in the workbook to help you remember when to find and use the GCF to
simplify fractions.
Posted by Beals at 03:58 PM
September 16, 2005
A rational number is any number that can be expressed as a fraction or a ratio. That is a big world of numbers!! Don't get caught in a 'Fraction Frenzy', put your brain to work and THINK math. Old
skills and new skills will help you survive the 'frenzy'!
Skills you will need to beat the 'frenzy':
Use GCF to simplify (reduce to lowest term) fractions
Use LCM to compare fractions and write equivalent fractions
Changing fractions from improper to mixed numbers and mixed numbers to improper fractions
Adding and subtracting fractions with like/unlike denominators
Multiplying and dividing fractions with like/unlike denominators
Solving word problems involving fractions
Fraction websites for some fun practice, so the 'frenzy' doesn't catch you:
Don't forget hotmath.com, our school password was sent to your parents in an email.
Posted by Beals at 09:59 PM
Complete "Word Problems" worksheet. Pay close attention to the directions to "circle key words". Show your work (SYW)
Have a great weekend!
Posted by Beals at 11:21 AM
September 14, 2005
In class, we corrected all our mistakes on the Factors and Multiples Test. Please look over this Test with your son/daughter, then SIGN the test and return to school tomorrow.
To wrap up the unit, we are currently completing a "class project" completing the GCF/LCM worksheet in class-independently. This is the same as Monday's (9/12) homework; tonight would be a great time
to look over that worksheet again.
Posted by Beals at 12:40 PM
September 12, 2005
Although we will not be formally addressing Divisibility Rules after this week, they are an essential skill in working with large division problems, rational numbers, and more. Below is a link to a
Jeopardy game/power point that is a fun way to keep those rules sharp and fresh in your mind. Enjoy reviewing!
Posted by Beals at 01:24 PM
TEST tomorrow.
Complete worksheet solving for GCF and LCM.
Get Friday's Vocab. Quiz signed by a parent.
We went over Friday's Factors and Multiples Review homework in class today. Students should go over each problem, maybe even teach them to someone else, as they study for tomorrow's test. The test is
very like the review page.
I will have a study session at 8:30 tomorrow morning. I will pick up students from the theater at 8:30.
Posted by Beals at 12:13 PM
September 09, 2005
Complete worksheet - Factors and Multiples Review. This is the study guide for the Factors and Multiples Test we will take next Tuesday.
Posted by Beals at 12:38 PM
September 08, 2005
Complete Vocabulary Study Guide. Vocabulary quiz tomorrow.
We will have a test next Tuesday over Factors and Multiples. A review/study guide will go home Friday.
Posted by Beals at 10:42 AM
September 07, 2005
Math workbook, Practice page 5-7, odd problems. SYW
Posted by Beals at 02:54 PM
September 06, 2005
Math workbook, Practice page 5-7, even problems. SYW.
Posted by Beals at 09:52 AM
September 02, 2005
Worksheet – simplifying fractions, using GCF. Show your work on notebook or graph paper. Get Vocabulary Quiz signed by parent.
Posted by Beals at 02:00 PM
September 01, 2005
Complete Vocabulary Study Guide. Vocabulary Quiz tomorrow.
Posted by Beals at 06:42 PM
|
{"url":"http://mabryonline.org/blogs/beals/archives/2005/09/index.html","timestamp":"2014-04-20T12:03:28Z","content_type":null,"content_length":"15006","record_id":"<urn:uuid:3a5effa0-f917-4d0a-9f4c-deab79f9b347>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Z94.1 - Analytical Techniques & Operations Research Terminology
FACTOR ANALYSIS. A branch of multivariate analysis in which the observed variates x_i (i = 1, 2, ..., p) are supposed to be expressible in terms of a number m < p of factors f_j, together with residual variates.
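In symbols (an illustrative form added here, not part of the glossary entry), the model is commonly written as x_i = a_i1 f_1 + a_i2 f_2 + ... + a_im f_m + e_i for i = 1, 2, ..., p, where the a_ij are the factor loadings and the e_i are the residual variates.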
FAILURE. The termination of the ability of any item to perform its required function under stated environmental conditions for a specified period of time. [3]
FAILURE ANALYSIS. The logical, systematic examination of an item or its diagram(s) to identify and analyze the probability, causes, and consequence of potential and real failures. [28]
FAILURE, CATASTROPHIC. Failures which are both sudden and complete. [20]
FAILURE, COMPLETE. Failure resulting from deviations in characteristic(s) beyond specified limits such as to cause complete lack of the required function. Note: The limits referred to in this
category are special limits specified for this purpose. [20]
FAILURE CRITERIA. Rules for failure relevancy such as specified limits for the acceptability of an item. [20]
FAILURE, DEGRADATION. Failures which are both gradual and partial. [20]
FAILURE, DEPENDENT. One which is caused by the failure of an associated item(s). Not independent. [28]
FAILURE, GRADUAL. Failures that could be anticipated by prior examination. [20]
FAILURE, INDEPENDENT. One which occurs without being related to the failure of associated items. Not dependent. [28]
FAILURE, INHERENT WEAKNESS. Failures attributable to weakness inherent in the item itself when subjected to stresses within the stated capabilities of that item. [20]
FAILURE MECHANISM. The physical, chemical or other process which results in a failure. [20]
FAILURE, MISUSE. Failures attributable to the application of stresses beyond the stated capabilities of the item. [20]
FAILURE MODE. The effect by which a failure is observed; for example, an open or short circuit condition, or a gain change. [20]
FAILURE, PARTIAL. Failures resulting from deviations in characteristic(s) beyond specified limits but not such as to cause complete lack of the required function. [20]
FAILURE, RANDOM. Any failure whose occurrence is unpredictable in an absolute sense but which is predictable only in a probabilistic or statistical sense. [28]
FAILURE RATE. The rate at which failures occur in a certain time interval; i.e., the probability that a failure per unit time occurs in the interval, given that a failure has not occurred prior to the
start of the interval. [28]
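Expressed as a formula (an illustrative addition, not the standard's wording), this is the hazard function λ(t) = f(t) / [1 - F(t)] = f(t) / R(t), where f is the failure probability density, F is the cumulative distribution of time to failure, and R is the reliability (survivor) function.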
FAILURE RATE ACCELERATION FACTOR. The ratio of the accelerated testing failure rate understated reference test conditions and time period. [20]
FAILURE RATE, ASSESSED. The failure rate of an item determined as a limiting value or values of the confidence interval with a stated confidence level, based on the same data as the observed failure
rate of nominally identical items.
FAILURE RATE, EXTRAPOLATED. Extension by a defined extrapolation or interpolation of the observed or assessed failure rate for durations and/or conditions different from those applying to the
conditions of that observed or assessed failure rate.
FAILURE RATE, OBSERVED. The ratio of the total number of failures in a sample to the total cumulative observed time on that sample. The observed failure rate is to be associated with particular, and
stated time intervals (or summation of intervals) in the life of the items, and with stated conditions.
FAILURE RATE, PREDICTED. For the stated conditions of use and the design considerations of an item, the failure rate computed from the observed, assessed or extrapolated failure rates of its parts.
FAILURE, SECONDARY. Failure of an item caused either directly or indirectly by the failure of another item. [20]
FAILURE, SUDDEN. Failures that could not be anticipated by prior examination. [20]
FAILURE, WEAR-OUT. A failure which occurs as a result of deterioration processes or mechanical wear and whose probability of occurrence increases with time. [20]
FAIR GAME. In probability theory, a game consisting of a sequence of trials is deemed to be a “fair” game if the cost of each trial is equal to the expected value of the gain from each trial. In game
theory, a game in which, with proper play, neither adversary has an advantage. [21]
FARKAS' LEMMA. If for every solution of WA ≤ 0 it is also true that Wb ≤ 0, then there exists a vector X ≥ 0 such that AX = b; and thus (WA)X = Wb. (If a linear homogeneous inequality Wb ≤ 0 holds
for all W satisfying a system of homogeneous inequalities WA ≤ 0, then the inequality can be expressed as a non-negative combination of the inequalities of the system WA ≤ 0.) [11]
FATHOM. A term used in branch and bound algorithms to indicate a node has been fully explored; i.e., it has been determined that the node cannot contain a solution better than the incumbent.
FEASIBLE BASIS. A basis yielding a basic feasible solution (q.v.). [19]
FEASIBLE SOLUTION. A solution satisfying the constraints of a mathematical programming problem in which all variables are non-negative. [19]
FINITE AND INFINITE GAMES. A game is finite if each player has only a finite number of possible pure strategies; it is infinite if at least one player has an infinite number of possible pure
strategies (e.g., a pure strategy might ideally consist of choosing an instant from a given interval of time at which to fire a gun). [21]
FRACTIONAL PROGRAMMING. A class of mathematical programming problems in which the objective function is the quotient of linear functions. [19]
FRAME. The list of units, or items, accessible for test. Each unit has a serial number associated with it, actually or conceptually. If there are units in the population that are not covered by the
frame, statistical inferences (estimates, confidence limits, etc.) refer to the frame, not the population. Generalizations from the frame to the population must be based on judgment. [3]
|
{"url":"http://www.iienet.org/printerfriendly.aspx?id=1588","timestamp":"2014-04-17T07:50:24Z","content_type":null,"content_length":"9442","record_id":"<urn:uuid:adfb09d4-45f8-4f62-95a5-676eb659cec3>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Castle Pines, CO Calculus Tutor
Find a Castle Pines, CO Calculus Tutor
...I give a note includes all kinds of formulas every session. In order to understand the Newton Laws, I put three of them together and compare the difference. I find out comparing two or more
similar concepts is very effective.
27 Subjects: including calculus, physics, Chinese, geometry
...You might compare information about the population mean and standard deviation with a given sample value, and use the combination to calculate a z-score - and then you might use that z-score
to determine if a specific sample element is "typical" for the population, or is unusual. Or, you might ...
18 Subjects: including calculus, geometry, statistics, algebra 2
...I can teach how to read, write and speak in this language. I can help you to start a conversation with people and how to carry on with a conversation. Based on my great understanding of the
local culture, I can be a great companion for practicing a conversation.
11 Subjects: including calculus, physics, geometry, algebra 1
...I listen carefully to my students and work with them to devise strategies that work for them. Every student is unique and every session is personalized depending on where the student is at
during that session. With Math, Physics and Chemistry tutoring, my students learn to appreciate real-world applications of the concepts they grapple with making learning fun and relevant.
26 Subjects: including calculus, chemistry, physics, geometry
...This allowed me to bring my love of math to others. Helping others with math became as rewarding as my own studies. As a department tutor I was able to help students who were from many
backgrounds, not just math majors.
6 Subjects: including calculus, geometry, algebra 1, algebra 2
Related Castle Pines, CO Tutors
Castle Pines, CO Accounting Tutors
Castle Pines, CO ACT Tutors
Castle Pines, CO Algebra Tutors
Castle Pines, CO Algebra 2 Tutors
Castle Pines, CO Calculus Tutors
Castle Pines, CO Geometry Tutors
Castle Pines, CO Math Tutors
Castle Pines, CO Prealgebra Tutors
Castle Pines, CO Precalculus Tutors
Castle Pines, CO SAT Tutors
Castle Pines, CO SAT Math Tutors
Castle Pines, CO Science Tutors
Castle Pines, CO Statistics Tutors
Castle Pines, CO Trigonometry Tutors
Nearby Cities With calculus Tutor
Cadet Sta, CO calculus Tutors
Crystola, CO calculus Tutors
Deckers, CO calculus Tutors
Dupont, CO calculus Tutors
Fort Logan, CO calculus Tutors
Foxton, CO calculus Tutors
Lowry, CO calculus Tutors
Montbello, CO calculus Tutors
Montclair, CO calculus Tutors
Roxborough, CO calculus Tutors
Sedalia, CO calculus Tutors
Tarryall, CO calculus Tutors
Welby, CO calculus Tutors
Western Area, CO calculus Tutors
Woodmoor, CO calculus Tutors
|
{"url":"http://www.purplemath.com/Castle_Pines_CO_calculus_tutors.php","timestamp":"2014-04-19T23:41:16Z","content_type":null,"content_length":"24156","record_id":"<urn:uuid:2c071616-1f57-4313-b996-407317108b46>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math in Focus: The Singapore Approach
Publisher: Houghton Mifflin Harcourt/Saxon Homeschool
Review last updated: July 2010
Math in Focus
Singapore Math fans now have another choice!
Like the original Singapore Primary Math series, Math in Focus is especially strong in developing conceptual understanding. It differs from the original Singapore series in that it is also aligned
with the NCTM (National Council of Teachers of Mathematics) standards… the math standards that prevail in the U.S. However, unlike most programs aligned with the NCTM standards, it does not try to
teach every concept every year. Instead, it focuses on fewer topics but teaches them thoroughly so they need not be retaught continually.
The scope and sequence is advanced as with the original series. A major focus is upon preparing students for success in algebra. Consequently, algebraic thinking and expressions are introduced early
and used frequently throughout the series.
Throughout the series, concepts are taught moving through a sequence of concrete to pictorial to abstract. Concrete learning happens through hands-on activities with manipulatives such as counters,
coins, number lines, or Base Ten Blocks™. Pictorial learning uses pictures in student books, drawings, or other forms that illustrate the concept with something more than abstract numbers. The
abstract stage is the more familiar way most math problems are taught and practiced with numbers and symbols. Manipulatives are used throughout all levels, but they are used much more frequently in
younger levels than older. But even at fifth grade level, manipulatives are still used occasionally, while pictorial illustrations are prevalent in all the lessons—far more than in other upper
elementary programs.
While there are Classroom Manipulatives Kits for MIF, each Teacher’s Edition has a chart showing simple alternatives that will work. For example, I’ll chart here some of the first grade kit
components and the suggested alternative:
│ Component │ Alternate │
│ attribute blocks │ seashells, pasta, buttons │
│ coin and bill combination set │ real coins and bills made from construction paper │
│ craft sticks │ marker set, unused pencils │
│ demonstration clock │ cardboard clock face with hands attached with brad │
│ number cubes │ number cards, spinners │
│ pan balance │ ruler, paper clips, and string │
Virtual Manipulatives CD-ROMs are also available—one CD for grade K-2 and another for grades 3-5. They are a bit expensive for use by a single family, but some might find the investment worthwhile.
They are super easy to use. You can select a type of manipulative and move it on the computer screen to demonstrate concepts. They are quite simple to use—enough so that you could let your children
manipulate them too. Virtual Manipulatives do not replace all manipulatives in the program, but they could replace most of them. Keep in mind that the object of using manipulatives is to allow the
hands-on learning experience that might be critical for some learners, so Virtual Manipulatives should be used judiciously.
Along the same line, calculator usage is taught in the fifth level. It is sometimes suggested as optional within a lesson, but there are lessons dedicated to learning how to use it. In this program,
students should be well-grounded in their computation skills by the time they hit calculator use, so it should not interfere with mastery of computation skills.
Lessons at all levels also follow the same progression. A lesson begins with the teaching presentation. Next, the teacher walks students through guided practice. Then students do independent
Both guided and independent practice problems are in the hardcover student books, while the consumable workbooks are designed for only independent work. This means that students will need separate
paper or a notebook for their work in their hardcover textbook unless you don’t mind treating it as a consumable book.
Lessons concentrate on a single concept rather than providing continual practice on previously-learned concepts. However, review is provided in a section at the beginning of each chapter titled
“Recall Prior Knowledge.” In addition, word problems, practical application problems, and critical thinking activities included throughout the lessons frequently draw on a wide range of mathematical
knowledge. The goal of Math in Focus is to teach concepts so thoroughly that frequent review is unnecessary.
The entire presentation in Math in Focus really challenges students to think much more deeply about mathematics than do most other programs. Students can’t just breeze through the lessons. Students
who grasp concepts easily will likely do very well with this program. For students who struggle, you might slow the pace and take more time with the concrete and pictorial lessons as well as offer
extra guided practice.
Math in Focus mirrors the original Singapore Math’s layout with books A and B for each level for Teacher’s Editions, student textbooks, and student workbooks, essentially splitting the course for
each level into two parts. Kindergarten is the exception to this layout since it has only workbooks rather than separate texts and workbooks; there are four of these relatively thin kindergarten
student workbooks. (Note: Kindergarten level is very classroom oriented. It is very dependent upon presentation from the Teacher’s Edition. Much of the teaching involves group interaction. Some of
this can be adapted for a parent working with only one child, but this level is the most problematic for homeschool use.)
A separate assessment book is available for each level but is not essential. One book covers both parts A and B. Student texts have chapter reviews/tests. Student workbooks have cumulative reviews at
the end of every few chapters as well as a mid-year and end-of-year review/test. These reviews/tests should be adequate for assessment in most situations, but you can purchase an assessment book if
you need more.
The program is designed to be taught from the Teacher’s Edition. The TE for each level has lots of useful information in addition to detailed lesson plans. Reduced student pages are shown in the TE
so you know which pages to use when. Answers overprinted on the reduced student pages serve as your answer key. The TE is essential for kindergarten, but it is possible to work with only student
texts and workbooks for first grade and up as long as you've got time to work out problems to check student answers without an answer key. TE's explain how to present each lesson and also suggest
additional activities. They point out common errors children tend to make and suggest solutions. In addition, at the back of each TE a section of reproducible Teacher Resource pages is to be used as
teaching aids for the pictorial and concrete lessons. You might find it difficult to figure out what these are supposed to be and how to use them without the TE. You will definitely miss out on some
elements of the lessons without the TE, but concepts are presented in so many ways, that you might find that you are not even aware that anything is missing. I would recommend to most parents that
unless they already have a strong background in teaching math with manipulatives, they should get the TE for at least grades 3 and up. (I’ve already mentioned that the kindergarten TE is
indispensable.) In the TE’s for K through 2, the teaching part of the lesson has fairly explicit instructions that tell you what to say and do. These instructions become less explicit for third grade
and up, but they are very helpful. Even with the TE, some assumptions are made that the teacher has a fairly good grasp of math and mathematical vocabulary. I suspect that parents who struggle with
math themselves might sometimes find the TE’s past the first few grade levels difficult to understand.
Math in Focus is presented as a more polished product than Singapore Math. Sturdy and extensive Teacher’s Editions, hardcover texts, and full-color printing of both TE’s and student texts contribute
toward making MIF a beautiful looking and thoroughly developed program, although it is more expensive than Singapore Math. I suspect that MIF will quickly become a very popular option among homeschoolers.
All prices are provided for comparison only and are subject to change. Click on prices to verify their accuracy.
Math in Focus: Singapore Math: Homeschool Package, 1st Semester Grade1 2010
• $129.70 List Price at Christianbook.com
• $129.70 List Price at Amazon.com
Math in Focus Manipulative Kit Grade 3
• $82.50 List Price at Rainbowresource.com
Math in Focus Complete Manipulative Kit - Grades K-5
• $237.23 List Price at Rainbowresource.com
Math in Focus: Singapore Math: Homeschool Package, 2nd semester Grade 6 2012
• $87.40 List Price at Christianbook.com
• $87.40 List Price at Amazon.com
Math in Focus: Singapore Math: Homeschool Package, 1st semester Grade 6 2012
• $87.40 List Price at Christianbook.com
• $87.40 List Price at Amazon.com
Math in Focus Grade 5 Homeschool Package - 2nd Semester
Math in Focus Grade 4 Homeschool Package - 2nd Semester
Math in Focus Grade 3 Homeschool Package - 2nd Semester
Math in Focus Grade KA Kit 1st Semester (Singapore Math)
• $116.75 List Price at Christianbook.com
Math in Focus Grade 5 Homeschool Package - 1st Semester
Math in Focus Grade 4 Homeschool Package - 1st Semester
Math in Focus: Singapore Math: Homeschool Package, 2nd Semester Grade 2 2010
• $129.70 List Price at Amazon.com
• $129.70 List Price at Christianbook.com
Math in Focus Grade 3 Homeschool Package - 1st Semester
Math in Focus Grade 2 Homeschool Package - 1st Semester
Math in Focus Grade 1B Kit 2nd Semester (Singapore Math)
• $129.70 List Price at Christianbook.com
Math in Focus: Singapore Math: Homeschool Package, 2nd Semester Grade K 2010
• $116.75 List Price at Amazon.com
• $116.75 List Price at Christianbook.com
Instant Key
• Suitable for: class or one-on-one teaching
Need for parent/teacher instruction: moderate to high
Prep time needed: moderate
Teacher's manual: very useful/sometimes essential
Religious perspective: Secular/neutral
Publisher's Info
• Houghton Mifflin Harcourt/Saxon Homeschool
|
{"url":"http://cathyduffyreviews.com/math/math-in-focus.htm","timestamp":"2014-04-17T12:41:27Z","content_type":null,"content_length":"32667","record_id":"<urn:uuid:1015e44c-8fce-4895-acaa-505d15264b73>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Formulae Collections
Air Dispersion Modeling
Conversions and formulas
Astronomy Formulas
Bond Enthalpies
Average bond enthalpies in kJ/mol - Format: PDF
Reference Tables for Physical Setting - Format: PDF
Electrical Engineering
Basic electrical engineering formulas - Format: PDF
Engineering Formulas
Data table: acid-base indicators - Format: PDF
Common ions: periodic chart of ions - Format: PDF
Tables of common polyatomic ions
Magnet Formulas
A small web site devoted to the vanishing art of practical magnet design without FEA (Finite Element Analysis)
Mathematical Tables and Formulas
This site is a place for students and educators to quickly access mathematical formulas
Formulas and Tables
Measurement Formulas
Here are some measurement formulas from the different parts of geometry. You'll find some two-dimensional and some three-dimensional formulas (USA) [e]
… of common compounds in water - Format: PDF
Tesla Coil Formulas
Tesla coil formulas for a reference to those who prefer to make calculations on paper
Thermochemical Data
… of Selected Elements and Compounds - Format: PDF
Thermodynamics Formulas
|
{"url":"http://www.internetchemistry.com/chemistry/formulas.htm","timestamp":"2014-04-21T12:07:54Z","content_type":null,"content_length":"33531","record_id":"<urn:uuid:f45f7d54-3313-4b89-9ed6-e8d6aa04bc88>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dan Chisholm Mock Exam
Author Dan Chisholm Mock Exam
Ranch Hand
Joined: Jul 28, 2003
Posts: 30

I am confused on a control question,

class JMM110 {
    public static void main (String[] args) {
        int j = 0;
        do for (int i = 0; i++ < 2; ) System.out.print(i);
        while (j++ < 2);
}}

answer = Prints: 121212
This would make sense to me if it was i++ <= 2 and j++ <= 2, otherwise how does it get to 2 for either test.
Ranch Hand
Joined: Jan 16, 2004
Posts: 41

in both conditional statements you use the ++ POSTfix operator. It does the following: it first returns the value of the variable, and then increments it.
so in the for loop i++ < 2 means: compare i to 2, then increment it.
if i is 0 initially this would happen:
is 0 less than 2? yes (the expression evaluates to true)
now add 1 to i (i is 1)
print i (which is 1)
on the next iteration i is 1 so...
is 1 less than 2? yes (the expression evaluates to true)
now add 1 to i (i is 2)
print i (which is 2)
the next iteration breaks out of the loop since 2 is not less than 2
change the postfix ++ to prefix and you'll get the desired results.
The sword of destiny has two blades, one of them is you.
Ranch Hand
Joined: Jan 12, 2004
Posts: 384

so in the for loop you see what is happening: you start by making i=0, then check 0 < 2, then increment.
so i=1, 1 < 2 then increment i=2, so output is 12
2 is not less than 2 so check while.
while is 0 < 2, then increment, so start the for loop again from the start, so output is: 1212
check while 1 < 2, true, so do the for loop again.
output is 121212, then check while.
2 is not less than 2 so false so stop the do statement.
hope this helps you with the output.
How simple does it have to be???
subject: Dan Chisholm Mock Exam
|
{"url":"http://www.coderanch.com/t/244913/java-programmer-SCJP/certification/Dan-Chisholm-Mock-Exam","timestamp":"2014-04-19T06:54:40Z","content_type":null,"content_length":"24519","record_id":"<urn:uuid:62e99d4f-24a1-49a5-b482-93b2c2d91652>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Everyday Mathematics
About the Everyday Mathematics Author Group
The Center for Elementary Mathematics and Science Education at the University of Chicago is home to the authors of Everyday Mathematics.
The development of Everyday Mathematics involves mathematicians, mathematics educators, classroom teachers, and experienced mathematics textbook writers and editors. Short bios are available for most
of the authors, advisors, and technical staff.
Click on a grade level to see authors:
Pre-K Kindergarten
Grade 1 Grade 2
Grade 3 Grade 4
Grade 5 Grade 6
Deborah Arron Leslie, Ann E. Audrain, Jean Bell, Max Bell, Jeanine O'Nan Brownell
Technical Art
Mathematics and Technology Advisor
UCSMP Editorial
Patrick Carroll, Tiffany Nicole Slade
Arthur J. Baroody
Regina Littleton, Kriszta Miner, Tracy Aiden, Jane Averill, Barbara Bryan, Pat Chamberlain, Barbara Clear, Pam Dayhoff, Amanda DiMattina, Christina Duffy, Shelia Forde, Dorothy Freedman, Anja
Gadbois, Maria Gallagher, Carmen Garay, Gabriella Gavino, Jeff Gillespie, Ana Gonzalez, Ethel Gue, Megan Hillegass, Barbara Jeffries, Ophe Jones, Julie Jurgens, Margaret Krulee, Vicki Laskaris,
Lesley May, Colleen McAllister, Barbara McMurray, Kathy McQueen, Jackie Morgan, Donna Oline, Jenny Hennig Parent, Diane Patterson, Elizabeth Regan, Esperanza Rivas, Irma Rodriguez, Alison
Schwartz, Sheila Sconiers, Earnice Sheppard, Chad Taylor, Mary Kate Toner, Linda Van Ausdall, Maria Viteri, Julie Vogel, Mary Watt, Derek Young, Karyl Zahorik
• Deborah Arron Leslie
Debbie Leslie is a senior curriculum developer, early childhood specialist, and Director of Science Companion Projects at the University of Chicago's Center for Elementary Mathematics and Science
Education (CEMSE). She provides professional development, coaching, and consultation to support the implementation of Everyday Mathematics and inquiry-based science curricula in the University of
Chicago charter schools, the Chicago Public Schools, and elsewhere.
Ms. Leslie is also the Early Childhood Team Leader for the elementary component of UCSMP. In this capacity, she led the author team that developed the new version of Pre-Kindergarten Everyday
Mathematics and the team that worked on the 3rd edition revisions for Kindergarten Everyday Mathematics. Leslie is also one of the authors of Science Companion, an inquiry-based elementary school
science curriculum.
At CEMSE, Leslie works on several projects that draw on her background in science and her interest in high-quality professional development in the areas of math, science, and early childhood
education. Leslie taught Pre-Kindergarten, Kindergarten, and First Grade for 10 years in Connecticut and in the Chicago area. She has also done work for the Bush Center for Child Development and
Social Policy at Yale University, the Field Museum, and the Rochelle Lee Fund in Chicago, Illinois, and has done presentations, consulting, and professional development for several other
educational organizations. She holds a Bachelor's Degree in Molecular Biochemistry and Biophysics from Yale University and a Master's Degree in Teaching from the University of Chicago.
• Jean Bell
Jean Bell is one of the founding authors of Everyday Mathematics and part of the Author panel that has responsibility for general oversight of the Publishing Agreement for Everyday Mathematics
and the Center Agreement that established the Center for Mathematics and Science Education (CEMSE) at the University of Chicago.
Early in her career Jean Bell was a microbiologist in the California State Public Health Laboratories and in the laboratory of Dr. Marc Beem at the University of Chicago. In a career change after
some years of full-time parenting Bell began teaching pre-school children and then with an MST degree from the University of Chicago became a primary school teacher.
In the early 1980's Bell did interview-based research on the mathematics capabilities of young children that showed very clearly that these capabilities had been seriously underestimated. This
led her to work with other founding authors within the University of Chicago School Mathematics Project (UCSMP) on development of the grades K-6 Everyday Mathematics curriculum.
On completion of the First Edition of Everyday Mathematics she returned to her science roots, founding with others the Chicago Science Group to develop the K-5 Science Companion curriculum. That
curriculum, distributed by Pearson Scott Foresman, seeks to be inquiry based, relatively easy to implement, firmly rooted in "reform" oriented national standards, and easily integrated with such
programs as Everyday Mathematics.
• Max Bell
Max Bell is Professor Emeritus, Department of Education and the Physical Sciences Division at the University of Chicago and is affiliated with the University of Chicago Center for Mathematics and
Science Education (CEMSE). He is one of the founding authors of Everyday Mathematics, and part of the Author panel that has responsibility for general oversight of the Publishing Agreement for
Everyday Mathematics and of the Center Agreement that established CEMSE.
Bell shifted in 1960 from teaching high school students to teaching teachers in the then-new MAT program at the University of Chicago. (He had earlier been in an influential NSF funded Academic
Year Institute for mathematics teachers conducted by the UC mathematics department.) He spent a decade as MAT Mathematics Coordinator while also working with UC people, SMSG, and other
organizations on reform-oriented secondary school mathematics materials. But as it became very clear that many children (and nearly all Chicago inner city children) entered secondary school with
little understanding of mathematics, Bell shifted his attention to elementary school mathematics instruction and teacher preparation.
Bell's widely reprinted 1973 article ("What does 'everyman' really need from school mathematics?") set an ambitious content agenda that anticipated the 1989 NCTM "Standards." Structured
interviews of several hundred five to nine year old children clearly showed that their mathematics learning capabilities were much greater than had been supposed. At the same time, textbook
analyses and interviews with teachers revealed an essentially vacuous primary school mathematics curriculum. With those foundations established by 1985, Bell joined with others in the University
of Chicago School Mathematics Project (UCSMP) in research, development, field testing, and widespread dissemination of the Everyday Mathematics curriculum for grades K-6.
Bell continues his interest in improvement of elementary school science and mathematics teaching, now focused on maximizing the potential for "educative curricula" (from which teachers learn as
they teach) to attack well known problems of scale in helping in-service teachers better understand and teach science and mathematics. Also, Bell and CEMSE colleagues are just beginning
conceptualization and specifications for a coherent "web based curriculum" for any individual who for any reason wishes to learn mathematics, from basic counting and arithmetic through data
analysis, algebra, or geometry.
• Diana Barrie
Diana Barrie received her BFA and MFA from the Art Institute of Chicago. She taught photography and film-making at the University of Northern Iowa and the University of Wisconsin at Milwaukee.
After working in the reprographics industry in New York City and Chicago, she came to UCSMP in 1990, where she has performed various duties, including creating technical illustrations for all
grade levels of Everyday Mathematics. She has also created illustrations for Science Companion and the Center for Urban School Improvement's STEP Literacy Assessments.
• Jim Flanders
Jim Flanders is a researcher at the Center for Elementary Mathematics and Science Education (CEMSE) at the University of Chicago where his focus is on integrating technology into the mathematics
curriculum. He is a contributing author of several University of Chicago School Mathematics Project (UCSMP) books including Everyday Mathematics, Transition Mathematics, Algebra, Advanced
Algebra, and Functions, Statistics and Trigonometry.
Prior to joining CEMSE he was a member of the Chicago Science Group helping develop the field testing of the Science Companion and evaluating software for elementary school mathematics. He also
wrote calculator software for the Core-Plus Mathematics Project, and was a consultant to the Everyday Learning Corporation and the Louisiana Systemic Initiative Project.
Flanders has been an assistant professor in the department of mathematics and statistics at Western Michigan University where he had NSF support to develop a course for preservice mathematics
teachers on integrating technology into secondary school mathematics. He has also been an instructor of mathematics at the Colorado College and an academic dean and mathematics department chair
at The Colorado Springs School. He has a B.A. in mathematics from Colorado College and a Ph.D. in mathematics education from the University of Chicago.
Kindergarten

Jean Bell, Max Bell, David W. Beer*, Dorothy Freedman, Nancy Guile Goodsell†, Nancy Hanvey, Deborah Arron Leslie, Kate Morrison

† First Edition only
* Third Edition only
Technical Art
Teachers in Residence
Ann E. Audrain, Margaret Krulee, Barbara Smart
Mathematics and Technology Advisor
UCSMP Editorial
Patrick Carroll, Lila K. Schwartz, Tiffany Nicole Slade
3rd Edition ELL Consultant
Regina Littleton, Kriszta Miner, Deborah Adams, Patrick Carroll, Moira Erwine, Carolyn Frieswyk, Serena Hohmann, Amy Rose, John Saller, Sheila Sconiers, Ann Smelser, Penny Stahly, Izaak Wirszup,
Nancy Roesing
• David W. Beer
David Beer joined The Center for Elementary Mathematics and Science Education (CEMSE) as ethnographic evaluator in 2003. His work at the center has included: providing strategic consulting
support to the New York City Department of Education as it implements Everyday Mathematics; developing a survey group that has conducted numerous studies of mathematics teaching, learning,
professional development, and curriculum use in the U.S., in Chicago, and in New York City; developing a video library of Everyday Mathematics lessons in urban classrooms; co-leading the Early
Childhood team revising and authoring the 3rd Edition of Everyday Mathematics for Kindergarten, and directing the field test of the 3rd Edition of Everyday Mathematics for Pre-Kindergarten.
Prior to joining CEMSE, Beer was Research Assistant Professor in the Department of Occupational Therapy at the University of Illinois at Chicago (UIC). He taught occupational and physical therapy
students, trained graduate students in qualitative research, and evaluated human service, early childhood, and elementary school interventions. He also published articles and chapters on
evaluation, the experience of illness and disability, and qualitative research methods.
From 1984 until 1991, Beer was research associate at the Erikson Institute, where he taught, conducted various evaluation research projects concerning pre-primary and primary school education and
human service delivery, and helped author What Children Can Tell Us (Garbarino et al, 1987). Beer was trained as a cultural anthropologist at the University of Chicago.
• Kathryn B. Chval
Kathryn B. Chval is an Assistant Professor and Co-Director of the Missouri Center for Mathematics and Science Teacher Education at the University of Missouri-Columbia. Dr. Chval is also a
Co-Principal Investigator for the Center for the Study of Mathematics Curriculum and the Researching Science and Mathematics Teacher Learning in Alternative Certification Models Project which are
both funded by the National Science Foundation.
Prior to joining University of Missouri, Dr. Chval was the Acting Section Head for the Teacher Professional Continuum Program in the Division of Elementary, Secondary and Informal Science
Division at the National Science Foundation. She also spent fourteen years at the University of Illinois at Chicago managing NSF-funded projects.
Dr. Chval's research interests include (1) effective preparation models and support structures for teachers across the professional continuum, (2) effective elementary teaching of underserved
populations, especially English language learners, and (3) curriculum standards and policies.
• Serena Hohmann
Serena Hohmann received a B.A. in Spanish and International Studies in 2003 from Washington University in St. Louis and an M.A. in Latin American and Caribbean Studies from University of Chicago
in 2006. Throughout her undergraduate and graduate studies, Ms. Hohmann studied and worked in both Spain and Mexico.
From 2005 to 2006, she served as an editorial assistant for the third edition's language diversity team, focusing primarily on the Everyday Mathematics Differentiation Handbooks. In this
position, she observed bilingual classrooms, conducted extensive research on language acquisition, and consulted with several bilingual educators and specialists to determine how to best adapt
the curriculum for English Language Learners.
After completing her graduate work in 2006, Ms. Hohmann was selected for a Presidential Management Fellowship (PMF) at the U.S. Department of State, Office for Analysis of Inter-American Affairs.
She currently serves as a Foreign Affairs Analyst, providing policy-makers in-depth analyses of Mexico and Canada.
Grade 1
Jean Bell, Max Bell, John Bretzlauf, Amy Dillard, Robert Hartfield, Andy Isaacs, James McBride, Rachel Malpass McCall, Kathleen Pitvorec, Peter Saecker
† First Edition only
Technical Art
Teachers in Residence
Jeanine O'Nan Brownell, Andrea Cocke, Brooke A. North
Mathematics and Technology Advisor
UCSMP Editorial
Rossita Fernando, Lila K. Schwartz
3rd Edition ELL Consultant
Teacher-in-Residence for the Assessment Handbook
Assistant for the Differentiation Handbook
Regina Littleton, Kriszta Miner, Allison Greer, Meg Schleppenbach, Cynthia Annorh, Amy DeLong, Debra Fields, Jenny Fischer, Nancy Glinka, Serena Hohmann, Robert Balfanz, Judith Busse, Mary Ellen
Dairyko, Lynn Evans, James Flanders, Dorothy Freedman, Nancy Guile Goodsell, Pam Guastafeste, Nancy Hanvey, Murray Hozinsky, Deborah Arron Leslie, Sue Lindsley, Mariana Mardrus, Carol Montag,
Elizabeth Moore, Kate Morrison, William D. Pattison, Joan Pederson, Erenda Penix, June Ploen, Herb Price, Danette Riehle, Ellen Ryan, Marie Schilling, Shelia Sconiers, Susan Sherrill, Patricia
Smith, Kimberli Sorg, Robert Strang, Jaronda Strong, Kevin Sweeney, Sally Vongsathorn, Esther Weiss, Francine Williams, Michael J. Wilson, Izaak Wirzup
• Amy Dillard
Amy L. Dillard received a Bachelor of Arts degree in Elementary Education from Boston College in Chestnut Hill, MA. She taught elementary school for four years at Hoffman School in Glenview,
Illinois. In 1994, she earned a Master of Arts degree in Mathematics Education from DePaul University in Chicago, Illinois.
Ms. Dillard worked as a Teacher-in-Residence with the University of Chicago School Mathematics Project (UCSMP) from 1994 to 1997. She was involved in the development of the commercial publication
of the first edition of Fourth Grade Everyday Mathematics, as well as the field testing and commercial publication of the first editions of Fifth Grade Everyday Mathematics and Sixth Grade
Everyday Mathematics. Ms. Dillard worked from 1997 to 2001 as one of the authors of the second editions of Everyday Mathematics K-6.
In 2002 she began work for the UCSMP Everyday Mathematics Center. The NSF-funded center was established to support educators, parents and students who are using, or will soon be using, Everyday
Mathematics. Since 2003 and currently, Ms. Dillard serves as the Associate Director of Everyday Mathematics, third edition.
• Andrew Isaacs
Andy Isaacs received a BA in classical Greek from Northwestern University in 1974, an MST in elementary education from the University of Chicago in 1977, an MS in mathematics from the University
of Illinois at Chicago (UIC) in 1987, and a DA in mathematics (with concentrations in abstract algebra and theoretical computer science) from UIC in 1994. Philip Wagreich directed Isaacs's
dissertation, "Whole number concepts and operations in grades 1 and 2: Curriculum and rationale."
From 1977 to 1985, Isaacs taught fourth and fifth grades in Chicago-area public schools. In 1985, he joined the Department of Mathematics, Statistics, and Computer Science at UIC as a lecturer in
mathematics education. Beginning in 1986, Isaacs worked closely with Wagreich and Howard Goldberg on the NSF-funded Teaching Integrated Mathematics and Science Project (TIMS). In 1989 and 1990,
he worked with Wagreich and David Page on UIC's Maneuvers with Mathematics Project, another NSF-funded curriculum development effort. From 1990 to 1995, he was a full time writer for Math
Trailblazers, a comprehensive curriculum for grades K-5 based on TIMS and funded by NSF.
In 1995, Isaacs joined the University of Chicago School Mathematics Project to work on the Bridges to Classroom Mathematics Project, which was directed by Sheila Sconiers. Isaacs was an author on
the second edition of Everyday Mathematics, published in 2000 and 2001, and most recently, he directed revisions that led to a third edition of Everyday Mathematics in 2007. He is Co-Director of
the University's Center for Elementary Mathematics and Science Education, and a Senior Research Associate in the University's Physical Sciences Division.
• James McBride
James McBride is a Senior Research Associate in the Division of Physical Sciences at the University of Chicago, where he is Co-Director of the Center for Elementary Mathematics and Science
Education (CEMSE). Within CEMSE, McBride is director of sampling and statistical analysis for all survey research, including implementation studies, teacher surveys, and studies of student
achievement. And for the past fourteen years, he has been a principal author for all three editions of the Everyday Mathematics (EM) program.
From 1980 to 1992, McBride was Senior Research Director and Senior Mathematical Statistician at Response Analysis Corporation. His work focused on the areas of sample design, estimation
procedures, modeling, imputation, and statistical analysis. Assignments covered the gamut of survey research applications for both complex national surveys and smaller, special population
surveys. He is experienced in directing the data collection and budget monitoring activities associated with national population surveys.
From 1975 to 1980, McBride was an Assistant Professor in the Department of Statistics at Princeton University. He directed the undergraduate program in statistics, and taught graduate and
undergraduate courses in inference, probability, multivariate analysis, experimental design, demography, time series, econometrics, statistical computing and simulation, and exploratory data
analysis. He taught courses at the Woodrow Wilson School of Princeton University and the University of Chicago, and has been a Visiting Assistant Professor in the Department of Mathematics at
Cornell University. He holds a grades 6-12 teaching certificate and a B.S. in mathematics and physics, an MAT in mathematics, and a PhD in Statistics, all from the University of Chicago.
• Rachel Malpass McCall
Rachel Malpass McCall has been with the University of Chicago's Center for Elementary Mathematics and K-12 Science Education (CEMSE) for over four years. She is the lead author for the Grade 1
teacher and student materials for the Third Edition of Everyday Mathematics. Most recently, Ms. McCall has been focusing her efforts on updates to the third edition of Everyday Mathematics,
including the state editions and the forthcoming copyright update, and planning for future editions of Everyday Mathematics.
Before coming to the University of Chicago, Ms. McCall worked for three years as an editor for Pearson Education. She spent two years in the mathematics department as the primary editor for the
2004 edition of Scott Foresman-Addison Wesley Mathematics for First Grade. During that time McCall also collaborated with the team at TERC in updating Investigations in Number, Data, and Space.
In her last year with the publisher, Ms. McCall participated in the planning and initial writing of the 2006 edition of Scott Foresman Science for Grades 1 and 2.
McCall spent 7 years teaching elementary school in Nashville, Tennessee; Atlanta, Georgia; and Berwyn, Illinois prior to her time at Pearson Education. McCall holds an Early Childhood teaching
certificate, undergraduate degrees in Mathematics and Early Childhood Education, and a Masters degree in Reading and Learning Disabilities.
• Kathleen Pitvorec
Kathleen Pitvorec received a B.A. in Anthropology in 1987 and an M.S.T. in 2001 from the University of Chicago. Ms. Pitvorec began her career in education with eight years as an elementary school
teacher in both public and private schools. She left the classroom to become part of the author team of Everyday Mathematics, joining the team in late 1995 as a Teacher-in-Residence, and becoming
one of the authors for second and third editions.
In addition to participating in the writing of the materials, as one of the second-edition authors, she coordinated the preliminary information-gathering from teachers about first-edition
materials and the field test of the revised materials. From 2000 to 2004, Ms. Pitvorec served as the Associate Director of the Implementation Center at the University of Chicago School
Mathematics Project (UCSMP). In this position, she developed and implemented workshops for inservice teachers, teacher-leaders, and administrators at local and national levels.
From 2005 to 2007, Ms. Pitvorec was a third edition author of Everyday Mathematics. She oversaw the development, field testing, and revision of open-response assessment items for Grades 1-6. She
is a co-author on the grade-level specific third edition Differentiation Handbooks and Assessment Handbooks included with the program materials.
In 2006 and 2007, Ms. Pitvorec worked on an NSF-funded research project at the University of Illinois investigating teachers' implementations of elementary school standards-based mathematics
lessons. In 2008, she worked on an NSF-funded research project at the University of Illinois researching assessment tools in Everyday Mathematics. She is currently a doctoral student in the
Learning Sciences program at the University of Illinois doing her dissertation research on the training of preservice elementary school mathematics teachers.
• Peter Saecker
Peter Saecker received a B.A. in Mathematics from Lawrence University in 1959 and an M.A. in Mathematics as a Woodrow Wilson Fellow from UCLA in 1960. After a year of graduate study in
mathematics at Northwestern in 1961, Peter Saecker joined Science Research Associates as a mathematics editor. For the next 31 years, he wrote, edited, and managed a variety of educational
materials at SRA, including elementary and secondary math textbooks and software.
In the early 1990s he helped write the grant proposal that secured NSF funding for the development of Everyday Mathematics Grades 4-6. Saecker joined the team full time in 1992 and continued to
work on Everyday Mathematics through the completion of the second edition in 2001. He died in the summer of 2001, as the second edition was going into print.
• Soundarya Radhakrishnan
Soundarya Radhakrishnan has a bachelor's degree in chemical engineering. She received her graduate degree in education from Northwestern University in 2001. Ms. Radhakrishnan was a Chicago Public
School teacher at Gray Elementary from 2001-2003. She worked as a Math Specialist with the Chicago Public School system advising elementary math teachers of five Area 1 schools in 2003-2004. Her
responsibilities included providing professional development and teacher training through workshops as well as co-teaching and modeling lessons in K-6 classrooms using Everyday Mathematics.
In 2004-2005, she was part of developing the Everyday Mathematics open response assessment section of the third edition of the Assessment Handbook at the University of Chicago. This included
creating and developing open response problems for grades 1-6, field-testing these problems in classrooms at two Chicago schools, and developing rubrics for analyzing student work.
From 2005-2008, she worked as an education consultant for the Everyday Mathematics curriculum that involved training teachers both in public and private Schools. She is currently working as a
Math Facilitator in the Office of Math and Science for the Chicago Public Schools.
• Mary Ellen Dairyko
Associate Director, Everyday Mathematics Development
Ellen Dairyko is a senior curriculum developer and Associate Director, Everyday Mathematics Development at the University of Chicago’s Center for Elementary Mathematics and Science Education
(CEMSE). She is one of the authors of the third edition, and the Common Core State Standards edition of Grade 3 Everyday Mathematics. Additionally, she co-authored the Everyday Mathematics My
Reference Book. She has developed and implemented professional development training for teachers, teacher leaders, and administrators in Chicago and elsewhere. Dairyko taught in Kindergarten
through eighth grade special education settings and in early childhood general education settings in Chicago public schools. She holds a master’s degree in curriculum and instruction from
National-Louis University and a bachelor’s degree in psychology from Mundelein College.
Grade 2
Max Bell, Jean Bell, John Bretzlauf, Amy Dillard, Robert Hartfield, Andy Isaacs, James McBride, Cheryl G. Moran, Kathleen Pitvorec, Peter Saecker
Technical Art
Teachers in Residence
Kathleen Clark, Patti Satz
Mathematics and Technology Advisor
UCSMP Editorial
John Wray, Don Reneau
3rd Edition ELL Consultant
Teacher-in-Residence for the Assessment Handbook
Assistant for the Differentiation Handbook
Regina Littleton, Kriszta Miner, Mikhail Guzowski, Catherine Ann Gesell, Serena Hohmann, Lisa Christine Munson, Kathleen Marie Pina, Gabriel Sheridan, Librada Acosta, Carol Arkin, Robert Balfanz,
Sharlean Brooks, Jean Callahan, Anne Coglianese, Mary Ellen Dairyko, Tresea Felder, Dorothy Freedman, Rita Gronbach, Deborah Arron Leslie, William D. Pattison, LaDonna Pitts, Danette Riehle,
Marie Schilling, Sheila Sconiers, Kathleen Snook, Robert Strang, Sadako Tengan, Therese Wasik, Leeann Wille, Michael Wilson
• Cheryl Moran
As the Associate Director for Direct Services, Cheryl Moran has developed and implemented workshops for in-service teachers, teacher leaders, and administrators at both the local and national
levels. She currently is writing after-school materials aligned to the Everyday Mathematics curriculum, coordinating in-school support for the Chicago Public Schools Restructuring Schools Support
Project and supporting mathematics instruction at a Chicago Public School.
Ms. Moran is one of the third edition authors for Everyday Mathematics, Grade 2 and Everyday Mathematics My Reference Book. She has a Bachelor's degree in psychology and a Master's degree in Teaching. She began her career with eight years as an elementary school teacher in both public and private schools.
• Kathleen Pitvorec
Kathleen Pitvorec received a B.A. in Anthropology in 1987 and an M.S.T. in 2001 from the University of Chicago. Ms. Pitvorec began her career in education with eight years as an elementary school
teacher in both public and private schools. She left the classroom to become part of the author team of Everyday Mathematics, joining the team in late 1995 as a Teacher-in-Residence, and becoming
one of the authors for second and third editions.
In addition to participating in the writing of the materials, as one of the second-edition authors, she coordinated the preliminary information-gathering from teachers about first-edition
materials and the field test of the revised materials. From 2000 to 2004, Ms. Pitvorec served as the Associate Director of the Implementation Center at the University of Chicago School
Mathematics Project (UCSMP). In this position, she developed and implemented workshops for inservice teachers, teacher-leaders, and administrators at local and national levels.
From 2005 to 2007, Ms. Pitvorec was a third edition author of Everyday Mathematics. She oversaw the development, field testing, and revision of open-response assessment items for Grades 1-6. She
is a co-author on the grade-level specific third edition Differentiation Handbooks and Assessment Handbooks included with the program materials.
In 2006 and 2007, Ms. Pitvorec worked on an NSF-funded research project at the University of Illinois investigating teachers' implementations of elementary school standards-based mathematics
lessons. In 2008, she worked on an NSF-funded research project at the University of Illinois researching assessment tools in Everyday Mathematics. She is currently a doctoral student in the
Learning Sciences program at the University of Illinois doing her dissertation research on the training of preservice elementary school mathematics teachers.
• Peter Saecker
Peter Saecker received a B.A. in Mathematics from Lawrence University in 1959 and an M.A. in Mathematics as a Woodrow Wilson Fellow from UCLA in 1960. After a year of graduate study in
mathematics at Northwestern in 1961, Peter Saecker joined Science Research Associates as a mathematics editor. For the next 31 years, he wrote, edited, and managed a variety of educational
materials at SRA, including elementary and secondary math textbooks and software.
In the early 1990s he helped write the grant proposal that secured NSF funding for the development of Everyday Mathematics Grades 4-6. Saecker joined the team full time in 1992 and continued to
work on Everyday Mathematics through the completion of the second edition in 2001. He died in the summer of 2001, as the second edition was going into print.
• Diana Barrie
Diana Barrie received her BFA and MFA from the Art Institute of Chicago. She taught photography and film-making at the University of Northern Iowa and the University of Wisconsin at Milwaukee.
After working in the reprographics industry in New York City and Chicago, she came to UCSMP in 1990, where she has performed various duties, including creating technical illustrations for all
grade levels of Everyday Mathematics. She has also created illustrations for Science Companion and the Center for Urban School Improvement's STEP Literacy Assessments.
• Jim Flanders
Jim Flanders is a researcher at the Center for Elementary Mathematics and Science Education (CEMSE) at the University of Chicago where his focus is on integrating technology into the mathematics
curriculum. He is a contributing author of several University of Chicago School Mathematics Project (UCSMP) books including Everyday Mathematics, Transition Mathematics, Algebra, Advanced
Algebra, and Functions, Statistics and Trigonometry.
Prior to joining CEMSE he was a member of the Chicago Science Group helping develop the field testing of the Science Companion and evaluating software for elementary school mathematics. He also
wrote calculator software for the Core-Plus Mathematics Project, and was a consultant to the Everyday Learning Corporation and the Louisiana Systemic Initiative Project.
Flanders has been an assistant professor in the department of mathematics and statistics at Western Michigan University where he had NSF support to develop a course for preservice mathematics
teachers on integrating technology into secondary school mathematics. He has also been an instructor of mathematics at the Colorado College and an academic dean and mathematics department chair
at The Colorado Springs School. He has a B.A. in mathematics from Colorado College and a Ph.D. in mathematics education from the University of Chicago.
• Kathryn B. Chval
Kathryn B. Chval is an Assistant Professor and Co-Director of the Missouri Center for Mathematics and Science Teacher Education at the University of Missouri-Columbia. Dr. Chval is also a
Co-Principal Investigator for the Center for the Study of Mathematics Curriculum and the Researching Science and Mathematics Teacher Learning in Alternative Certification Models Project which are
both funded by the National Science Foundation.
Prior to joining the University of Missouri, Dr. Chval was the Acting Section Head for the Teacher Professional Continuum Program in the Division of Elementary, Secondary and Informal Science at the National Science Foundation. She also spent fourteen years at the University of Illinois at Chicago managing NSF-funded projects.
Dr. Chval's research interests include (1) effective preparation models and support structures for teachers across the professional continuum, (2) effective elementary teaching of underserved
populations, especially English language learners, and (3) curriculum standards and policies.
• Soundarya Radhakrishnan
Soundarya Radhakrishnan has a bachelor's degree in chemical engineering. She received her graduate degree in education from Northwestern University in 2001. Ms. Radhakrishnan was a Chicago Public
School teacher at Gray Elementary from 2001-2003. She worked as a Math Specialist with the Chicago Public School system advising elementary math teachers of five Area 1 schools in 2003-2004. Her
responsibilities included providing professional development and teacher training through workshops as well as co-teaching and modeling lessons in K-6 classrooms using Everyday Mathematics.
In 2004-2005, she was part of developing the Everyday Mathematics open response assessment section of the third edition of the Assessment Handbook at the University of Chicago. This included
creating and developing open response problems for grades 1-6, field-testing these problems in classrooms at two Chicago schools, and developing rubrics for analyzing student work.
From 2005 to 2008, she worked as an education consultant for the Everyday Mathematics curriculum, training teachers in both public and private schools. She is currently working as a
Math Facilitator in the Office of Math and Science for the Chicago Public Schools.
• Serena Hohmann
Serena Hohmann received a B.A. in Spanish and International Studies in 2003 from Washington University in St. Louis and an M.A. in Latin American and Caribbean Studies from the University of Chicago
in 2006. Throughout her undergraduate and graduate studies, Ms. Hohmann studied and worked in both Spain and Mexico.
From 2005 to 2006, she served as an editorial assistant for the third edition's language diversity team, focusing primarily on the Everyday Mathematics Differentiation Handbooks. In this
position, she observed bilingual classrooms, conducted extensive research on language acquisition, and consulted with several bilingual educators and specialists to determine how to best adapt
the curriculum for English Language Learners.
After completing her graduate work in 2006, Ms. Hohmann was selected for a Presidential Management Fellowship (PMF) at the U.S. Department of State, Office for Analysis of Inter-American Affairs.
She currently serves as a Foreign Affairs Analyst, providing policy-makers with in-depth analyses of Mexico and Canada.
• Mary Ellen Dairyko
Associate Director, Everyday Mathematics Development
Ellen Dairyko is a senior curriculum developer and Associate Director, Everyday Mathematics Development at the University of Chicago’s Center for Elementary Mathematics and Science Education
(CEMSE). She is one of the authors of the third edition, and the Common Core State Standards edition of Grade 3 Everyday Mathematics. Additionally, she co-authored the Everyday Mathematics My
Reference Book. She has developed and implemented professional development training for teachers, teacher leaders, and administrators in Chicago and elsewhere. Dairyko taught in Kindergarten
through eighth grade special education settings and in early childhood general education settings in Chicago public schools. She holds a master’s degree in curriculum and instruction from
National-Louis University and a bachelor’s degree in psychology from Mundelein College.
• Deborah Arron Leslie
Debbie Leslie is a senior curriculum developer, early childhood specialist, and Director of Science Companion Projects at the University of Chicago's Center for Elementary Mathematics and Science
Education (CEMSE). She provides professional development, coaching, and consultation to support the implementation of Everyday Mathematics and inquiry-based science curricula in the University of
Chicago charter schools, the Chicago Public Schools, and elsewhere.
Ms. Leslie is also the Early Childhood Team Leader for the elementary component of UCSMP. In this capacity, she led the author team that developed the new version of Pre-Kindergarten Everyday
Mathematics and the team that worked on the 3rd edition revisions for Kindergarten Everyday Mathematics. Leslie is also one of the authors of Science Companion, an inquiry-based elementary school
science curriculum.
At CEMSE, Leslie works on several projects that draw on her background in science and her interest in high-quality professional development in the areas of math, science, and early childhood
education. Leslie taught Pre-Kindergarten, Kindergarten, and First Grade for 10 years in Connecticut and in the Chicago area. She has also done work for the Bush Center for Child Development and
Social Policy at Yale University, the Field Museum, and the Rochelle Lee Fund in Chicago, Illinois, and has done presentations, consulting, and professional development for several other
educational organizations. She holds a Bachelor's Degree in Molecular Biochemistry and Biophysics from Yale University and a Master's Degree in Teaching from the University of Chicago.
Grade 3
Max Bell, Jean Bell, John Bretzlauf, Mary Ellen Dairyko, Amy Dillard, Robert Hartfield, Andy Isaacs, James McBride, Kathleen Pitvorec, Peter Saecker
Technical Art
Teachers in Residence
Lisa Bernstein, Carole Skalinder
Mathematics and Technology Advisor
UCSMP Editorial
Jamie Montague Callister, Don Reneau
3rd Edition ELL Consultant
Teacher-in-Residence for the Assessment Handbook
Assistant for the Differentiation Handbook
Regina Littleton, Kriszta Miner, Carol Arkin, Robert Balfanz, Sharlean Brooks, Mary Dominguez, David Garcia, Rita Gronbach, Mikhail Guzowski, Serena Hohmann, Carla LaRochelle, Deborah Arron
Leslie, Curtis Lieneck, Diana Marino, William D. Pattison, William Salvato, Rebecca A. Schneider, Sheila Sconiers, Sandra Siebert, Kathleen Snook, David B. Spangler, Jean Marie Sweigart, Carolyn
Wais, Leeann Wille
• Max Bell
Max Bell is Professor Emeritus, Department of Education and the Physical Sciences Division at the University of Chicago and is affiliated with the University of Chicago Center for Mathematics and
Science Education (CEMSE). He is one of the founding authors of Everyday Mathematics (EM), and part of the Author panel that has responsibility for general oversight of the Publishing Agreement
for Everyday Mathematics and of the Center Agreement that established CEMSE.
Bell shifted in 1960 from teaching high school students to teaching teachers in the then-new MAT program at the University of Chicago. (He had earlier been in an influential NSF funded Academic
Year Institute for mathematics teachers conducted by the UC mathematics department.) He spent a decade as MAT Mathematics Coordinator while also working with UC people, SMSG, and other
organizations on reform-oriented secondary school mathematics materials. But as it became very clear that many children (and nearly all Chicago inner city children) entered secondary school with
little understanding of mathematics, Bell shifted his attention to elementary school mathematics instruction and teacher preparation.
Bell's widely reprinted 1973 article ("What does 'everyman' really need from school mathematics?") set an ambitious content agenda that anticipated the 1989 NCTM "Standards." Structured
interviews of several hundred five to nine year old children clearly showed that their mathematics learning capabilities were much greater than had been supposed. At the same time, textbook
analyses and interviews with teachers revealed an essentially vacuous primary school mathematics curriculum. With those foundations established by 1985, Bell joined with others in the University
of Chicago School Mathematics Project (UCSMP) in research, development, field testing, and widespread dissemination of the Everyday Mathematics curriculum for grades K-6.
Bell continues his interest in improvement of elementary school science and mathematics teaching, now focused on maximizing the potential for "educative curricula" (from which teachers learn as
they teach) to attack well known problems of scale in helping in-service teachers better understand and teach science and mathematics. Also, Bell and CEMSE colleagues are just beginning
conceptualization and specifications for a coherent "web based curriculum" for any individual who for any reason wishes to learn mathematics, from basic counting and arithmetic through data
analysis, algebra, or geometry.
• Jean Bell
Jean Bell is one of the founding authors of Everyday Mathematics and part of the Author panel that has responsibility for general oversight of the Publishing Agreement for Everyday Mathematics
and the Center Agreement that established the Center for Mathematics and Science Education (CEMSE) at the University of Chicago.
Early in her career, Jean Bell was a microbiologist in the California State Public Health Laboratories and in the laboratory of Dr. Marc Beem at the University of Chicago. In a career change after some years of full-time parenting, Bell began teaching pre-school children and then, with an MST degree from the University of Chicago, became a primary school teacher.
In the early 1980's Bell did interview-based research on the mathematics capabilities of young children that showed very clearly that these capabilities had been seriously underestimated. This
led her to work with other founding authors within the University of Chicago School Mathematics Project (UCSMP) on development of the grades K-6 Everyday Mathematics curriculum.
On completion of the First Edition of Everyday Mathematics she returned to her science roots, founding with others the Chicago Science Group to develop the K-5 Science Companion curriculum. That
curriculum, distributed by Pearson Scott Foresman, seeks to be inquiry based, relatively easy to implement, firmly rooted in "reform" oriented national standards, and easily integrated with such
programs as Everyday Mathematics.
• Mary Ellen Dairyko
Associate Director, Everyday Mathematics Development
Ellen Dairyko is a senior curriculum developer and Associate Director, Everyday Mathematics Development at the University of Chicago’s Center for Elementary Mathematics and Science Education
(CEMSE). She is one of the authors of the third edition, and the Common Core State Standards edition of Grade 3 Everyday Mathematics. Additionally, she co-authored the Everyday Mathematics My
Reference Book. She has developed and implemented professional development training for teachers, teacher leaders, and administrators in Chicago and elsewhere. Dairyko taught in Kindergarten
through eighth grade special education settings and in early childhood general education settings in Chicago public schools. She holds a master’s degree in curriculum and instruction from
National-Louis University and a bachelor’s degree in psychology from Mundelein College.
• Amy Dillard
Amy L. Dillard received a Bachelor of Arts degree in Elementary Education from Boston College in Chestnut Hill, MA. She taught elementary school for four years at Hoffman School in Glenview,
Illinois. In 1994, she earned a Master of Arts degree in Mathematics Education from DePaul University in Chicago, Illinois.
Ms. Dillard worked as a Teacher-in-Residence with the University of Chicago School Mathematics Project (UCSMP) from 1994 to 1997. She was involved in the development of the commercial publication
of the first edition of Fourth Grade Everyday Mathematics, as well as the field testing and commercial publication of the first editions of Fifth Grade Everyday Mathematics and Sixth Grade
Everyday Mathematics. Ms. Dillard worked from 1997 to 2001 as one of the authors of the second editions of Everyday Mathematics K-6.
In 2002 she began work for the UCSMP Everyday Mathematics Center. The NSF-funded center was established to support educators, parents and students who are using, or will soon be using, Everyday
Mathematics. Since 2003, Ms. Dillard has served as the Associate Director of Everyday Mathematics, third edition.
• Andrew Isaacs
Andy Isaacs received a BA in classical Greek from Northwestern University in 1974, an MST in elementary education from the University of Chicago in 1977, an MS in mathematics from the University
of Illinois at Chicago (UIC) in 1987, and a DA in mathematics (with concentrations in abstract algebra and theoretical computer science) from UIC in 1994. Philip Wagreich directed Isaacs's
dissertation, "Whole number concepts and operations in grades 1 and 2: Curriculum and rationale."
From 1977 to 1985, Isaacs taught fourth and fifth grades in Chicago-area public schools. In 1985, he joined the Department of Mathematics, Statistics, and Computer Science at UIC as a lecturer in
mathematics education. Beginning in 1986, Isaacs worked closely with Wagreich and Howard Goldberg on the NSF-funded Teaching Integrated Mathematics and Science Project (TIMS). In 1989 and 1990,
he worked with Wagreich and David Page on UIC's Maneuvers with Mathematics Project, another NSF-funded curriculum development effort. From 1990 to 1995, he was a full time writer for Math
Trailblazers, a comprehensive curriculum for grades K-5 based on TIMS and funded by NSF.
In 1995, Isaacs joined the University of Chicago School Mathematics Project to work on the Bridges to Classroom Mathematics Project, which was directed by Sheila Sconiers. Isaacs was an author on
the second edition of Everyday Mathematics, published in 2000 and 2001, and most recently, he directed revisions that led to a third edition of Everyday Mathematics in 2007. He is Co-Director of
the University's Center for Elementary Mathematics and Science Education, and a Senior Research Associate in the University's Physical Sciences Division.
• James McBride
James McBride is a Senior Research Associate in the Division of Physical Sciences at the University of Chicago, where he is Co-Director of the Center for Elementary Mathematics and Science
Education (CEMSE). Within CEMSE, McBride is director of sampling and statistical analysis for all survey research, including implementation studies, teacher surveys, and studies of student
achievement. And for the past fourteen years, he has been a principal author for all three editions of the Everyday Mathematics (EM) program.
From 1980 to 1992, McBride was Senior Research Director and Senior Mathematical Statistician at Response Analysis Corporation. His work focused on the areas of sample design, estimation
procedures, modeling, imputation, and statistical analysis. Assignments covered the gamut of survey research applications for both complex national surveys and smaller, special population
surveys. He is experienced in directing the data collection and budget monitoring activities associated with national population surveys.
From 1975 to 1980, McBride was an Assistant Professor in the Department of Statistics at Princeton University. He directed the undergraduate program in statistics, and taught graduate and
undergraduate courses in inference, probability, multivariate analysis, experimental design, demography, time series, econometrics, statistical computing and simulation, and exploratory data
analysis. He taught courses at the Woodrow Wilson School of Princeton University and the University of Chicago, and has been a Visiting Assistant Professor in the Department of Mathematics at
Cornell University. He holds a grades 6-12 teaching certificate and a B.S. in mathematics and physics, an MAT in mathematics, and a PhD in Statistics, all from the University of Chicago.
• Kathleen Pitvorec
Kathleen Pitvorec received a B.A. in Anthropology in 1987 and an M.S.T. in 2001 from the University of Chicago. Ms. Pitvorec began her career in education with eight years as an elementary school
teacher in both public and private schools. She left the classroom to become part of the author team of Everyday Mathematics, joining the team in late 1995 as a Teacher-in-Residence, and becoming
one of the authors for second and third editions.
In addition to participating in the writing of the materials, as one of the second-edition authors, she coordinated the preliminary information-gathering from teachers about first-edition
materials and the field test of the revised materials. From 2000 to 2004, Ms. Pitvorec served as the Associate Director of the Implementation Center at the University of Chicago School
Mathematics Project (UCSMP). In this position, she developed and implemented workshops for inservice teachers, teacher-leaders, and administrators at local and national levels.
From 2005 to 2007, Ms. Pitvorec was a third edition author of Everyday Mathematics. She oversaw the development, field testing, and revision of open-response assessment items for Grades 1-6. She
is a co-author on the grade-level specific third edition Differentiation Handbooks and Assessment Handbooks included with the program materials.
In 2006 and 2007, Ms. Pitvorec worked on an NSF-funded research project at the University of Illinois investigating teachers' implementations of elementary school standards-based mathematics
lessons. In 2008, she worked on an NSF-funded research project at the University of Illinois researching assessment tools in Everyday Mathematics. She is currently a doctoral student in the
Learning Sciences program at the University of Illinois doing her dissertation research on the training of preservice elementary school mathematics teachers.
• Peter Saecker
Peter Saecker received a B.A. in Mathematics from Lawrence University in 1959 and an M.A. in Mathematics as a Woodrow Wilson Fellow from UCLA in 1960. After a year of graduate study in
mathematics at Northwestern in 1961, Peter Saecker joined Science Research Associates as a mathematics editor. For the next 31 years, he wrote, edited, and managed a variety of educational
materials at SRA, including elementary and secondary math textbooks and software.
In the early 1990s he helped write the grant proposal that secured NSF funding for the development of Everyday Mathematics Grades 4-6. Saecker joined the team full time in 1992 and continued to
work on Everyday Mathematics through the completion of the second edition in 2001. He died in the summer of 2001, as the second edition was going into print.
• Diana Barrie
Diana Barrie received her BFA and MFA from the Art Institute of Chicago. She taught photography and film-making at the University of Northern Iowa and the University of Wisconsin at Milwaukee.
After working in the reprographics industry in New York City and Chicago, she came to UCSMP in 1990, where she has performed various duties, including creating technical illustrations for all
grade levels of Everyday Mathematics. She has also created illustrations for Science Companion and the Center for Urban School Improvement's STEP Literacy Assessments.
• Jim Flanders
Jim Flanders is a researcher at the Center for Elementary Mathematics and Science Education (CEMSE) at the University of Chicago where his focus is on integrating technology into the mathematics
curriculum. He is a contributing author of several University of Chicago School Mathematics Project (UCSMP) books including Everyday Mathematics, Transition Mathematics, Algebra, Advanced
Algebra, and Functions, Statistics and Trigonometry.
Prior to joining CEMSE he was a member of the Chicago Science Group helping develop the field testing of the Science Companion and evaluating software for elementary school mathematics. He also
wrote calculator software for the Core-Plus Mathematics Project, and was a consultant to the Everyday Learning Corporation and the Louisiana Systemic Initiative Project.
Flanders has been an assistant professor in the department of mathematics and statistics at Western Michigan University where he had NSF support to develop a course for preservice mathematics
teachers on integrating technology into secondary school mathematics. He has also been an instructor of mathematics at the Colorado College and an academic dean and mathematics department chair
at The Colorado Springs School. He has a B.A. in mathematics from Colorado College and a Ph.D. in mathematics education from the University of Chicago.
• Kathryn B. Chval
Kathryn B. Chval is an Assistant Professor and Co-Director of the Missouri Center for Mathematics and Science Teacher Education at the University of Missouri-Columbia. Dr. Chval is also a
Co-Principal Investigator for the Center for the Study of Mathematics Curriculum and the Researching Science and Mathematics Teacher Learning in Alternative Certification Models Project which are
both funded by the National Science Foundation.
Prior to joining the University of Missouri, Dr. Chval was the Acting Section Head for the Teacher Professional Continuum Program in the Division of Elementary, Secondary and Informal Science at the National Science Foundation. She also spent fourteen years at the University of Illinois at Chicago managing NSF-funded projects.
Dr. Chval's research interests include (1) effective preparation models and support structures for teachers across the professional continuum, (2) effective elementary teaching of underserved
populations, especially English language learners, and (3) curriculum standards and policies.
• Soundarya Radhakrishnan
Soundarya Radhakrishnan has a bachelor's degree in chemical engineering. She received her graduate degree in education from Northwestern University in 2001. Ms. Radhakrishnan was a Chicago Public
School teacher at Gray Elementary from 2001-2003. She worked as a Math Specialist with the Chicago Public School system advising elementary math teachers of five Area 1 schools in 2003-2004. Her
responsibilities included providing professional development and teacher training through workshops as well as co-teaching and modeling lessons in K-6 classrooms using Everyday Mathematics.
In 2004-2005, she was part of developing the Everyday Mathematics open response assessment section of the third edition of the Assessment Handbook at the University of Chicago. This included
creating and developing open response problems for grades 1-6, field-testing these problems in classrooms at two Chicago schools, and developing rubrics for analyzing student work.
From 2005 to 2008, she worked as an education consultant for the Everyday Mathematics curriculum, training teachers in both public and private schools. She is currently working as a
Math Facilitator in the Office of Math and Science for the Chicago Public Schools.
• Serena Hohmann
Serena Hohmann received a B.A. in Spanish and International Studies in 2003 from Washington University in St. Louis and an M.A. in Latin American and Caribbean Studies from the University of Chicago
in 2006. Throughout her undergraduate and graduate studies, Ms. Hohmann studied and worked in both Spain and Mexico.
From 2005 to 2006, she served as an editorial assistant for the third edition's language diversity team, focusing primarily on the Everyday Mathematics Differentiation Handbooks. In this
position, she observed bilingual classrooms, conducted extensive research on language acquisition, and consulted with several bilingual educators and specialists to determine how to best adapt
the curriculum for English Language Learners.
After completing her graduate work in 2006, Ms. Hohmann was selected for a Presidential Management Fellowship (PMF) at the U.S. Department of State, Office for Analysis of Inter-American Affairs.
She currently serves as a Foreign Affairs Analyst, providing policy-makers with in-depth analyses of Mexico and Canada.
• Deborah Arron Leslie
Debbie Leslie is a senior curriculum developer, early childhood specialist, and Director of Science Companion Projects at the University of Chicago's Center for Elementary Mathematics and Science
Education (CEMSE). She provides professional development, coaching, and consultation to support the implementation of Everyday Mathematics and inquiry-based science curricula in the University of
Chicago charter schools, the Chicago Public Schools, and elsewhere.
Ms. Leslie is also the Early Childhood Team Leader for the elementary component of UCSMP. In this capacity, she led the author team that developed the new version of Pre-Kindergarten Everyday
Mathematics and the team that worked on the 3rd edition revisions for Kindergarten Everyday Mathematics. Leslie is also one of the authors of Science Companion, an inquiry-based elementary school
science curriculum.
At CEMSE, Leslie works on several projects that draw on her background in science and her interest in high-quality professional development in the areas of math, science, and early childhood
education. Leslie taught Pre-Kindergarten, Kindergarten, and First Grade for 10 years in Connecticut and in the Chicago area. She has also done work for the Bush Center for Child Development and
Social Policy at Yale University, the Field Museum, and the Rochelle Lee Fund in Chicago, Illinois, and has done presentations, consulting, and professional development for several other
educational organizations. She holds a Bachelor's Degree in Molecular Biochemistry and Biophysics from Yale University and a Master's Degree in Teaching from the University of Chicago.
Grade 4
Max Bell, John Bretzlauf, Amy Dillard, Robert Hartfield, Andy Isaacs, Rebecca W. Maxcy†, James McBride, Kathleen Pitvorec, Peter Saecker, Robert Balfanz*, William Carroll*, Sheila Sconiers*
* First Edition only
† Common Core State Standards Edition only
Technical Art
Teachers in Residence
Carla L. La Rochelle, Rebecca W. Maxcy
Mathematics and Technology Advisor
UCSMP Editorial
Laurie K. Thrasher, Kathryn M. Rich
3rd Edition ELL Consultant
Teacher-in-Residence for the Assessment Handbook
Assistant for the Differentiation Handbook
Carla LaRochelle, Regina Littleton, Kriszta Miner, David B. Spangler, Deborah Karen Cohen, Maureen Dando, Joseph Dunlap, Serena Hohmann, Joanna Jolly, Carrie Kamm, Colleen Kelly, Sarah Elizabeth
Martinek, Claire Doremus Ruch, Laurel Serleth, Nancy Smith, Cynthia G. Somerville, Ingrid Stressenger, Martha Ayala, Virginia J. Bates, Randee Blair, Donna R. Clay, Vanessa Day, Jean Faszholz,
Patti Haney, Margaret Phillips Holm, Nancy Kay Hubert, Sybil Johnson, Judith Kiehm, Deborah Arron Leslie, Laura Ann Luczak, Mary O'Boyle, William D. Pattison, Beverly Pilchman, Denise Porter,
Judith Ann Robb, Mary Seymour, Laura A. Sunseri-Driscoll
• Max Bell
Max Bell is Professor Emeritus, Department of Education and the Physical Sciences Division at the University of Chicago and is affiliated with the University of Chicago Center for Mathematics and
Science Education (CEMSE). He is one of the founding authors of Everyday Mathematics (EM), and part of the Author panel that has responsibility for general oversight of the Publishing Agreement
for Everyday Mathematics and of the Center Agreement that established CEMSE.
Bell shifted in 1960 from teaching high school students to teaching teachers in the then-new MAT program at the University of Chicago. (He had earlier been in an influential NSF funded Academic
Year Institute for mathematics teachers conducted by the UC mathematics department.) He spent a decade as MAT Mathematics Coordinator while also working with UC people, SMSG, and other
organizations on reform-oriented secondary school mathematics materials. But as it became very clear that many children (and nearly all Chicago inner city children) entered secondary school with
little understanding of mathematics, Bell shifted his attention to elementary school mathematics instruction and teacher preparation.
Bell's widely reprinted 1973 article ("What does 'everyman' really need from school mathematics?") set an ambitious content agenda that anticipated the 1989 NCTM "Standards." Structured
interviews of several hundred five to nine year old children clearly showed that their mathematics learning capabilities were much greater than had been supposed. At the same time, textbook
analyses and interviews with teachers revealed an essentially vacuous primary school mathematics curriculum. With those foundations established by 1985, Bell joined with others in the University
of Chicago School Mathematics Project (UCSMP) in research, development, field testing, and widespread dissemination of the Everyday Mathematics curriculum for grades K-6.
Bell continues his interest in improvement of elementary school science and mathematics teaching, now focused on maximizing the potential for "educative curricula" (from which teachers learn as
they teach) to attack well known problems of scale in helping in-service teachers better understand and teach science and mathematics. Also, Bell and CEMSE colleagues are just beginning
conceptualization and specifications for a coherent "web based curriculum" for any individual who for any reason wishes to learn mathematics, from basic counting and arithmetic through data
analysis, algebra, or geometry.
• Amy Dillard
Amy L. Dillard received a Bachelor of Arts degree in Elementary Education from Boston College in Chestnut Hill, MA. She taught elementary school for four years at Hoffman School in Glenview,
Illinois. In 1994, she earned a Master of Arts degree in Mathematics Education from DePaul University in Chicago, Illinois.
Ms. Dillard worked as a Teacher-in-Residence with the University of Chicago School Mathematics Project (UCSMP) from 1994 to 1997. She was involved in the development of the commercial publication
of the first edition of Fourth Grade Everyday Mathematics, as well as the field testing and commercial publication of the first editions of Fifth Grade Everyday Mathematics and Sixth Grade
Everyday Mathematics. Ms. Dillard worked from 1997 to 2001 as one of the authors of the second editions of Everyday Mathematics K-6.
In 2002 she began work for the UCSMP Everyday Mathematics Center. The NSF-funded center was established to support educators, parents and students who are using, or will soon be using, Everyday
Mathematics. Since 2003, Ms. Dillard has served as the Associate Director of Everyday Mathematics, third edition.
• Andrew Isaacs
Andy Isaacs received a BA in classical Greek from Northwestern University in 1974, an MST in elementary education from the University of Chicago in 1977, an MS in mathematics from the University
of Illinois at Chicago (UIC) in 1987, and a DA in mathematics (with concentrations in abstract algebra and theoretical computer science) from UIC in 1994. Philip Wagreich directed Isaacs's
dissertation, "Whole number concepts and operations in grades 1 and 2: Curriculum and rationale."
From 1977 to 1985, Isaacs taught fourth and fifth grades in Chicago-area public schools. In 1985, he joined the Department of Mathematics, Statistics, and Computer Science at UIC as a lecturer in
mathematics education. Beginning in 1986, Isaacs worked closely with Wagreich and Howard Goldberg on the NSF-funded Teaching Integrated Mathematics and Science Project (TIMS). In 1989 and 1990,
he worked with Wagreich and David Page on UIC's Maneuvers with Mathematics Project, another NSF-funded curriculum development effort. From 1990 to 1995, he was a full time writer for Math
Trailblazers, a comprehensive curriculum for grades K-5 based on TIMS and funded by NSF.
In 1995, Isaacs joined the University of Chicago School Mathematics Project to work on the Bridges to Classroom Mathematics Project, which was directed by Sheila Sconiers. Isaacs was an author on
the second edition of Everyday Mathematics, published in 2000 and 2001, and most recently, he directed revisions that led to a third edition of Everyday Mathematics in 2007. He is Co-Director of
the University's Center for Elementary Mathematics and Science Education, and a Senior Research Associate in the University's Physical Sciences Division.
• James McBride
James McBride is a Senior Research Associate in the Division of Physical Sciences at the University of Chicago, where he is Co-Director of the Center for Elementary Mathematics and Science
Education (CEMSE). Within CEMSE, McBride is director of sampling and statistical analysis for all survey research, including implementation studies, teacher surveys, and studies of student
achievement. And for the past fourteen years, he has been a principal author for all three editions of the Everyday Mathematics (EM) program.
From 1980 to 1992, McBride was Senior Research Director and Senior Mathematical Statistician at Response Analysis Corporation. His work focused on the areas of sample design, estimation
procedures, modeling, imputation, and statistical analysis. Assignments covered the gamut of survey research applications for both complex national surveys and smaller, special population
surveys. He is experienced in directing the data collection and budget monitoring activities associated with national population surveys.
From 1975 to 1980, McBride was an Assistant Professor in the Department of Statistics at Princeton University. He directed the undergraduate program in statistics, and taught graduate and
undergraduate courses in inference, probability, multivariate analysis, experimental design, demography, time series, econometrics, statistical computing and simulation, and exploratory data
analysis. He taught courses at the Woodrow Wilson School of Princeton University and the University of Chicago, and has been a Visiting Assistant Professor in the Department of Mathematics at
Cornell University. He holds a grades 6-12 teaching certificate and a B.S. in mathematics and physics, an MAT in mathematics, and a PhD in Statistics, all from the University of Chicago.
• Kathleen Pitvorec
Kathleen Pitvorec received a B.A. in Anthropology in 1987 and an M.S.T. in 2001 from the University of Chicago. Ms. Pitvorec began her career in education with eight years as an elementary school
teacher in both public and private schools. She left the classroom to become part of the author team of Everyday Mathematics, joining the team in late 1995 as a Teacher-in-Residence, and becoming
one of the authors for second and third editions.
In addition to participating in the writing of the materials, as one of the second-edition authors, she coordinated the preliminary information-gathering from teachers about first-edition
materials and the field test of the revised materials. From 2000 to 2004, Ms. Pitvorec served as the Associate Director of the Implementation Center at the University of Chicago School
Mathematics Project (UCSMP). In this position, she developed and implemented workshops for inservice teachers, teacher-leaders, and administrators at local and national levels.
From 2005 to 2007, Ms. Pitvorec was a third edition author of Everyday Mathematics. She oversaw the development, field testing, and revision of open-response assessment items for Grades 1-6. She
is a co-author on the grade-level specific third edition Differentiation Handbooks and Assessment Handbooks included with the program materials.
In 2006 and 2007, Ms. Pitvorec worked on an NSF-funded research project at the University of Illinois investigating teachers' implementations of elementary school standards-based mathematics
lessons. In 2008, she worked on an NSF-funded research project at the University of Illinois researching assessment tools in Everyday Mathematics. She is currently a doctoral student in the
Learning Sciences program at the University of Illinois doing her dissertation research on the training of preservice elementary school mathematics teachers.
• Peter Saecker
Peter Saecker received a B.A. in Mathematics from Lawrence University in 1959 and an M.A. in Mathematics as a Woodrow Wilson Fellow from UCLA in 1960. After a year of graduate study in
mathematics at Northwestern in 1961, Peter Saecker joined Science Research Associates as a mathematics editor. For the next 31 years, he wrote, edited, and managed a variety of educational
materials at SRA, including elementary and secondary math textbooks and software.
In the early 1990s he helped write the grant proposal that secured NSF funding for the development of Everyday Mathematics Grades 4-6. Saecker joined the team full time in 1992 and continued to
work on Everyday Mathematics through the completion of the second edition in 2001. He died in the summer of 2001, as the second edition was going into print.
• Diana Barrie
Diana Barrie received her BFA and MFA from the Art Institute of Chicago. She taught photography and film-making at the University of Northern Iowa and the University of Wisconsin at Milwaukee.
After working in the reprographics industry in New York City and Chicago, she came to UCSMP in 1990, where she has performed various duties, including creating technical illustrations for all
grade levels of Everyday Mathematics. She has also created illustrations for Science Companion and the Center for Urban School Improvement's STEP Literacy Assessments.
• Rebecca Williams Maxcy
Senior Curriculum Developer
Rebecca W. Maxcy is a senior curriculum developer at the University of Chicago’s Center for Elementary Mathematics and Science Education (CEMSE). She is one of the authors of the CCSS edition of Grade 4 Everyday Mathematics. She worked as a teacher-in-residence on the Grade 4 Everyday Mathematics third edition and on various state-specific editions. Currently, Maxcy is planning future
development of Everyday Mathematics. She taught elementary school in Chicago public schools, Cape Elizabeth, Maine, and Poland Springs, Maine. Maxcy trained resident teachers at the Chicago
Academy, in addition to being adjunct faculty at National Louis University. Maxcy earned a master’s in elementary education from Lesley University in Cambridge, Massachusetts, and a bachelor of
arts degree from Bates College in Lewiston, Maine.
• Jim Flanders
Jim Flanders is a researcher at the Center for Elementary Mathematics and Science Education (CEMSE) at the University of Chicago where his focus is on integrating technology into the mathematics
curriculum. He is a contributing author of several University of Chicago School Mathematics Project (UCSMP) books including Everyday Mathematics, Transition Mathematics, Algebra, Advanced
Algebra, and Functions, Statistics and Trigonometry.
Prior to joining CEMSE he was a member of the Chicago Science Group helping develop the field testing of the Science Companion and evaluating software for elementary school mathematics. He also
wrote calculator software for the Core-Plus Mathematics Project, and was a consultant to the Everyday Learning Corporation and the Louisiana Systemic Initiative Project.
Flanders has been an assistant professor in the department of mathematics and statistics at Western Michigan University where he had NSF support to develop a course for preservice mathematics
teachers on integrating technology into secondary school mathematics. He has also been an instructor of mathematics at the Colorado College and an academic dean and mathematics department chair
at The Colorado Springs School. He has a B.A. in mathematics from Colorado College and a Ph.D. in mathematics education from the University of Chicago.
• Kathryn B. Chval
Kathryn B. Chval is an Assistant Professor and Co-Director of the Missouri Center for Mathematics and Science Teacher Education at the University of Missouri-Columbia. Dr. Chval is also a
Co-Principal Investigator for the Center for the Study of Mathematics Curriculum and the Researching Science and Mathematics Teacher Learning in Alternative Certification Models Project which are
both funded by the National Science Foundation.
Prior to joining the University of Missouri, Dr. Chval was the Acting Section Head for the Teacher Professional Continuum Program in the Division of Elementary, Secondary and Informal Science at the National Science Foundation. She also spent fourteen years at the University of Illinois at Chicago managing NSF-funded projects.
Dr. Chval's research interests include (1) effective preparation models and support structures for teachers across the professional continuum, (2) effective elementary teaching of underserved
populations, especially English language learners, and (3) curriculum standards and policies.
• Soundarya Radhakrishnan
Soundarya Radhakrishnan has a bachelor's degree in chemical engineering. She received her graduate degree in education from Northwestern University in 2001. Ms. Radhakrishnan was a Chicago Public
School teacher at Gray Elementary from 2001-2003. She worked as a Math Specialist with the Chicago Public School system advising elementary math teachers of five Area 1 schools in 2003-2004. Her
responsibilities included providing professional development and teacher training through workshops as well as co-teaching and modeling lessons in K-6 classrooms using Everyday Mathematics.
In 2004-2005, she was part of developing the Everyday Mathematics open response assessment section of the third edition of the Assessment Handbook at the University of Chicago. This included
creating and developing open response problems for grades 1-6, field-testing these problems in classrooms at two Chicago schools, and developing rubrics for analyzing student work.
From 2005 to 2008, she worked as an education consultant for the Everyday Mathematics curriculum, training teachers in both public and private schools. She is currently working as a
Math Facilitator in the Office of Math and Science for the Chicago Public Schools.
• Serena Hohmann
Serena Hohmann received a B.A. in Spanish and International Studies in 2003 from Washington University in St. Louis and an M.A. in Latin American and Caribbean Studies from the University of Chicago
in 2006. Throughout her undergraduate and graduate studies, Ms. Hohmann studied and worked in both Spain and Mexico.
From 2005 to 2006, she served as an editorial assistant for the third edition's language diversity team, focusing primarily on the Everyday Mathematics Differentiation Handbooks. In this
position, she observed bilingual classrooms, conducted extensive research on language acquisition, and consulted with several bilingual educators and specialists to determine how to best adapt
the curriculum for English Language Learners.
After completing her graduate work in 2006, Ms. Hohmann was selected for a Presidential Management Fellowship (PMF) at the U.S. Department of State, Office for Analysis of Inter-American Affairs.
She currently serves as a Foreign Affairs Analyst, providing policy-makers with in-depth analyses of Mexico and Canada.
• Deborah Arron Leslie
Debbie Leslie is a senior curriculum developer, early childhood specialist, and Director of Science Companion Projects at the University of Chicago's Center for Elementary Mathematics and Science
Education (CEMSE). She provides professional development, coaching, and consultation to support the implementation of Everyday Mathematics and inquiry-based science curricula in the University of
Chicago charter schools, the Chicago Public Schools, and elsewhere.
Ms. Leslie is also the Early Childhood Team Leader for the elementary component of UCSMP. In this capacity, she led the author team that developed the new version of Pre-Kindergarten Everyday
Mathematics and the team that worked on the 3rd edition revisions for Kindergarten Everyday Mathematics. Leslie is also one of the authors of Science Companion, an inquiry-based elementary school
science curriculum.
At CEMSE, Leslie works on several projects that draw on her background in science and her interest in high-quality professional development in the areas of math, science, and early childhood
education. Leslie taught Pre-Kindergarten, Kindergarten, and First Grade for 10 years in Connecticut and in the Chicago area. She has also done work for the Bush Center for Child Development and
Social Policy at Yale University, the Field Museum, and the Rochelle Lee Fund in Chicago, Illinois, and has done presentations, consulting, and professional development for several other
educational organizations. She holds a Bachelor's Degree in Molecular Biochemistry and Biophysics from Yale University and a Master's Degree in Teaching from the University of Chicago.
• Denise Porter
In 1984, Denise A. Porter received a Bachelor of Science degree in Elementary Education from Iowa State University in Ames, IA. She taught elementary school for nine years, with five of those years as a Mathematics Specialist. In 1990, Ms. Porter earned a Master of Education degree from the University of Houston.
Ms. Porter worked as a Teacher-in-Residence with the University of Chicago School Mathematics Project (UCSMP) from 1993-1996 and 2003 to present. She was involved in the development of the first
editions of Fourth, Fifth and Sixth Grade Everyday Mathematics and the third edition of Sixth Grade Everyday Mathematics, along with serving as author for various state editions.
Since 1993, Ms. Porter has provided professional development to administrators, teachers, and parents in a wide variety of school districts, regarding the implementation of Everyday Mathematics.
She is currently with the University of Chicago's Center for Elementary Mathematics and K-12 Science Education (CEMSE) as an Associate in Direct Services. She supports staff at restructured
Chicago Public Schools with mathematics instruction and develops and facilitates professional development.
Grade 5
Max Bell, John Bretzlauf, Amy Dillard, Robert Hartfield, Andy Isaacs, James McBride, Kathleen Pitvorec, Denise Porter‡, Peter Saecker, Noreen Winningham*, Robert Balfanz†, William Carroll†
† First Edition only
* Third Edition only
‡ Common Core State Standards Edition only
Technical Art
Teachers in Residence
Fran Goldenberg, Sandra Vitantonio
Mathematics and Technology Advisor
UCSMP Editorial
Rosina Busse, Laurie K. Thrasher, David B. Spangler
3rd Edition ELL Consultant
Teacher-in-Residence for the Assessment Handbook
Assistant for the Differentiation Handbook
Regina Littleton, Kriszta Miner, Sandra R. Overcash, Serena Hohmann, Sally S. Johnson, Colleen M. Kelly, Kimberley Dawn Sowa, Tracy Lynn Selock, Tammy Belgrade, Diana Carry, Debra Dawson, Kevin
Dorken, Laurel Hallman, Ann Hemwell, Elizabeth Homewood, Linda Klaric, Lee Kornhauser, Judy Korshak-Samuels, Deborah Arron Leslie, Joseph C. Liptak, Sharon McHugh, Janet M. Meyers, Susan Mieli,
Donna Nowatzki, Sheila Sconiers, Kevin J. Smith, Theresa Sparlin, Laura Sunseri, Kim Van Haitsma, John Wilson, Mary Wilson, Carl Zmola, Theresa Zmola
• Max Bell
Max Bell is Professor Emeritus, Department of Education and the Physical Sciences Division at the University of Chicago and is affiliated with the University of Chicago Center for Mathematics and
Science Education (CEMSE). He is one of the founding authors of Everyday Mathematics (EM), and part of the Author panel that has responsibility for general oversight of the Publishing Agreement
for Everyday Mathematics and of the Center Agreement that established CEMSE.
Bell shifted in 1960 from teaching high school students to teaching teachers in the then-new MAT program at the University of Chicago. (He had earlier been in an influential NSF funded Academic
Year Institute for mathematics teachers conducted by the UC mathematics department.) He spent a decade as MAT Mathematics Coordinator while also working with UC people, SMSG, and other
organizations on reform-oriented secondary school mathematics materials. But as it became very clear that many children (and nearly all Chicago inner city children) entered secondary school with
little understanding of mathematics, Bell shifted his attention to elementary school mathematics instruction and teacher preparation.
Bell's widely reprinted 1973 article ("What does 'everyman' really need from school mathematics?") set an ambitious content agenda that anticipated the 1989 NCTM "Standards." Structured
interviews of several hundred five to nine year old children clearly showed that their mathematics learning capabilities were much greater than had been supposed. At the same time, textbook
analyses and interviews with teachers revealed an essentially vacuous primary school mathematics curriculum. With those foundations established by 1985, Bell joined with others in the University
of Chicago School Mathematics Project (UCSMP) in research, development, field testing, and widespread dissemination of the Everyday Mathematics curriculum for grades K-6.
Bell continues his interest in improvement of elementary school science and mathematics teaching, now focused on maximizing the potential for "educative curricula" (from which teachers learn as
they teach) to attack well known problems of scale in helping in-service teachers better understand and teach science and mathematics. Also, Bell and CEMSE colleagues are just beginning
conceptualization and specifications for a coherent "web based curriculum" for any individual who for any reason wishes to learn mathematics, from basic counting and arithmetic through data
analysis, algebra, or geometry.
• Amy Dillard
Amy L. Dillard received a Bachelor of Arts degree in Elementary Education from Boston College in Chestnut Hill, MA. She taught elementary school for four years at Hoffman School in Glenview,
Illinois. In 1994, she earned a Master of Arts degree in Mathematics Education from DePaul University in Chicago, Illinois.
Ms. Dillard worked as a Teacher-in-Residence with the University of Chicago School Mathematics Project (UCSMP) from 1994 to 1997. She was involved in the development of the commercial publication
of the first edition of Fourth Grade Everyday Mathematics, as well as the field testing and commercial publication of the first editions of Fifth Grade Everyday Mathematics and Sixth Grade
Everyday Mathematics. Ms. Dillard worked from 1997 to 2001 as one of the authors of the second editions of Everyday Mathematics K-6.
In 2002 she began work for the UCSMP Everyday Mathematics Center. The NSF-funded center was established to support educators, parents and students who are using, or will soon be using, Everyday
Mathematics. Since 2003, Ms. Dillard has served as the Associate Director of Everyday Mathematics, third edition.
• Andrew Isaacs
Andy Isaacs received a BA in classical Greek from Northwestern University in 1974, an MST in elementary education from the University of Chicago in 1977, an MS in mathematics from the University
of Illinois at Chicago (UIC) in 1987, and a DA in mathematics (with concentrations in abstract algebra and theoretical computer science) from UIC in 1994. Philip Wagreich directed Isaacs's
dissertation, "Whole number concepts and operations in grades 1 and 2: Curriculum and rationale."
From 1977 to 1985, Isaacs taught fourth and fifth grades in Chicago-area public schools. In 1985, he joined the Department of Mathematics, Statistics, and Computer Science at UIC as a lecturer in
mathematics education. Beginning in 1986, Isaacs worked closely with Wagreich and Howard Goldberg on the NSF-funded Teaching Integrated Mathematics and Science Project (TIMS). In 1989 and 1990,
he worked with Wagreich and David Page on UIC's Maneuvers with Mathematics Project, another NSF-funded curriculum development effort. From 1990 to 1995, he was a full time writer for Math
Trailblazers, a comprehensive curriculum for grades K-5 based on TIMS and funded by NSF.
In 1995, Isaacs joined the University of Chicago School Mathematics Project to work on the Bridges to Classroom Mathematics Project, which was directed by Sheila Sconiers. Isaacs was an author on
the second edition of Everyday Mathematics, published in 2000 and 2001, and most recently, he directed revisions that led to a third edition of Everyday Mathematics in 2007. He is Co-Director of
the University's Center for Elementary Mathematics and Science Education, and a Senior Research Associate in the University's Physical Sciences Division.
• James McBride
James McBride is a Senior Research Associate in the Division of Physical Sciences at the University of Chicago, where he is Co-Director of the Center for Elementary Mathematics and Science
Education (CEMSE). Within CEMSE, McBride is director of sampling and statistical analysis for all survey research, including implementation studies, teacher surveys, and studies of student
achievement. And for the past fourteen years, he has been a principal author for all three editions of the Everyday Mathematics (EM) program.
From 1980 to 1992, McBride was Senior Research Director and Senior Mathematical Statistician at Response Analysis Corporation. His work focused on the areas of sample design, estimation
procedures, modeling, imputation, and statistical analysis. Assignments covered the gamut of survey research applications for both complex national surveys and smaller, special population
surveys. He is experienced in directing the data collection and budget monitoring activities associated with national population surveys.
From 1975 to 1980, McBride was an Assistant Professor in the Department of Statistics at Princeton University. He directed the undergraduate program in statistics, and taught graduate and
undergraduate courses in inference, probability, multivariate analysis, experimental design, demography, time series, econometrics, statistical computing and simulation, and exploratory data
analysis. He taught courses at the Woodrow Wilson School of Princeton University and the University of Chicago, and has been a Visiting Assistant Professor in the Department of Mathematics at
Cornell University. He holds a grades 6-12 teaching certificate and a B.S. in mathematics and physics, an MAT in mathematics, and a PhD in Statistics, all from the University of Chicago.
• Kathleen Pitvorec
Kathleen Pitvorec received a B.A. in Anthropology in 1987 and an M.S.T. in 2001 from the University of Chicago. Ms. Pitvorec began her career in education with eight years as an elementary school
teacher in both public and private schools. She left the classroom to become part of the author team of Everyday Mathematics, joining the team in late 1995 as a Teacher-in-Residence, and becoming
one of the authors for second and third editions.
In addition to participating in the writing of the materials, as one of the second-edition authors, she coordinated the preliminary information-gathering from teachers about first-edition
materials and the field test of the revised materials. From 2000 to 2004, Ms. Pitvorec served as the Associate Director of the Implementation Center at the University of Chicago School
Mathematics Project (UCSMP). In this position, she developed and implemented workshops for inservice teachers, teacher-leaders, and administrators at local and national levels.
From 2005 to 2007, Ms. Pitvorec was a third edition author of Everyday Mathematics. She oversaw the development, field testing, and revision of open-response assessment items for Grades 1-6. She
is a co-author on the grade-level specific third edition Differentiation Handbooks and Assessment Handbooks included with the program materials.
In 2006 and 2007, Ms. Pitvorec worked on an NSF-funded research project at the University of Illinois investigating teachers' implementations of elementary school standards-based mathematics
lessons. In 2008, she worked on an NSF-funded research project at the University of Illinois researching assessment tools in Everyday Mathematics. She is currently a doctoral student in the
Learning Sciences program at the University of Illinois doing her dissertation research on the training of preservice elementary school mathematics teachers.
• Peter Saecker
Peter Saecker received a B.A. in Mathematics from Lawrence University in 1959 and an M.A. in Mathematics as a Woodrow Wilson Fellow from UCLA in 1960. After a year of graduate study in
mathematics at Northwestern in 1961, Peter Saecker joined Science Research Associates as a mathematics editor. For the next 31 years, he wrote, edited, and managed a variety of educational
materials at SRA, including elementary and secondary math textbooks and software.
In the early 1990s he helped write the grant proposal that secured NSF funding for the development of Everyday Mathematics Grades 4-6. Saecker joined the team full time in 1992 and continued to
work on Everyday Mathematics through the completion of the second edition in 2001. He died in the summer of 2001, as the second edition was going into print.
• Noreen Winningham
Noreen Winningham is on loan to the University of Chicago's Center for Elementary Mathematics and K-12 Science Education (CEMSE) from School District 65 (Evanston , Illinois). At the U of C, Ms.
Winningham is the main author for the 5th grade teacher and student materials for the third edition of Everyday Mathematics. She is a member of the author team for the University of Chicago
School Mathematics Project (UCSMP) secondary component PreTransitions textbook, and is a member of the In-School Support Team for several CEMSE and Chicago Public Schools projects.
Before coming to the University of Chicago, Ms. Winningham was an elementary/middle school teacher and administrator for eighteen years and a private consultant to elementary school districts in
25 U.S. states, France, and Belgium for the prior ten years. Winningham's past work includes the design and management of programming for the integration of the arts into the basic curriculum and
the design and delivery of teacher professional development in mathematics and computer technology.
Ms. Winningham has authored a number of pamphlets and texts, including Reading Olympics, published by Prurock Press. She also has been part of numerous research projects focusing on developing
mathematics curriculum for elementary school. Ms. Winningham holds K-9 teaching, general administration, and superintendent certificates, an undergraduate degree in communication/computer
science, a masters degree in elementary instruction, and a doctorate in educational leadership.
• Diana Barrie
Diana Barrie received her BFA and MFA from the Art Institute of Chicago. She taught photography and film-making at the University of Northern Iowa and the University of Wisconsin at Milwaukee.
After working in the reprographics industry in New York City and Chicago, she came to UCSMP in 1990, where she has performed various duties, including creating technical illustrations for all
grade levels of Everyday Mathematics. She has also created illustrations for Science Companion and the Center for Urban School Improvement's STEP Literacy Assessments.
• Jim Flanders
Jim Flanders is a researcher at the Center for Elementary Mathematics and Science Education (CEMSE) at the University of Chicago where his focus is on integrating technology into the mathematics
curriculum. He is a contributing author of several University of Chicago School Mathematics Project (UCSMP) books including Everyday Mathematics, Transition Mathematics, Algebra, Advanced
Algebra, and Functions, Statistics and Trigonometry.
Prior to joining CEMSE he was a member of the Chicago Science Group helping develop the field testing of the Science Companion and evaluating software for elementary school mathematics. He also
wrote calculator software for the Core-Plus Mathematics Project, and was a consultant to the Everyday Learning Corporation and the Louisiana Systemic Initiative Project.
Flanders has been an assistant professor in the department of mathematics and statistics at Western Michigan University where he had NSF support to develop a course for preservice mathematics
teachers on integrating technology into secondary school mathematics. He has also been an instructor of mathematics at the Colorado College and an academic dean and mathematics department chair
at The Colorado Springs School. He has a B.A. in mathematics from Colorado College and a Ph.D. in mathematics education from the University of Chicago.
• Kathryn B. Chval
Kathryn B. Chval is an Assistant Professor and Co-Director of the Missouri Center for Mathematics and Science Teacher Education at the University of Missouri-Columbia. Dr. Chval is also a
Co-Principal Investigator for the Center for the Study of Mathematics Curriculum and the Researching Science and Mathematics Teacher Learning in Alternative Certification Models Project which are
both funded by the National Science Foundation.
Prior to joining University of Missouri, Dr. Chval was the Acting Section Head for the Teacher Professional Continuum Program in the Division of Elementary, Secondary and Informal Science
Division at the National Science Foundation. She also spent fourteen years at the University of Illinois at Chicago managing NSF-funded projects.
Dr. Chval's research interests include (1) effective preparation models and support structures for teachers across the professional continuum, (2) effective elementary teaching of underserved
populations, especially English language learners, and (3) curriculum standards and policies.
• Soundarya Radhakrishnan
Soundarya Radhakrishnan has a bachelor's degree in chemical engineering. She received her graduate degree in education from Northwestern University in 2001. Ms. Radhakrishnan was a Chicago Public
School teacher at Gray Elementary from 2001-2003. She worked as a Math Specialist with the Chicago Public School system advising elementary math teachers of five Area 1 schools in 2003-2004. Her
responsibilities included providing professional development and teacher training through workshops, as well as co-teaching and modeling lessons in K-6 classrooms using Everyday Mathematics.
In 2004-2005, she was part of developing the Everyday Mathematics open response assessment section of the third edition of the Assessment Handbook at the University of Chicago. This included
creating and developing open response problems for grades 1-6, field-testing these problems in classrooms at two Chicago schools, and developing rubrics for analyzing student work.
From 2005-2008, she worked as an education consultant for the Everyday Mathematics curriculum that involved training teachers both in public and private Schools. She is currently working as a
Math Facilitator in the Office of Math and Science for the Chicago Public Schools.
• Serena Hohmann
Serena Hohmann received a B.A. in Spanish and International Studies in 2003 from Washington University in St. Louis and an M.A. in Latin American and Caribbean Studies from University of Chicago
in 2006. Throughout her undergraduate and graduate studies, Ms. Hohmann studied and worked in both Spain and Mexico.
From 2005 to 2006, she served as an editorial assistant for the third edition's language diversity team, focusing primarily on the Everyday Mathematics Differentiation Handbooks. In this
position, she observed bilingual classrooms, conducted extensive research on language acquisition, and consulted with several bilingual educators and specialists to determine how to best adapt
the curriculum for English Language Learners.
After completing her graduate work in 2006, Ms. Hohmann was selected for a Presidential Management Fellowship (PMF) at the U.S. Department of State, Office for Analysis of Inter-American Affairs.
She currently serves as a Foreign Affairs Analyst, providing policy-makers in-depth analyses of Mexico and Canada.
• Deborah Arron Leslie
Debbie Leslie is a senior curriculum developer, early childhood specialist, and Director of Science Companion Projects at the University of Chicago's Center for Elementary Mathematics and Science
Education (CEMSE). She provides professional development, coaching, and consultation to support the implementation of Everyday Mathematics and inquiry-based science curricula in the University of
Chicago charter schools, the Chicago Public Schools, and elsewhere.
Ms. Leslie is also the Early Childhood Team Leader for the elementary component of UCSMP. In this capacity, she led the author team that developed the new version of Pre-Kindergarten Everyday
Mathematics and the team that worked on the 3rd edition revisions for Kindergarten Everyday Mathematics. Leslie is also one of the authors of Science Companion, an inquiry-based elementary school
science curriculum.
At CEMSE, Leslie works on several projects that draw on her background in science and her interest in high-quality professional development in the areas of math, science, and early childhood
education. Leslie taught Pre-Kindergarten, Kindergarten, and First Grade for 10 years in Connecticut and in the Chicago area. She has also done work for the Bush Center for Child Development and
Social Policy at Yale University, the Field Museum, and the Rochelle Lee Fund in Chicago, Illinois, and has done presentations, consulting, and professional development for several other
educational organizations. She holds a Bachelor's Degree in Molecular Biochemistry and Biophysics from Yale University and a Master's Degree in Teaching from the University of Chicago.
• Denise Porter
In 1984, Denise A. Porter received a Bachelor of Science degree in Elementary Education from Iowa State University in Ames, IA. She taught elementary school for nine years, with five of the
years as a Mathematics Specialist. In 1990, Ms. Porter earned a Master of Education degree from the University of Houston.
Ms. Porter worked as a Teacher-in-Residence with the University of Chicago School Mathematics Project (UCSMP) from 1993-1996 and 2003 to present. She was involved in the development of the first
editions of Fourth, Fifth and Sixth Grade Everyday Mathematics and the third edition of Sixth Grade Everyday Mathematics, along with serving as author for various state editions.
Since 1993, Ms. Porter has provided professional development to administrators, teachers, and parents in a wide variety of school districts, regarding the implementation of Everyday Mathematics.
She currently is with the University of Chicago's Center for Elementary Mathematics and K-12 Science Education (CEMSE) as an Associate in Direct Services. She supports staff at restructured
Chicago Public Schools with mathematics instruction and develops and facilitates professional development.
Grade 6
Max Bell, John Bretzlauf, Sarah R. Burns^‡, Amy Dillard, Robert Hartfield, Andy Isaacs, James McBride, Ann McCarty*, Kathleen Pitvorec, Peter Saecker, Robert Balfanz^†, William Carroll^†
^† First Edition only
* Third Edition only
^‡ Common Core State Standards Edition only
Technical Art
Teachers in Residence
Mathematics and Technology Advisor
UCSMP Editorial
Kathryn M. Rich, Laurie K. Thrasher
3rd Edition ELL Consultant
Teacher-in-Residence for the Assessment Handbook
Assistant for the Differentiation Handbook
Regina Littleton, Kriszta Miner, Kelley E. Buchheister, Aaron T. Hill, Mollie Rudnick, Serena Hohmann, Barbara J. Kitz, Moira S. Rodgers, Linda Werner, Ann Brown, Sarah Busse, Terry DeJohng,
Craig Dezell, John Dini, Donna Goffron, Steve Heckley, Karen Hedberg, Deborah Arron Leslie, Sharon McHugh, Janet M. Meyers, Donna Owen, William D. Pattison, Marilyn Pavlak, Jane Picken, Kelly
Porto, John Sabol, Sheila Sconiers, Rose Ann Simpson, Debbi Suhajda, Laura Sunseri, Andrea Tyrance, Kim Van Haitsma, Mary Wilson, Nancy Wilson, Jackie Winston, Carl Zmola, Theresa Zmola
• Max Bell
Max Bell is Professor Emeritus, Department of Education and the Physical Sciences Division at the University of Chicago and is affiliated with the University of Chicago Center for Mathematics and
Science Education (CEMSE). He is one of the founding authors of Everyday Mathematics (EM), and part of the Author panel that has responsibility for general oversight of the Publishing Agreement
for Everyday Mathematics and of the Center Agreement that established CEMSE.
Bell shifted in 1960 from teaching high school students to teaching teachers in the then-new MAT program at the University of Chicago. (He had earlier been in an influential NSF funded Academic
Year Institute for mathematics teachers conducted by the UC mathematics department.) He spent a decade as MAT Mathematics Coordinator while also working with UC people, SMSG, and other
organizations on reform-oriented secondary school mathematics materials. But as it became very clear that many children (and nearly all Chicago inner city children) entered secondary school with
little understanding of mathematics, Bell shifted his attention to elementary school mathematics instruction and teacher preparation.
Bell's widely reprinted 1973 article ("What does 'everyman' really need from school mathematics?") set an ambitious content agenda that anticipated the 1989 NCTM "Standards." Structured
interviews of several hundred five to nine year old children clearly showed that their mathematics learning capabilities were much greater than had been supposed. At the same time, textbook
analyses and interviews with teachers revealed an essentially vacuous primary school mathematics curriculum. With those foundations established by 1985, Bell joined with others in the University
of Chicago School Mathematics Project (UCSMP) in research, development, field testing, and widespread dissemination of the Everyday Mathematics curriculum for grades K-6.
Bell continues his interest in improvement of elementary school science and mathematics teaching, now focused on maximizing the potential for "educative curricula" (from which teachers learn as
they teach) to attack well known problems of scale in helping in-service teachers better understand and teach science and mathematics. Also, Bell and CEMSE colleagues are just beginning
conceptualization and specifications for a coherent "web based curriculum" for any individual who for any reason wishes to learn mathematics, from basic counting and arithmetic through data
analysis, algebra, or geometry.
• Sarah Burns
Senior Curriculum Developer
Sarah Burns came to CEMSE from Connecticut, where she taught in the Farmington Public School system. She began by working as part of a program funded by the Illinois Board of Higher Education
that provided support to teachers using Everyday Mathematics in Chicago Public Schools. Sarah was the sixth grade team leader for the Common Core State Standards edition of Everyday Mathematics
and is currently serving as the Grade 5 Everyday Mathematics team leader. Sarah holds a bachelor’s degree in English and a master of science degree in elementary education, both from the
University of Pennsylvania.
• Amy Dillard
Amy L. Dillard received a Bachelor of Arts degree in Elementary Education from Boston College in Chestnut Hill, MA. She taught elementary school for four years at Hoffman School in Glenview,
Illinois. In 1994, she earned a Master of Arts degree in Mathematics Education from DePaul University in Chicago, Illinois.
Ms. Dillard worked as a Teacher-in-Residence with the University of Chicago School Mathematics Project (UCSMP) from 1994 to 1997. She was involved in the development of the commercial publication
of the first edition of Fourth Grade Everyday Mathematics, as well as the field testing and commercial publication of the first editions of Fifth Grade Everyday Mathematics and Sixth Grade
Everyday Mathematics. Ms. Dillard worked from 1997 to 2001 as one of the authors of the second editions of Everyday Mathematics K-6.
In 2002 she began work for the UCSMP Everyday Mathematics Center. The NSF-funded center was established to support educators, parents and students who are using, or will soon be using, Everyday
Mathematics. Since 2003 and currently, Ms. Dillard serves as the Associate Director of Everyday Mathematics, third edition.
• Andrew Isaacs
Andy Isaacs received a BA in classical Greek from Northwestern University in 1974, an MST in elementary education from the University of Chicago in 1977, an MS in mathematics from the University
of Illinois at Chicago (UIC) in 1987, and a DA in mathematics (with concentrations in abstract algebra and theoretical computer science) from UIC in 1994. Philip Wagreich directed Isaacs's
dissertation, "Whole number concepts and operations in grades 1 and 2: Curriculum and rationale."
From 1977 to 1985, Isaacs taught fourth and fifth grades in Chicago-area public schools. In 1985, he joined the Department of Mathematics, Statistics, and Computer Science at UIC as a lecturer in
mathematics education. Beginning in 1986, Isaacs worked closely with Wagreich and Howard Goldberg on the NSF-funded Teaching Integrated Mathematics and Science Project (TIMS). In 1989 and 1990,
he worked with Wagreich and David Page on UIC's Maneuvers with Mathematics Project, another NSF-funded curriculum development effort. From 1990 to 1995, he was a full time writer for Math
Trailblazers, a comprehensive curriculum for grades K-5 based on TIMS and funded by NSF.
In 1995, Isaacs joined the University of Chicago School Mathematics Project to work on the Bridges to Classroom Mathematics Project, which was directed by Sheila Sconiers. Isaacs was an author on
the second edition of Everyday Mathematics, published in 2000 and 2001, and most recently, he directed revisions that led to a third edition of Everyday Mathematics in 2007. He is Co-Director of
the University's Center for Elementary Mathematics and Science Education, and a Senior Research Associate in the University's Physical Sciences Division.
• Ann McCarty
Ann McCarty was the lead author for the Grade 6 teacher and student materials for the Third Edition of Everyday Mathematics. Ann has also written and evaluated mathematics and science teacher and
student materials for Pearson Education, Harcourt Education, and ETA/Cuisenaire.
Prior to working in publishing, McCarty worked as a curriculum coordinator, supervising the implementation and instructional delivery of the Second Edition of Everyday Mathematics (K-5), as well
as middle school content 6-8. Ann also served as a consultant to National Board Certification (NBC) candidates and district-level science coordinators who were conducting mathematics curriculum
analysis projects using TIMSS data.
McCarty began her teaching career in Western Samoa where she served as a Peace Corps Volunteer, teaching mathematics at Aleipata High School. She was a finalist for the Excellence in Elementary
Mathematics Teaching Award in 1994. Ann holds a Type 09 Illinois teaching certificate, undergraduate degrees in Clinical Psychology and Mathematics, as well as a master's degree in Science
|
{"url":"http://everydaymath.uchicago.edu/about/em-history/about-em-authors/","timestamp":"2014-04-18T18:10:36Z","content_type":null,"content_length":"188051","record_id":"<urn:uuid:dd3f78cd-e4bc-4036-9968-a5d35a5c82f9>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FOM: ReplyToPratt
Harvey Friedman friedman at math.ohio-state.edu
Tue Nov 4 07:24:05 EST 1997
Here is a general comment on Pratt's postings: they suggest an alternative
view of the foundations of mathematics that somehow is going to incorporate
both structuralist ideas (e.g., category theory and topos theory) and
traditional f.o.m. (set theory, predicate calculus, etc) into one combined
theory side by side. One of the pieces of glue is to be linear logic.
However, as far as I am concerned, I don't see (in these postings) a single
simple, cogent idea of how this is to be done. In particular, many people
on this e-mail list don't know what linear logic is, and many of the people
who do, don't have a positive impression of it. At least they don't see why
it is important or useful in foundations of mathematics, and in particular
what it has to do with any substantive concept of mathematical proof. Until
this is made clear for the group in simple, basic terms, with simple, basic
examples, I suspect that such postings serve no useful purpose.
>I'd like to propose the following as a suitable program of research for
>anyone interested in making the sort of sense of structuralism that
>Harvey has been quite rightly demanding. The general idea is to unify
>the propositional and constructive views on foundations as a single
>coherent framework.
>First, with suitable care this program has the potential to inherit
>essentially intact the entire tradition and technology of two important
>tracks in foundations. The first is the conservative track defending
>FOL, more than adequately represented on FOM. The second is the
>constructive track originating with Brouwer (on whose life btw Dirk van
>Dalen will speak at LICS this year). More specifically it focuses on
>the path through Gentzen culminating in Girard's linear logic (LL).
The Brouwer track is well formulated in traditional f.o.m. terms. The
intuitionistic systems are well formulated and well studied, with
appropriate striking results of the usual f.o.m. kind. However, LL has not
had the same attention. You can do us a great service by presenting LL in a
basic way so that the point of it for foundations of mathematics is made
crystal clear. How about it?
>As a grue critter, so to speak, I find the FOM view of mathematics just
>as extraterrestrial. A lot of you seem to believe that everything is a
>set, and claim not to understand the point of view of those who deny
>this. As an interesting corollary, size (the only property a set can
>have up to isomorphism) looms large among what you regard as the most
>important questions of foundations.
Size does not loom large in the kind of state of the art foundations of
mathematics that I do. "Everything is a set" is a gross oversimplification
of the usual f.o.m. viewpoint. The size thing is not a corollary.
>The corresponding point of view in mathematics *should* be that
>everything can be understood equally well as consisting of sets or
>Boolean algebras, and optimally as a blend thereof.
I do not believe that any significant amount of anything can be equally
well understood solely in terms of Boolean algebras. I stated the "Theorem"
that asserts that sets are the simplest foundationally complete system - as
you recall, unfortunately I don't quite know what this assertion means. But
it is clearly true.
Exercise for Pratt: write some basic real analysis solely in terms of
Boolean algebras.
>In reducing
>mathematics to sets you commit the same error a physicist would make in
>reducing physics to particles. Interestingly, both errors can be
>documented by essentially the same underlying machinery.
The mathematicians have a trivial translation of all of mathematics into
sets that is sufficiently powerful so as to obtain crucial nonderivability
results. A forced reduction of physics to particles is comparatively very
Please document for us, in the clearest possible basic terms, this "same
underlying machinery."
>Everything changes in
>mathematics, some faster than others.
Basic classical mathematics is relatively unchanged. The classical number
systems are unchanged. So what do you mean by this?
>A proof denotes a transformation of the givens into the result(s).
Attempts to illuminate proofs as transformations have been uninspiring,
leading to uninformative messes. Do you have any better ideas?
> The full
>completeness proofs now starting to appear for linear logic (the
>appropriate logic of proof) take the form of a bijection between
>cut-free proofs and natural transformations, just as completeness for
>propositional logic puts theorems in correspondence with tautologies.
In what sense is linear logic "the appropriate logic of proof?" State at
least two examples in the clearest most basic terms. Do you think that no
serious mathematical proof is cut-free, and if so, does this indicate that
what you are talking about may be unimportant for the foundations of mathematics?
>I'm hoping that the metamathematical theorems some of us on the
>transformational side of things are shooting for will make foundations
>a second such example [of duality].
Can you state some metamathematical theorems you are shooting for and why
they would have any impact on foundations of mathematics?
>What structuralism is (or should be) proposing is to account for the
>proofs and their denotation as transformations, based on a formal
>system as rich as that currently used to account for propositions and
>their denotation as partitions.
As I said above, it is not generally believed that anything significant is
going to come of genuine mathematical proofs as transformations. That
doesn't mean that nothing significant will ever come of this. It's just
that you are on the spot to say something basic and intelligible that lends
credence to this idea.
> The detailed structure of this system
>will be that of linear logic, the formal side of whose multiplicative
>fragment, MLL, is comparable in complexity, elegance, and mathematical
>relevance to classical propositional calculus.
Show us this in the clearest basic terms, so that we can evaluate it.
> Just as the basic
>semantic entities for propositional calculus are 0 and 1 (the *'s in my
>diagram), so is the basic semantic entity for MLL the abstract
>transformation or morphism (the edge in my diagram). Just as 0's and
>1's combine under Boolean connectives, so do edges combine under
Sounds like an algebraic trick, and not something fundamental; perhaps not
a fundamental explanation of anything.
> A cut-free MLL derivation starts with an untangled set of
>edges and derives theorems in which those edges become more or less
>tangled up in each other. The Danos-Regnier theorem gives an elegant
>derivation-independent characterization of the derivable such tangles.
In the usual foundational setups, as I said earlier, no really interesting
mathematical proof is cut-free. How does this affect the significance of
what you are saying?
I wrote:
>>It's just that people should recognize what's involved in
>>doing such an overhaul, and not fool themselves into either
>> i) embracing something that is either essentially the same as the
>>usual set theoretic foundations of mathematics; or
>> ii) embracing something that doesn't even minimally accomplish what
>>is so successfully accomplished by the usual set theoretic foundations of mathematics.
Pratt wrote:
>Transformations are "essentially the same as" partitions only in the
>sense that waves are essentially the same as photons. Physicists do
>not argue that the equivalence of the photon and wave viewpoints makes
>the photon viewpoint irrelevant to physics. The same reasoning applies
I meant things like: using (set theoretic) functions as primitive instead
of sets. I reject the analogy with photon/wave. The sense of translation
between functions and sets is different than the relationship between
photon and wave.
>Replacing truth and sets isn't the goal. The goal is proof and truth
>as equal partners, not proof as the servant of truth. Hand in hand
>with this is the goal of sets and categories as equal partners.
"Equal partners" is much too vague. You need to give some sort of
indication as to how you are going to combine these. Set theory is already
a spectacular success on many many levels, whereas category theory is not
(as a foundation). Therefore "equal" needs considerable justification.
Friedman wrote:
>>Ex: Let E be a subset of the unit square in the plane, which is symmetric
>>about the line y = x. Then E contains or is disjoint from the graph of a
>>Borel measurable function.
Pratt wrote:
>It is an interesting question whether the interest in such subsets
>derives directly from an application of mathematics, as Harvey and
>Adrian (and others?) have been arguing, or indirectly from an
>unacknowledged pathology of their framework. My suspicions are with
>the latter.
I am assuming for the moment that this was not intended as a gratuitous
insult. So what does "an unacknowledged pathology of their framework" mean?
There are over 10000 papers in which Borel measurable subsets of Euclidean
spaces are mentioned.
>I would claim that linear logic is, in the loose sense of this analogy,
>the "Fourier transform" of first order logic, preserving its content
>while reorganizing its structure to give the dual perspective.
In order for comments like this to be useful to the e-mail list, there has
to be a great deal more explanation at a very basic level. You haven't even
told us what linear logic is and why it is interesting.
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/1997-November/000158.html","timestamp":"2014-04-20T08:20:25Z","content_type":null,"content_length":"12603","record_id":"<urn:uuid:0c3ad8e1-0184-424c-9899-5889952bfbd0>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Description of the Random Behavior Exercise for the Instructor
Three different levels of difficulty/involvement are woven into this web exercise. Level 1 would be appropriate for an advanced placement high school senior, junior college student, or an
introductory course college student who wants just the basics. By skipping all the level 2 and 3 verbiage, the student quickly discovers he/she can't behave randomly, but he/she won't become aware of
the subtleties and complexities surrounding randomness.
Level 2 is most appropriate for introductory college-level courses or second courses in psychology. Level 2 exposes students to some of the subtle and complex issues concerning randomness, probability
and social inference (cognition).
Level 3 would be appropriate for an upper-level/major student because it assumes the student has had a statistics/math course and is highly motivated to explore, think, and learn about abstract,
difficult-to-think-about concepts. Working all the way through level 3 will take most students between 1 and 1.5 hours.
The exercise begins with some definitions of randomness along with optional digressions about definition problems, flawed lotteries, and chaos theory. Next, students input an imaginary sequence of
100 coin flips and learn about 4 ways to assess randomness (distribution of heads/tails, number of runs, length of runs, and serial dependencies or autocorrelation). Students can test their
understanding of these randomness tests by taking an interactive quiz.
Next, they request an analysis of their imagined sequence, and they receive results of the 4 statistical tests in a table. The exercise provides explanations for why the imagined sequences usually
fail randomness tests, describing the gambler's and hot hand fallacies, a famous probability "teaser," and some applications to everyday life.
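The four checks are simple descriptive statistics, so it is easy to sketch the kind of computation an analysis like this might perform. The C++ fragment below is only an illustration of the general idea — it is not the exercise's actual code, and the placeholder sequence, variable names, and reporting format are mine, not the tutorial's.

#include <algorithm>
#include <iostream>
#include <string>

// Illustrative only: four simple checks on a string of 'H'/'T' flips,
// in the spirit of the tests described above.
int main() {
    std::string flips = "HHTHTTTHHT";   // placeholder; imagine 100 flips here

    long heads = std::count(flips.begin(), flips.end(), 'H');

    // A "run" is a maximal block of identical outcomes.
    int runs = flips.empty() ? 0 : 1;
    int longestRun = flips.empty() ? 0 : 1;
    int current = 1;
    int repeats = 0;                     // lag-1 serial dependency counter
    for (size_t i = 1; i < flips.size(); ++i) {
        if (flips[i] == flips[i - 1]) {
            ++current;
            ++repeats;
        } else {
            ++runs;
            current = 1;
        }
        longestRun = std::max(longestRun, current);
    }
    // For a random sequence the repeat rate should hover near one half.
    double repeatRate =
        flips.size() > 1 ? double(repeats) / double(flips.size() - 1) : 0.0;

    std::cout << "heads: " << heads << " of " << flips.size() << "\n"
              << "runs: " << runs << ", longest run: " << longestRun << "\n"
              << "lag-1 repeat rate: " << repeatRate << "\n";
    return 0;
}

A full analysis would go on to compare each statistic with its sampling distribution under fair, independent flips; the web exercise presents that comparison in its results table.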
By now, the student should be curious about what a random sequence of 100 flips should look like, and they get an opportunity to find out in the second section of the exercise. They either flip a
real coin themselves, or they have the computer generate 100 "random" flips. The random number generator is not very good and frequently fails at least one of the randomness tests. This should
increase skepticism about computer output and increase appreciation for how hard it is to create anything that acts randomly. If students want to explore a high-quality random number generator, they
can then link to a site using radioactive decay to generate "real" random numbers.
After comparing the coin/computer results with their imaginary results, students read brief descriptions of the research on people's problems with behaving randomly, and students are asked to
generate some reasons why randomness is so hard for us to grasp [a number of reasons are provided later].
In the last part of the exercise, students can test their new insights by repeating the coin flip exercises (imagined, real, or computer-generated). Each time they repeat the coin flip exercise,
their results are stored in a table so they can see whether practice leads to closer approximation of randomness. They can also take an interactive multiple choice and short answer quiz. They can ask
for hints if they are unsure of an answer; after they pick an answer, they find out if it is correct; they can read justifications/explanations for the correct answer, and they can link to relevant
sections of the exercise which they can review to solidify their learning. Finally, students complete a brief evaluation of the exercise.
Right now, the program will not send student responses to any data file, but it can be easily set up to do so by adding a POSTURL or MAILTO command.
Go to the webtutorial exercise Chris Wetzel's homepage
|
{"url":"http://faculty.rhodes.edu/wetzel/random/description.html","timestamp":"2014-04-17T06:41:15Z","content_type":null,"content_length":"4413","record_id":"<urn:uuid:edbb60d3-a960-47a0-9825-d45a827ef081>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Towards quantum chemistry on a quantum computer
UQ (University of Queensland, Brisbane, Australia) physicists and Harvard chemists have teamed up to build a quantum computer that could have profound implications for wider science.
Professor Andrew White and colleagues from UQ's School of Mathematics and Physics, teamed up with researchers from Harvard University, led by Professor Alán Aspuru-Guzik, to tackle the problem of
applying quantum mechanics to fields such as chemistry and biology.
“Physicists have a problem,” Professor White said.
“They have an outstandingly successful theory of nature at the small scale – quantum mechanics – but have been unable to apply it exactly to situations more complicated than, say, four or five
“But now we have done exactly that by building a small quantum computer and used it to calculate the precise energy of molecular hydrogen."
This groundbreaking approach to molecular simulations could have profound implications not just for chemistry, but also for a range of fields from cryptography to materials science.
The work, published this week in Nature Chemistry, saw Professor White's team assemble the physical computer and run the experiments, while Professor Aspuru-Guzik's team coordinated experimental
design and performed key calculations.
“We were the software guys and they were the hardware guys,” Professor Aspuru-Guzik said.
While modern supercomputers can perform approximate simulations, increasing the complexity of these systems results in an exponential increase in computational time.
“Quantum computers promise highly precise calculations while using a fraction the resources of conventional computing,” he said.
“This computational power derives from the way quantum computers manipulate information. In classical computers, information is encoded in bits, that have only two values: zero and one. Quantum
computers use quantum bits – qubits – that can take an infinite number of different values, such as zero, or one, or zero plus one, and so on.
“Quantum computers also exploit the strange phenomena of entanglement, powerful correlations between qubits that Einstein once described as ‘spooky action at a distance'.”
Professor White said it would be a while before quantum computers would leave the lab and appear on desktops.
“It's very early days for quantum technology,” he said.
“Most quantum computer demonstrations have been limited to a handful of qubits. A colleague of mine in Canada says that any demonstration with less than ten qubits is cute but useless, which
makes me think of a baby with an abacus.
“However, Alán and his team at Harvard have shown that when we can build circuits of just a few hundred qubits, this will surpass the combined computing power of all the traditional computers in
the world, each of which uses many billions of bits.”
“It took standard computing 50 years to get to this point, I'm sure we can do it in much less time than that.”
Figure caption: The quantum circuits corresponding to evolution of the listed Hermitian second-quantized operators. Here p, q, r, and s are orbital indices corresponding to qubits, such that the population of |1⟩ determines the occupancy of the orbitals. It is assumed that the orbital indices satisfy p > q > r > s. These circuits were found by performing the Jordan-Wigner transformation given in (S2b) and (S2a) and then propagating the obtained Pauli spin variables. In each circuit, θ = θ(h), where h is the integral preceding the operator. The gate T(θ) is defined by T|0⟩ = |0⟩ and T|1⟩ = exp(−iθ)|1⟩; G is the global phase gate given by exp(−iφ)·1; and the change-of-basis gate Y is defined as R_x(−π/2). The gate H refers to the Hadamard gate. For the number-excitation operator, both M = Y and M = H must be implemented in succession. Similarly, for the double-excitation operator each of the 8 quadruplets must be implemented in succession. The global phase gate must be included due to the phase-estimation procedure. Phase estimation requires controlled versions of these operators, which can be accomplished by changing all gates with θ-dependence into controlled gates.
Nature Chemistry - Towards quantum chemistry on a quantum computer
Exact first-principles calculations of molecular properties are currently intractable because their computational cost grows exponentially with both the number of atoms and basis set size. A
solution is to move to a radically different model of computing by building a quantum computer, which is a device that uses quantum systems themselves to store and process data. Here we report
the application of the latest photonic quantum computer technology to calculate properties of the smallest molecular system: the hydrogen molecule in a minimal basis. We calculate the complete
energy spectrum to 20 bits of precision and discuss how the technique can be expanded to solve large-scale chemical problems that lie beyond the reach of modern supercomputers. These results
represent an early practical step toward a powerful tool with a broad range of quantum-chemical applications.
12 page pdf of supplemental information
A fundamental challenge for the quantum simulation of large molecules is the accurate decomposition of the system’s time-evolution operator, Û. In our experimental demonstration, we exploit the small size and inherent symmetries of the hydrogen molecule Hamiltonian to implement Û exactly, using only a small number of gates. As the system size grows, such a direct decomposition will no longer be practical. However, an efficient first-principles simulation of the propagator is possible for larger chemical systems.
The key steps of an efficient approach are: (1) expressing the chemical Hamiltonian in second-quantized form, (2) expressing each term in the Hamiltonian in a spin-1/2 representation via the Jordan-Wigner transformation, (3) decomposing the overall unitary propagator, via a Trotter-Suzuki expansion, into a product of the evolution operators for non-commuting Hamiltonian terms, and (4) efficiently simulating the evolution of each term by designing and implementing the corresponding quantum circuit. We note that the first two steps generate a Hamiltonian that can be easily mapped to the state space of qubits. The last steps are part of the quantum algorithm for simulating the time-evolution operator, Û, generated by this Hamiltonian. Details of each step are given in the full supplementary text.
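As a rough illustration of steps (2) and (3) — the notation here is schematic, and conventions for operator ordering, σ± definitions, and error terms vary between papers — the Jordan-Wigner transformation trades each fermionic creation operator for a string of Pauli operators, and a first-order Trotter-Suzuki expansion approximates the full propagator by a product of short evolutions under the individual, generally non-commuting terms:

a_p^\dagger \;\longmapsto\; \Big(\prod_{q<p} \sigma^z_q\Big)\,\sigma^+_p , \qquad e^{-i\hat{H}t} = e^{-i\sum_k \hat{H}_k t} \;\approx\; \Big(\prod_k e^{-i\hat{H}_k t/N}\Big)^{N} \quad (N\ \text{large}).

Once the Hamiltonian is written in Pauli form, each factor e^{-i\hat{H}_k t/N} corresponds to one of the small circuits described in the figure caption above.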
New Scientist coverage
Their "iterative phase estimation algorithm" is a variation on existing quantum algorithms such as Shor's algorithm, which has been successfully used to crack encryption schemes. It is run
several times in succession, with the output from each run forming the input to the next.
"You send two things into the algorithm: a single control qubit and a register of qubits pre-encoded with some digital information related to the chemical system you're looking at," says White.
"The control qubit entangles all the qubits in the register so that the output value – a 0 or 1 – gives you information about the energy of the chemical system." Each further run through the
algorithm adds an extra digit.
The data passes through the algorithm 20 times to give a very precise energy value. "It's like going to the 20th decimal place," White says. Errors in the system can mean that occasionally a 0
will be confused with a 1, so to check the result the 20-step process is repeated 30 times.
The team used this process to calculate the energy of a hydrogen molecule as a function of its distance from adjacent molecules. The results were astounding, says White. The energy levels they
computed agreed so precisely with model predictions – to within 6 parts in a million – that when White first saw the results he thought he was looking at theoretical calculations. "They just
looked so good."
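Schematically — and this is an illustration, not the paper's exact conventions for signs, scaling, or the evolution time τ — each pass of the iterative procedure reads out one more binary digit of the phase that the molecular propagator imprints on the control qubit, and the energy follows from that phase:

\hat{U}(\tau)\,|\Psi\rangle = e^{-iE\tau}\,|\Psi\rangle \equiv e^{2\pi i\varphi}\,|\Psi\rangle , \qquad \varphi \approx 0.b_1 b_2 \cdots b_{20}\ \text{(binary)} , \qquad E = -\,2\pi\varphi/\tau \ \ (\text{mod}\ 2\pi/\tau).

Twenty extracted bits b_1, ..., b_20 give the roughly 20-bit precision quoted in the abstract, and repeating the whole extraction lets occasional bit errors be voted away.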
Figure caption: IPEA success probability measured over a range of parameters. Probabilities for obtaining the ground state energy, at the equilibrium bond length 1.3886 a_0, as a function of: (a) the number of times each bit is sampled (n); (b) the number of extracted bits (m); (c) the fidelity between the encoded register state and the ground state (F). The standard fidelity between a measured mixed state ρ and an ideal pure state |Ψ⟩ is F = ⟨Ψ|ρ|Ψ⟩. (a) & (b) employ a ground state fidelity of F ≈ 1. (a) & (c) employ a 20-bit IPEA. All lines are calculated using a model that allows for experimental imperfections. This model, as well as the technique used to calculate success probabilities and error bars, are detailed in the SOM (section B).
|
{"url":"http://nextbigfuture.com/2010/01/towards-quantum-chemistry-on-quantum.html","timestamp":"2014-04-17T21:22:46Z","content_type":null,"content_length":"246275","record_id":"<urn:uuid:959676f4-0397-4f42-9c1c-fbcb42988f62>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
|
texture in circle [Archive] - OpenGL Discussion and Help Forums
09-03-2002, 02:21 PM
hi, can anyone tell me if theres a way i can make a circle without making it with lines? and can anyone give me an example on how to texture a sphere?
09-03-2002, 02:30 PM
Well, you could use Bézier curves to make a circle, or NURBS (I think). There's also some command like glCircle or something that will make a circle. The other method is to use sine and cosine in a for loop, drawing lines to produce a circle. I have only used the last two methods before. However, no matter what, all a circle really is, once it comes down to it, is a bunch of lines (in a non-theoretical sense).
Hope I have shed some light on your situation. In terms of texturing, if you use the last solution, texturing would be a piece of cake.
09-03-2002, 03:39 PM
Sorry, but there is no glCircle command; a circle is a 2D object. But gluDisk makes a flat disk, with or without a hole in the center, and you can add a texture to it.
An easy way to get a textured sphere is with the gluSphere and gluQuadricTexture commands.
Easy circle: you can make a circle with a paint program and apply it to a quad as a texture.
Originally posted by JOSE ML:
hi, can anyone tell me if theres a way i can make a circle without making it with lines? and can anyone give me an example on how to texture a sphere?
[This message has been edited by nexusone (edited 09-03-2002).]
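For what it's worth, here is a minimal sketch of those two suggestions — a textured sphere via a GLU quadric, and a flat textured circle built from a triangle fan. It assumes a GL context is current and that textureID refers to a texture object you have already created and loaded; error handling and texture setup are omitted.

#include <GL/gl.h>
#include <GL/glu.h>
#include <math.h>

// Textured sphere: let GLU generate the sphere and its texture coordinates.
void drawTexturedSphere(GLuint textureID, float radius)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureID);

    GLUquadric* quad = gluNewQuadric();
    gluQuadricTexture(quad, GL_TRUE);   // ask GLU to emit texture coordinates
    gluSphere(quad, radius, 32, 32);    // 32 slices, 32 stacks
    gluDeleteQuadric(quad);

    glDisable(GL_TEXTURE_2D);
}

// Filled, textured circle in the z = 0 plane, built from a triangle fan
// (the sine/cosine loop mentioned above, but with filled triangles rather than lines).
void drawTexturedCircle(GLuint textureID, float radius, int segments)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureID);

    glBegin(GL_TRIANGLE_FAN);
    glTexCoord2f(0.5f, 0.5f);           // center of the texture -> center of the circle
    glVertex3f(0.0f, 0.0f, 0.0f);
    for (int i = 0; i <= segments; ++i) {
        float a = 2.0f * 3.1415926f * (float)i / (float)segments;
        float c = cosf(a);
        float s = sinf(a);
        glTexCoord2f(0.5f + 0.5f * c, 0.5f + 0.5f * s);
        glVertex3f(radius * c, radius * s, 0.0f);
    }
    glEnd();

    glDisable(GL_TEXTURE_2D);
}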
09-03-2002, 06:41 PM
Well, the way I texture my sky dome is simply taking one texture and spreading it across the entire sky... I get the coordinates with simple sin/cos functions which I use to create my sky dome.
First ask yourself how you make a sphere...
Then you will see that with the formula of the sphere with radius one you already have a full set of coordinates, since texture coordinates normally range from 0 to 1.
So after you compute your x, y, z vertices for your sphere, you could get the point of the texture in the sphere with something like:
m_Texture[0] = x*(radius)*0.5f + 0.5f;
m_Texture[1] = z*(radius)*0.5f + 0.5f;
Where the starting point is right at the top of the sphere...
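Putting the two suggestions together — cos/sin to place the vertices, and the 0.5*x + 0.5 mapping for texture coordinates — a textured circle can be drawn as a triangle fan. This is only a rough sketch in the immediate-mode GL used elsewhere in this thread; the segment count is an arbitrary choice and a 2D texture is assumed to be bound and enabled already.

/* textured circle as a triangle fan (legacy immediate-mode OpenGL) */
#include <GL/gl.h>
#include <math.h>

#define CIRCLE_SEGMENTS 64   /* more segments = smoother circle */

void drawTexturedCircle(float cx, float cy, float radius)
{
    int i;
    glBegin(GL_TRIANGLE_FAN);
    glTexCoord2f(0.5f, 0.5f);          /* centre of the texture */
    glVertex2f(cx, cy);                /* centre of the circle  */
    for (i = 0; i <= CIRCLE_SEGMENTS; i++) {
        float a = 2.0f * 3.1415926f * (float)i / (float)CIRCLE_SEGMENTS;
        float x = cosf(a);
        float y = sinf(a);
        /* same mapping idea as above: [-1, 1] -> [0, 1] texture space */
        glTexCoord2f(0.5f * x + 0.5f, 0.5f * y + 0.5f);
        glVertex2f(cx + radius * x, cy + radius * y);
    }
    glEnd();
}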
|
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-128401.html","timestamp":"2014-04-21T04:49:44Z","content_type":null,"content_length":"5343","record_id":"<urn:uuid:1ba6cb6c-11fb-46eb-9bad-dac727631f5c>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
|
New Castle, DE Statistics Tutor
Find a New Castle, DE Statistics Tutor
...I truly enjoy helping students achieve their goals. Thanks for visiting my page, and best of luck!Scored 780/800 on SAT Math in high school and 800/800 on January 26, 2013 test. Routinely
score 800/800 on practice tests.
19 Subjects: including statistics, calculus, geometry, algebra 1
...I have a Master of Science degree in math, over three years' experience as an actuary, and am a member of MENSA. I am highly committed to students' performances and to improve their
comprehension of all areas of mathematics.I have excelled in courses in Ordinary Differential Equations in both un...
19 Subjects: including statistics, calculus, geometry, algebra 1
...I hold a B.S. in Mathematics from Rensselear Polytechnic Institute (RPI), and I offer tutoring in all math levels as well as chemistry and physics. My credentials include over 10 years
tutoring experience and over 4 years professional teaching experience. I received 800/800 on the GRE math sect...
58 Subjects: including statistics, reading, geometry, biology
...I also take Multivariable Calculus, AP Biology, AP Physics B and C, AP Literature, and AP Statistics. I have managed to earn straight A's the past two years. I also took AP Psychology my
junior year and managed to earn a 5.
21 Subjects: including statistics, chemistry, English, biology
I have scored 750 on the GMAT test with a 98 percentile score. I have also scored a perfect 800 score in GRE Math with the 99 percentile. I have been teaching SAT, GMAT, ACT, MCAT and GRE prep
for over 7 years.
18 Subjects: including statistics, physics, calculus, finance
|
{"url":"http://www.purplemath.com/new_castle_de_statistics_tutors.php","timestamp":"2014-04-18T11:21:34Z","content_type":null,"content_length":"24062","record_id":"<urn:uuid:ce570293-f5ea-4c3b-a67d-912b107c8eb6>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Atiyah-Singer Index Theorem: An Introduction
A publication of Hindustan Book Agency.
Hindustan Book Agency; 2013; 276 pp; hardcover; ISBN 978-93-80250-54-0
List Price: US$60; Member Price: US$48; Order Code: HIN/64
This monograph is a thorough introduction to the Atiyah-Singer index theorem for elliptic operators on compact manifolds without boundary. The main theme is only the classical index theorem and some of its applications, but not the subsequent developments and simplifications of the theory.
The book is designed for a complete proof of the \(K\)-theoretic index theorem and its representation in terms of cohomological characteristic classes. In an effort to make the demands on the reader's knowledge of background materials as modest as possible, the author supplies the proofs of almost every result. The applications include the Hirzebruch signature theorem, Riemann-Roch-Hirzebruch theorem, and the Atiyah-Segal-Singer fixed point theorem, etc.
A publication of Hindustan Book Agency; distributed within the Americas by the American Mathematical Society. Maximum discount of 20% for all commercial channels.
Readership: Graduate students and research mathematicians interested in the \(K\)-theoretic index theorem.
Table of contents:
• \(K\)-theory
• Fredholm operators and the Atiyah-Jänich theorem
• Bott periodicity and Thom isomorphism
• Pseudo-differential operators
• Characteristic classes and Chern-Weil construction
• Spin structure and Dirac operator
• Equivariant \(K\)-theory
• The index theorem
• Cohomological formulation of the index theorem
• Bibliography
• Index
|
{"url":"http://cust-serv@ams.org/bookstore?fn=20&arg1=whatsnew&ikey=HIN-64","timestamp":"2014-04-17T02:24:21Z","content_type":null,"content_length":"14599","record_id":"<urn:uuid:f8cad7d5-8a2e-4011-b48a-e356d3e2a2b7>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Error Sigma
Often, when making a measurement, one does not know the "true" value. That's why one has to do measurements in the first place. Therefore, we repeat the experiment many times and we take the mean
value as an approximation for the "true" value (the idea being, of course, that our measurements were rather precise). How large the standard-deviation is tells us something about the accuracy of the
measurements. If we do not miss any systematic effects (for example, a mis-aligned detector or something more subtle, such as the velocity of the earth with respect to the aether
|
{"url":"http://www.physicsforums.com/showthread.php?t=268678","timestamp":"2014-04-16T07:44:34Z","content_type":null,"content_length":"25386","record_id":"<urn:uuid:2751f83b-2810-4fce-b8ee-0f26db40d68a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fractions, Mixed Numbers, And Decimals Practice
Fractions, Mixed Numbers, And Decimals Practice, Problems and Worksheets
AdaptedMind has 29 lessons to help with fractions, mixed numbers, and decimals practice. Move your mouse over a lesson to preview it. Click a lesson to start practicing problems, print worksheets, or
watch a video!
|
{"url":"http://www.adaptedmind.com/categorylist.php?categoryId=12","timestamp":"2014-04-18T10:35:04Z","content_type":null,"content_length":"96814","record_id":"<urn:uuid:5f479298-1db9-463d-be5b-9c16290e1988>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Figure 3.
Multi-IP normalization. This figure shows CHANCE output for the multi-IP normalization module. (a-d) CHANCE produces a summary statement (a-b), a pairwise sample differential enrichment matrix (c-d),
and a graphical representation of the normalization process. The graphical representation gives the same type of plot as in IP strength estimation for each IP sample, as well as the consensus of the
IP samples; see Materials and methods. The summary statement quantifies the graphical representation by giving the statistical significance of the difference of each sample from the consensus. The
differential enrichment matrix computes the percentage of the genome differentially enriched between all pairs of samples, using the same technique for IP-Input comparison used in IP strength
estimation; see Materials and methods. (a,c,e) Multi-IP normalization of H3K4me1, H3K4me2, H3K4me3, and H3K36me3 in human embryonic stem cells (H1 HESCs), from the Broad ENCODE data. (b,d,f) The
capacity of CHANCE multi-IP normalization to detect batch effects. The clustering of technical replicates (denoted by 1 and 2) for each biological replicate (denoted by A and B) seen in (f) is
quantified in the pairwise differential enrichment matrix (d), which shows a statistically insignificant percentage of the genome differentially enriched between replicates but a non-negligible
percentage of the genome differentially enriched between batches.
Diaz et al. Genome Biology 2012 13:R98 doi:10.1186/gb-2012-13-10-r98
|
{"url":"http://genomebiology.com/2012/13/10/R98/figure/F3?highres=y","timestamp":"2014-04-18T09:07:37Z","content_type":null,"content_length":"13013","record_id":"<urn:uuid:e36214e0-ff76-4dbb-a469-fc6a44f9625a>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Analogue studies of nonlinear systems.
Luchinsky, D. G. and McClintock, Peter V. E. and Dykman, M. I. (1998) Analogue studies of nonlinear systems. Reports on Progress in Physics, 61 (8). pp. 889-997. ISSN 0034-4885
Full text not available from this repository.
The design of analogue electronic experiments to investigate phenomena in nonlinear dynamics, especially stochastic phenomena, is described in practical terms. The advantages and disadvantages of
this approach, in comparison to more conventional digital methods, are discussed. It is pointed out that analogue simulation provides a simple, inexpensive, technique that is easily applied in any
laboratory to facilitate the design and implementation of complicated and expensive experimental projects; and that there are some important problems for which analogue methods have so far provided
the only experimental approach. Applications to several topical problems are reviewed. Large rare fluctuations are studied through measurements of the prehistory probability distribution, thereby
testing for the first time some fundamental tenets of fluctuation theory. It has thus been shown for example that, whereas the fluctuations of equilibrium systems obey time-reversal symmetry, those
under non-equilibrium conditions are temporally asymmetric. Stochastic resonance, in which the signal-to-noise ratio for a weak periodic signal in a nonlinear system can be enhanced by added noise,
has been widely studied by analogue methods, and the main results are reviewed; the closely related phenomena of noise-enhanced heterodyning and noise-induced linearization are also described.
Selected examples of the use of analogue methods for the study of transient phenomena in time-evolving systems are reviewed. Analogue experiments with quasimonochromatic noise, whose power spectral
density is peaked at some characteristic frequency, have led to the discovery of a range of interesting and often counter-intuitive effects. These are reviewed and related to large fluctuation
phenomena. Analogue studies of two examples of deterministic nonlinear effects, modulation-induced negative differential resistance (MINDR) and zero-dispersion nonlinear resonance (ZDNR) are
described. Finally, some speculative remarks about possible future directions and applications of analogue experiments are discussed.
|
{"url":"http://eprints.lancs.ac.uk/32025/","timestamp":"2014-04-21T15:27:34Z","content_type":null,"content_length":"19639","record_id":"<urn:uuid:09a6e823-d075-4c28-b20f-5baaa71197d5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Natural Frequencies and Mode Shapes of a Square Plate with Discontinuous Boundary Conditions.
Accession Number : AD0751238
Title : Natural Frequencies and Mode Shapes of a Square Plate with Discontinuous Boundary Conditions.
Descriptive Note : Masters thesis,
Corporate Author : AIR FORCE INST OF TECH WRIGHT-PATTERSON AFB OHIO SCHOOL OF ENGINEERING
Personal Author(s) : Marsh,James E.
Report Date : SEP 1971
Pagination or Media Count : 100
Abstract : The natural frequencies and mode shapes are theoretically determined for a simply supported square plate with discontinuous boundary conditions created by clamping segments of the
boundary. Two different clamping configurations are investigated: (1) partial clamping at the end of one edge, and (2) partial clamping on opposite edges. Satisfying the conditions of clamping leads
to a Fredholm integral equation of the first kind for the first clamping configuration and a system of integral equations for the second configuration. The frequencies are found by approximating the
integral equations with a finite set of homogeneous algebraic equations and insisting that this set have a nontrivial solution. (Author)
Descriptors : (*METAL PLATES, RESONANT FREQUENCY), (*VIBRATION, METAL PLATES), INTEGRAL EQUATIONS, EQUATIONS OF MOTION, CLAMPS, GRAPHICS, MATRICES(MATHEMATICS), COMPUTER PROGRAMS, THESES
Subject Categories : Mechanics
Distribution Statement : APPROVED FOR PUBLIC RELEASE
|
{"url":"http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=AD0751238","timestamp":"2014-04-16T07:50:28Z","content_type":null,"content_length":"3723","record_id":"<urn:uuid:ec1388a4-2b55-49da-96df-43f56c99b380>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Delta and Pine calculator compares seed costs input by cotton variety
What started off as a way to ease “sticker shock” over the cost of Delta and Pine Land Co.'s new DP 555 BG/RR cotton variety ended up being a good tool for figuring an often misunderstood input cost.
The tool, the cost per acre calculator, is available on Delta and Pine Land's Website at www.deltaandpine.com and on CD. According to DPL, it allows users to quickly and easily calculate the cost per
acre associated with different cotton varieties.
When figuring seed costs, many cotton producers figure they can plant roughly 10 pounds of seed per acre or about five acres for each bag of seed. Simple enough, but not an exact science. But with
the cost of seed these days, it definitely pays to know exactly what your seed costs are.
The variable that needs more attention in this turnrow cipher is that seed size can vary from 4,300 seeds per pound to 6,300 seeds per pound. In addition, the number of seeds per bag can vary by as
much as 100,000 seeds. In effect, the actual acres you get from a single bag of seed can vary by as much as 3 acres between small-seeded and large-seeded varieties.
That's why it's important to know your seed costs adjusted for this factor.
To illustrate, the seed calculator shows that a bag of DP 555 BG/RR seed costs $119.95, compared to $70.95 for DP 449 BG/RR. This is the initial sticker shock that growers experienced when inquiring
about DPL's new-generation cotton variety, also known as “Triple Nickel.”
But costs between the two varieties begin to converge in the seed calculator. At 37,000 seeds per acre, DP 555 BG/RR costs $14.10 per acre, compared to $10.09 for DP 449 BG/RR. The reason for this is
that DP 555 BG/RR has the highest number of seeds per bag of all varieties listed, 315,000 seeds per bag, while DP 449BG/RR has 260,000 seeds per bag.
The calculator also shows that at a plant population of 37,000, the grower can plant 8.51 acres with a bag of DP 555 BG/RR, compared to slightly over 7 acres with DP 449 BG/RR.
“We don't think you should be planting pounds anymore,” noted Jim Willeke, vice president of sales and marketing for Delta and Pine Land Co., when asked how the calculator will help growers figure
their seed costs. “You need to think about seeds per foot and seeds per acre.”
To begin, the user puts in his seeds per row foot and row width to calculate his seeds per acre. After entering the latter, the tool then calculates what it actually costs per acre to plant each
variety at that selected plant population. At a population of 37,000 plants per acre, the lowest per acre seed cost was for NuCotn 33B and DP 5690 RR at $9.52, while the highest was 555, at $14.10.
The tool looks at 53 of the top cotton varieties, including DPL competitors, comparing constants such as the average number of seed per bag for each variety, the cost per bag, the average price for
1,000 seeds and average seed count per bag.
Acres planted per bag and cost per acre vary according to the seeds-per-acre figure entered by the user. You can also compare selected varieties in a head-to-head comparison.
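The arithmetic behind the calculator is simple enough to sketch out. The figures below are the ones quoted in this article for DP 555 BG/RR and DP 449 BG/RR; the published cost-per-acre values differ by a cent or so, presumably from rounding inside the calculator itself. This is only an illustration, not DPL's actual tool.

#include <stdio.h>

int main(void)
{
    double seeds_per_acre = 37000.0;      /* planting rate entered by the user */

    /* per-bag figures quoted in the article */
    double cost_555 = 119.95, seeds_555 = 315000.0;   /* DP 555 BG/RR */
    double cost_449 =  70.95, seeds_449 = 260000.0;   /* DP 449 BG/RR */

    /* acres per bag = seeds per bag / seeds per acre;
       cost per acre = cost per bag / acres per bag   */
    double acres_555 = seeds_555 / seeds_per_acre;    /* about 8.51 acres */
    double acres_449 = seeds_449 / seeds_per_acre;    /* about 7.0 acres  */

    printf("DP 555: %.2f acres/bag, $%.2f per acre\n", acres_555, cost_555 / acres_555);
    printf("DP 449: %.2f acres/bag, $%.2f per acre\n", acres_449, cost_449 / acres_449);
    return 0;
}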
“We first put the calculator out there so farmers could better understand the cost of 555,” Willeke said. While the per bag cost of the variety is significantly higher than other varieties, “it's
very cost-effective when you look at the total number of seeds per bag. It's really only about $2 to $3 more per acre than other varieties.”
However, DPL has found a much broader interest in the tool. “Farmers are able to see the actual number of seeds they're planting, the cost and how it impacts them a lot more effectively than ever
before,” Willeke said. “It's a very good tool and it's easy to operate.”
Technology fees are not included in the cost analysis, noted Willeke. “Those fees remain the same on an acreage basis. This just clears up to the farmer what his actual seed costs are.
“It also shows the farmer how expensive it is if he wants to plant at 43,000 seeds per acre, and he over-drops and plants 50,000 instead.”
|
{"url":"http://deltafarmpress.com/print/delta-and-pine-calculator-compares-seed-costs-input-cotton-variety","timestamp":"2014-04-21T08:36:07Z","content_type":null,"content_length":"10079","record_id":"<urn:uuid:3bc65e3b-824e-4d7b-a2cb-4900e7cc35c5>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
|
digitalmars.D.learn - floating point divide
"Damian" <damianday hotmail.co.uk>
I come from a pascal background and we could use:
div integral division operator
/ floating point division operator
So my question is, how does D force floating point division on integer operands?
At the moment i do this, but i was hoping for an easier way:
int n1 = 10, n2 = 2;
float f = cast(float)(cast(float)(n1 / n2));
Oct 11 2012
int n1 = 10, n2 = 2;
float f = (n1+0.0f)/n2;
Casting n1 to float would also work, but I hope the compiler is smart
enough to optimize away the plus expression.
Oct 11 2012
I come from a pascal background and we could use:
div integral division operator
/ floating point division operator
Two operators for the two different operations is a better design than C's single operator, which is bug-prone.
So my question is, how does D force floating point division on integer operands?
At the moment i do this, but i was hoping for an easier way:
int n1 = 10, n2 = 2;
float f = cast(float)(cast(float)(n1 / n2));
That's not good, it performs an integer division, followed by two float casts. Note: float is useful only if you have many of them, or if you pass/return pairs of them. A single float is not so
useful. A solution: int n1 = 10, n2 = 2; const f = n1 / cast(double)n2; Using type inference is useful, as it doesn't hide an integer result if your code is wrong. Bye, bearophile
Oct 11 2012
On Thursday, 11 October 2012 at 15:21:01 UTC, bearophile wrote:
I come from a pascal background and we could use:
div integral division operator
/ floating point division operator
Two operators for the two different operations is a better design than C's single operator, which is bug-prone.
So my question is, how does D force floating point division on integer operands?
At the moment i do this, but i was hoping for an easier way:
int n1 = 10, n2 = 2;
float f = cast(float)(cast(float)(n1 / n2));
That's not good, it performs an integer division, followed by two float casts. Note: float is useful only if you have many of them, or if you pass/return pairs of them. A single float is not so
useful. A solution: int n1 = 10, n2 = 2; const f = n1 / cast(double)n2; Using type inference is useful, as it doesn't hide an integer result if your code is wrong. Bye, bearophile
Ah i see, thankyou for the explanation.
Oct 11 2012
|
{"url":"http://www.digitalmars.com/d/archives/digitalmars/D/learn/floating_point_divide_40091.html","timestamp":"2014-04-19T12:12:24Z","content_type":null,"content_length":"13494","record_id":"<urn:uuid:3cf3234e-7ebf-42aa-a2a7-1e40f3553550>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gardena Algebra Tutor
Find a Gardena Algebra Tutor
...A published writer and active screenwriter, he also coaches students on essays and creative writing. Victor's students have earned acceptances to universities such as Stanford, UC Berkeley,
Boston College, and Yeshiva University, New York. He is also pleased to provide college application guida...
29 Subjects: including algebra 2, reading, chemistry, English
...Scored a 35 on the ACT Reading section in 2006. Privately tutored ACT Math, Writing, & Reading for 2+ years. I personally scored a 35 on the ACT English subsection (35 composite) in 2006.
60 Subjects: including algebra 2, reading, algebra 1, Spanish
...As a professional, I am always seeking new methods and ideas to improve my skills and enhance each student’s learning experience. I believe that every child has the right to learn. The main
purpose of education in the United States is to help students gain knowledge and give everyone an equal opportunity as a means to succeed in life.
23 Subjects: including algebra 1, reading, writing, grammar
I am Jackline, I have a bachelor of Commerce major in Accounting and a high diploma in Education from a very famous university in my country, Egypt. I was a math teacher for 5 years in a very old
and famous language school in Egypt. I am working now and since September 2012 as a Pre-k teacher in St.
5 Subjects: including algebra 1, geometry, grammar, elementary (k-6th)
...Now as a college graduate it is my main source of income. I love being a tutor because it gives me a chance to make a difference in a student's life and help them succeed. In today's
educational system teachers are bombarded with oversized classrooms and often are unable to provide extra attention for the few children who need it.
11 Subjects: including algebra 1, algebra 2, chemistry, biology
|
{"url":"http://www.purplemath.com/gardena_algebra_tutors.php","timestamp":"2014-04-16T19:03:19Z","content_type":null,"content_length":"23767","record_id":"<urn:uuid:46f3ec58-13a9-4cdc-a48a-ca7a7829b85b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
For our purposes, a curve is a possibly curved line showing a relationship between two factors. Corresponding to every value of the variable x we have a value y = f(x ). In programming terms, this is
similar to a method that takes one Double argument and returns a Double.
Curves are also known as functions. However, because the word function is a reserved word in many languages, including Visual Basic .NET, we chose the term Curve to represent functions. In the
documentation, both curve and function may be used.
The Extreme Optimization Mathematics Library for .NET has a simple and intuitive object model for working with curves. You can easily create the most common types of curves, find zeroes and calculate
derivatives. The Extreme Optimization Mathematics Library for .NET currently supports constants, lines, quadratic curves, polynomials and Chebyshev approximations.
In addition, the Extreme Optimization Mathematics Library for .NET implements the notion of a function basis. A function basis is a set of functions that can be combined to form a particular class of
functions or curves. An example of a function basis is the set of monomial functions 1, x, x^2, x^3, which can be combined to form all polynomials up to degree 3. Function bases have applications in
least squares problems and interpolation.
|
{"url":"http://www.extremeoptimization.com/Documentation/Mathematics/Curves/Default.aspx","timestamp":"2014-04-18T05:31:03Z","content_type":null,"content_length":"24987","record_id":"<urn:uuid:fc353963-a016-476a-b53e-135930a020c6>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Use of a coupled mechanical-acoustic computational model to identify failure mechanisms in paper production
Kao, D, Pericleous, Koulis A., Graham, Deryn and Knight, Brian (2006) Use of a coupled mechanical-acoustic computational model to identify failure mechanisms in paper production. In: Proceedings of
the Thirteenth International Congress on Sound and Vibration, ICSV13 , 2-6 Jul 2006, Vienna, Austria.
Full text not available from this repository.
In this paper, a coupled mechanical-acoustic system of equations is solved to determine the relationship between emitted sound and damage mechanisms in paper under controlled stress conditions. The simple classical expression relating the frequency of a plucked string to its material properties is used to generate a numerical representation of the microscopic structure of the paper, and the resulting numerical model is then used to simulate the vibration of a range of simple fibre structures when undergoing two distinct types of damage mechanisms: (a) fibre/fibre bond failure, (b) fibre failure. The numerical results are analysed to determine whether there is any detectable systematic difference between the resulting acoustic emissions of the two damage processes. Fourier techniques are then used to compare the computed results against experimental measurements. Distinct frequency components identifying each type of damage are shown to exist, and in this respect theory and experiments show good correspondence. Hence, it is shown that although the mathematical model represents a grossly simplified view of the complex structure of the paper, it nevertheless provides a good understanding of the underlying micro-mechanisms characterising its properties as a stress-resisting structure. Use of the model and accompanying software will enable operators to identify approaching failure conditions in the continuous production of paper from emitted sound signals and take preventative action.
|
{"url":"http://gala.gre.ac.uk/959/","timestamp":"2014-04-20T16:37:49Z","content_type":null,"content_length":"30297","record_id":"<urn:uuid:5f5f7338-182b-40ea-82a0-2d8a910e3e65>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ARSC T3D Users' Newsletter 9, October 21, 1994
List of Differences Between T3D and Y-MP
The current list of differences between the T3D and the Y-MP is:
1. Data type sizes are not the same (Newsletter #5)
2. Uninitialized variables are different (Newsletter #6)
3. The effect of the -a static compiler switch (Newsletter #7)
4. There is no GETENV on the T3D (Newsletter #8)
5. Missing routine SMACH on T3D (Newsletter #9)
6. Different Arithmetics (Newsletter #9)
I encourage users to e-mail in differences that they have found, so we all can benefit from each other's experience.
Missing Routine SMACH on the T3D
While coding up some exercises for the T3D class, I found an old friend missing. The routine SMACH was added to the BLAS routines when LINPACK came out in 1978. The routine interrogates the machine
it's running on and can return three constants that describe the machine's arithmetic: EPS, TINY, HUGE. From the smach.f documentation we have:
c assume the computer has
c b = base of arithmetic
c t = number of base b digits
c l = smallest possible exponent
c u = largest possible exponent
c then
c eps = b**(1-t)
c tiny = 100.0*b**(-l+t)
c huge = 0.01*b**(u-t)
Basically we have:
EPS is this definition of the machine epsilon
TINY is close to the smallest machine representable number
HUGE is close to the largest machine representable number
The Fortran version of the routine is available from netlib (anonymous ftp site: netlib2.cs.utk.edu).
But there is a better replacement for SMACH of LINPACK and that is SLAMCH of LAPACK. This routine, SLAMCH, is part of the T3D libraries. LAPACK is meant to be a replacement for the older linear
algebra libraries LINPACK and EISPACK. SLAMCH works similar to SMACH by computationally interrogating the machine it's running on and returns a machine arithmetic constant. From the slamch.f
documentation we have:
* SLAMCH determines single precision machine parameters.
* Arguments
* =========
* CMACH (input) CHARACTER*1
* Specifies the value to be returned by SLAMCH:
* = 'E' or 'e', SLAMCH := eps
* = 'S' or 's , SLAMCH := sfmin
* = 'B' or 'b', SLAMCH := base
* = 'P' or 'p', SLAMCH := eps*base
* = 'N' or 'n', SLAMCH := t
* = 'R' or 'r', SLAMCH := rnd
* = 'M' or 'm', SLAMCH := emin
* = 'U' or 'u', SLAMCH := rmin
* = 'L' or 'l', SLAMCH := emax
* = 'O' or 'o', SLAMCH := rmax
* where
* eps = relative machine precision
* sfmin = safe minimum, such that 1/sfmin does not overflow
* base = base of the machine
* prec = eps*base
* t = number of (base) digits in the mantissa
* rnd = 1.0 when rounding occurs in addition, 0.0 else
* emin = minimum exponent before (gradual) underflow
* rmin = underflow threshold - base**(emin-1)
* emax = largest exponent before overflow
* rmax = overflow threshold - (base**emax)*(1-eps)
So all of the functionality of SMACH and more is available in SLAMCH, which is available in the T3D libraries.
Arithmetic Differences Between Y-MP and T3D
With the above description of SMACH and SLAMCH, now is a good opportunity to describe some of the differences in the machine arithmetic between the Y-MP and the T3D. The machine arithmetic for the
Y-MP was determined more than 20 years ago during the design of the CRAY-1. Since then, a lot has happened in machine arithmetic, most notably the publication of the IEEE Floating Point Standard 754
in 1981 and its acceptance in 1985. The Dec Alpha chip is implemented according to this new standard.
Using the above two functions we can see some of the differences between the arithmetics on each machine:
from SMACH Y-MP T3D
eps = b**(1-t) 7.105427357601E-15 2.22044604925031308E-16
tiny = 100.0*b**(-l+t) 1.2902840147914E-2450 4.00833672001794556E-290
huge = 0.01*b**(u-t) 7.7502316430824E+2449 2.49480038691839982E+289
from SLAMCH
eps = 7.105427357601E-15 1.11022302462515654E-16
sfmin= 3.6672077351097E-2466 2.22507385850720138E-308
base = 2. 2.
prec = 1.4210854715202E-14 2.22044604925031308E-16
t = 48. 53.
rnd = 0. 1.
emin = -8192 -1021
rmin = 0. 2.22507385850720138E-308
emax = 8190 1024
rmax = R 1.79769313486231571E+308
All of these numbers were gotten from the Fortran versions available from netlib and printed out with a Fortran statement like:
print *, SMACH( 1 )
All of the results from the Fortran version agreed with the answers from libsci on both machines. This is just part of the story of the numerical differences between the Y-MP and the T3D. We will
discuss these numerical differences more in future newsletters.
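As a rough illustration of the kind of computational "interrogation" SMACH and SLAMCH perform, the machine epsilon can be found by repeated halving until 1 + eps is no longer distinguishable from 1. This sketch (plain C, IEEE double) prints 2.22...E-16, matching the T3D column above; libsci's SLAMCH of course does considerably more than this.

#include <stdio.h>

int main(void)
{
    volatile double one_plus;   /* volatile guards against extra register precision */
    double eps = 1.0;
    do {
        eps /= 2.0;
        one_plus = 1.0 + eps;
    } while (one_plus > 1.0);
    eps *= 2.0;                 /* last eps for which 1 + eps was still > 1 */
    printf("machine epsilon = %.17E\n", eps);
    return 0;
}

Roughly analogous loops of repeated halving and doubling, watched for underflow and overflow, give TINY and HUGE.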
PVM Between T3D and Y-MP
Last week I requested experience from our users on using the PVM encoding PvmDataRaw between the Y-MP and T3D. I have tried several test programs and the experience I have on available encodings is:
Between T3D and T3D:
PvmDataDefault - slowest, loses precision
PvmDataRaw - fast
PvmDataInplace - fastest, but user needs to synchronize
Between T3D and Y-MP:
PvmDataDefault - the only one available
After I finish teaching the T3D class, I'll report some timings for each of the above encodings.
ARSC Course Announcement
Title: Applications Programming On The Cray T3D Dates: October 31, November 1,2 Time: 9:00 AM - 12:00 M, 1:00 - 5:00 PM Location: University of Alaska Fairbanks main campus, room TBA Instructor: Mike
Ess, Parallel Applications Specialist (ARSC) Course Description: An introduction to parallel programming on CRAY T3D Massively Parallel Processor (MPP). Data-sharing, work-sharing, and
message-passing programming techniques will be described. The primary goal is to provide practical experience in implementing techniques that enhance overall code performance. These include
performance tools for bottleneck determination, debugging and code development tools, and parallel processing principles. This class will have directed lab sessions and users will have an opportunity
to have their applications examined with the instructor. Intended Audience: Researchers who will be developing programs to run on the CRAY T3D Massively Parallel Processor (MPP). Prerequisites:
Applicants should have a denali userid or be in the process of applying for a userid. Applicants should be familiar with programming in Fortran or C on a UNIX system.
Application Procedure
There is no charge for attendance, but enrollment will be limited to 15. In the event of greater demand, applicants will be selected by ARSC staff based on qualifications, need and order of
Send e-mail to consult@arsc.edu with the following information:
• course name
• your name
• UA status (e.g., undergrad, grad, Asst. Prof.)
• institution/dept.
• phone
• advisor (if you are a student)
• denali userid
• preferred e-mail address
• describe programming experience
• describe need for this class
Request for References
I received a request for references that describe the T3D hardware that were not from CRI. The only one I know is a good two page description in:
Advanced Computer Architecture, Kai Hwang, McGraw-Hill Inc., 1993
If you know of any other good publications about the T3D, e-mail the reference on to me and I'll send it on.
Ron Renfrew, the ARSC Analyst in Charge from CRI passed along this quote:
Quote of the week, due to David Clark of MIT: "There is an old network saying: Bandwidth problems can be cured with money. Latency problems are harder because the speed of light is fixed--you can't
bribe God."
We can expand this list to include both networks and computers:
Solved with money To be solved by God
----------------- -------------------
more bandwidth latency beyond the speed of light
more memory infinite precision arithmetic
perfectly linear speedups
bug free code
Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
E-mail Subscriptions: Archives:
Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.
|
{"url":"http://www.arsc.edu/arsc/support/news/t3dnews/t3dnews09/index.xml","timestamp":"2014-04-19T17:09:28Z","content_type":null,"content_length":"21410","record_id":"<urn:uuid:ed3d06ea-f3ae-4eb9-a601-efdefc680e8c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Whole Numbers to 100
Compare Whole Numbers to 100
Materials: number line and number cards 0–10; place value blocks (tens rods and ones blocks) or beans and beansticks; overhead projector and overhead materials for place value blocks, if available
Preparation: Put a number line on the board or on the floor. Prepare a large card for each symbol >, <.
Prerequisite Skills and Concepts: Children should be able to count and write numbers 0–10 and should be familiar with ordering numbers 0–10.
• Say: Yesterday we talked about numbers that come just before, just after, and between. We also talked about numbers that are greater than or less than other numbers. Today we are going to use a
number line to help find the numbers.
• Ask: Would someone come up and choose a number and put it where it belongs on the number line? How did you know where to put it?
Elicit from the child that he/she looked at the number, at the number line, and then counted up (or back) to find where to place it. Then have another child choose another number.
• Ask: Is this number greater than or less than the first number? How do you know?
Elicit from children the correct answer and whether it comes before, and is less than the first number, or comes after, and is more than the first number.
• Continue this activity until children have an understanding of greater than and less than. Then introduce working with numbers to 20. Use beans and beansticks or place value blocks.
• Say: Now let's try numbers that are greater than and less than using tens and ones place value blocks (or beans and beansticks).
Discuss with children the value of each manipulative. Have children line up ten blocks (or beans) beside the tens rod (or beanstick).
• Ask: How many ones are in the tens rod?
Elicit that there are 10.
• Say: Show me how to make 7. Save that. Now show me how to make 12.
• Check children's work. If you have an overhead and rods and cubes, show the children both numbers.
• Ask: Which number is greater than the other number? (12) How do you know?
Children should say the number with 1 ten is greater than the number without any ten.
• Say: Now make the numbers 11 and 16 with place value blocks.
• Ask: Which number is less than the other number? (11) How do you know?
Children should say that both numbers have 1 ten, but eleven has only 1 one and 16 has 6 ones. The number with fewer ones is less.
• Say: Now make the numbers 10 and 20 with place value blocks.
• Ask: Which number is greater than the other number? (20) How do you know?
Children should say that both numbers have tens, but no ones, and that 10 has 1 ten and 20 has 2 tens, so 20 is larger.
• Ask: How do you know when a number is greater than another number?
Children should say that if one number has more tens, that number is greater. If both numbers have the same number of tens, and one number has more ones, that number is greater.
• Finally, introduce the symbols >, <. Write 5 > 2 and 3 < 9 on the board. Show children how the larger opening of each sign is closest to the greater number, and the smaller, pointy end of both
signs points to the smaller number. Allow time for volunteers to come to the board and replace the numbers with other numbers, making sure that the numbers are placed correctly. Then have
children use large number cards and the large > and < symbol cards to act out comparing numbers with the symbols.
• Once children have become proficient in comparing numbers 0–20, use these activities with greater numbers.
Wrap-Up and Assessment Hints
There is a lot of material covered in this unit and the different skills need lots of practice. As you wrap-up, reinforce the vocabulary, making certain the children have a thorough understanding of
the concepts presented. Make sure that children are proficient in one set of numbers before introducing larger numbers.
Compare Whole Numbers to 100
Order Whole Numbers to 100
|
{"url":"http://www.eduplace.com/math/mathsteps/1/c/1.compare.develop1.html","timestamp":"2014-04-19T07:31:43Z","content_type":null,"content_length":"9184","record_id":"<urn:uuid:e1806bdd-cf14-4bdb-bbcc-c01a21869d6c>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
|
263 Final Projects
CS263 Final Projects
Spring 1996
Abstract: I propose to implement a new scheme for describing reactive embedded controllers. Based on the synchronous model of time used in the Esterel language, it will support building
heterogeneous systems composed of blocks described in other models of computation. The semantics of this domain is a succession of least fixed-points of continuous functions defined on a pointed
CPO that ensures determinism. I will implement this domain in the Ptolemy prototyping environment.
Abstract: Matlab, a numerical computation language, has become an increasingly popular prototyping tool for scientist and engineers. One of the main reason for its popularity is its simplicity.
Matlab has only one data type, matrix, and all functions are build on top of matrix or matrix operations. As a result, one no long need to think or express program at a level of each array
element or array index. Instead, the programmer is allowed to focus on the mathematical meaning of the algorithms, and write codes that are not only short but also very readable, almost
pseudo-code like. However, as a language designed mainly for numerical computation, it is very hard to do any symbolic computation that uses data structures. In this project, we are looking at
ways of extending Matlab to handle symbolic computations, while still maintaining its simplicity. Our main idea is to add one more data type, graph, into Matlab. We feel graph, instead of
pointer, is the right level of abstraction for expressing symbolic computations, since most of data structures or symbolic algorithms are expressed in turn of graph or special form of graph such
as trees, or lists. The problem now is to define the right set of build-in functions, and constructs that allow one to express large set of symbolic computation easily in this new language.
Abstract: My project is the design of a specification language which describes the semantics of a graphical user interface. The language, LRENE, gives the programmer a syntax with which to
describe common GUI tasks (e.g., displaying editable views of objects), control (e.g., disabling menu items as a result of events), and structure (e.g., similarity between views). LRENE promotes
UI consistency by making consistent interfaces easier to build, rather than harder. Because it is a high-level description language, LRENE allows more effort to be spent on GUI design by
supporting rapid prototyping and consistent semantics.
|
{"url":"http://www.cs.berkeley.edu/~yelick/263/projects.html","timestamp":"2014-04-21T04:39:31Z","content_type":null,"content_length":"5435","record_id":"<urn:uuid:168d1af5-2918-4ef7-92d7-fffd1b44a05f>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
|
To find statements of attendance policy, academic integrity, how to compute your current grade, grade appeals, etc., click here.
Click on each topic title to download the notes for that topic.
Number Systems: Integers, countable infinity, induction, rationals, irrationals, the ordered ring of real numbers, suprema and infima, the least upper bound and Archimedean properties, decimal
representation, uncountable infinity of real numbers, basic inequalities, complex numbers, roots of unity.
Topology of the Real Axis and the Complex Plane: Basic set theory, Cartesian producs, Axiom of Choice, open and closed intervals, open and closed balls, open and closed sets, cluster points,
interiors, closures, boundaries, bounded sets, connected sets, open subsets of the real line, Cantor set.
Limits: Functions, domains, ranges, onto and one-to-one functions, inverses, infinite sequences and their limits, convergence and divergence, Cauchy sequences, bounded sequences and cluster points,
monotonic sequences, operations on limits, some special limits, limsup and liminf, completeness of real numbers, construction of the real numbers, limits of functions, left-hand and right-hand limits.
Continuity: Continuity at a point, limits and continuity, continuous functions, examples of continuous functions, operations on continuous functions, continuity of composite functions, continuous
preimages of open and closed sets, compactness, Bolzano-Weierstrass and Heine-Borel theorems, uniform continuity, continuous images of compact intervals, Lipschitz and Hölder continuity,
discontinuities, monotonic functions and their inverses.
Differentiation: Derivative at a point, slope of the tangent, differentiable functions, derivatives of elementary functions, differentiability and continuity, calculus of derivatives, derivatives of
composite functions and the chain rule, mean value theorem, derivatives and monotonicity, derivatives of inverse functions, higher derivatives, maxima and minima of functions, lack of
differentiability at a point, intermediate value theorem, L'Hospital's rule.
Riemann Integral: Riemann and Darboux sums, partitions and refinements, the Riemann integral, integrals of continuous functions, operations on integrals, integrals of the absolute value, integrals
over adjacent intervals, mean value theorem, antiderivatives and the fundamental theorem of calculus, change of variable, integration by parts, integrability of piecewise-continuous functions, a
nonintegrable function, logarithm and exponential functions, hyperbolic functions and their inverses, integration methods for rational functions, improper integrals, gamma function.
Numerical and Functional Series: Taylor's formula and Taylor series, Lagrange's and Cauchy's remainder, Taylor expansion of elementary functions, indefinite expressions and L'Hospital rule, numerical
series, Cauchy's criterion, absolute and conditional convergence, addition and multiplication of series, functional sequences and series, pointwise and uniform convergence, Weierstrass test,
integration and differentiation of functional series, power series and radius of convergence, complex exponentials, Weierstrass Approximation Theorem.
Class notes from the Fall semester of 2013 are deposited here.
Here is another set of notes, written up by Joshua Sauppe.
The following textbooks contain (some of the) material presented in this course:
T. M. Apostol, Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra, Wiley.
T. M. Apostol, Mathematical Analysis, Second Edition, Addison-Wesley.
A. Browder, Mathematical Analysis: An Introduction, Springer-Verlag.
R. C. Buck, Advanced Calculus, Waveland.
R. Courant, Differential and Integral Calculus, Vol. 1, Springer-Verlag.
R. Courant and F. John, Introduction to Calculus and Analysis, Vol. 1, Springer-Verlag.
S. Lang, Undergraduate Analysis, Springer-Verlag.
R. Larson, R. P. Hostetler, and B. H. Edwards, Calculus: Early Transcendental Functions, Houghton-Mifflin.
J. E. Marsden and M. J. Hoffman, Elementary Classical Analysis, W. H. Freeman.
M. Rosenlicht, Introduction to Analysis, Dover.
W. Rudin, Principles of Mathematical Analysis, McGraw-Hill.
G. E. Shilov, Elementary Real and Complex Analysis, Dover.
M. Spivak, Calculus, Publish or Perish.
J. Stewart, Calculus: Early Transcendental Functions, 2 Vols., 5th Ed., Brooks-Cole.
R. S. Strichartz, The Way of Analysis, Jones and Bartlett.
V. R. Zorich, Mathematical Analysis I, Springer-Verlag.
Comments and recommendations about textbooks
I have ordered the book by Strichartz, because it has a very intuitive approach and presents important results from a relatively practical point of view. If you want to have two books, buy the one by
Rosenlicht or the one by Shilov, because they are cheap. My favorite is the book by Courant and John. It is a genuine classic, and is unsurpassed in conveying the true understanding of mathematical
analysis. Very often, I will follow the material from this book. The reason I did not order it is because it is expensive. If you want to buy it, it may be cheaper to get it used from amazon.com or
abebooks.com. The book by Courant alone is older and a bit more calculusy version of the book by Courant and John. The books by Shilov and Zorich are translations of Russian books, and are also very
intuitive, connected to physics, and user friendly. The book by Rudin has great exercise problems, and I will assign many of them in the homework. It is often used as the standard Mathematical
Analysis text. Most of the other books not mentioned explicitly are some of the better standard mathematical Analysis textbooks. Finally, we used either the book by Larson & Co. or the book by
Stewart in our Calculus sequence, depending on when you took Calculus. I strongly recommend either for reviewing elementary material.
|
{"url":"http://homepages.rpi.edu/~kovacg/classes/analysis1/420.html","timestamp":"2014-04-17T21:26:03Z","content_type":null,"content_length":"10717","record_id":"<urn:uuid:1e3ee0bc-e0f7-49b8-9b37-19b89a1b65d6>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Formatted like a standardized test, this 8 page document tests math skills such as patterns, rounding, symmetry, division, probability, area, graph reading, congruence, estimation, shapes and
more. These are approximately the skills tested in third grade, although state standards vary widely. Common Core: 4.NBT.A.3
Students create two multiplication and two division facts to describe an array. Two sets of ten arrays.
Whole, halves, thirds, fourths, and eighths. Six pages, four cards to a page. Common Core: Fractions: 3.NF.A.1
Students determine the rule for input/output tables, and complete the table using the rule.
Six pages of money vocabulary flashcards, four per page. Use for money recognition, vocabulary, games.
Formatted like a standardized test, this 7 page document tests math skills such as addition, multiplication, units of measurement, percentages, shapes, division, area, perimeter, and more. These
are approximately the skills tested in fifth grade, although state standards vary widely.
Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3
Formatted like a standardized test, this 10 page document tests math skills such as addition, multiplication, percentages, shapes, division, probability, graph-reading, area, perimeter, units of
measurement, place value and more. These are approximately the skills tested in fourth grade, although state standards vary widely.
• Make 8 copies of the kites and bows. Write an "answer" to a math problem on each kite. Write various combinations of math problems (that match up with each answer) on the bows.
Seven pages of practice worksheets, multiple choice answers. This is formatted for testing practice.
A page explaining rulers as a standard measurement, followed 5 pages of pictures to measure (or pictures to draw to the right size) with an inch ruler.
Formatted like a standardized test, this 9 page document tests math skills such as addition and subtraction, chart- and graph-reading, place value and more. These are approximately the skills
tested in second grade, although state standards vary widely.
Formatted like a standardized test, this 6 page document tests math skills such as place value, addition, subtraction, rounding, multiplication, division and more. These are approximately the
skills tested in third grade, although state standards vary widely. Common Core: 4.NBT.A.3
Formatted like a standardized test, this 6 page document tests math skills such as addition, place value, subtraction, multiplication, patterns, shapes, and more. These are approximately the
skills tested in second grade, although state standards vary widely.
• This series of practice tests replicates the style of many standardized tests. Levels 1 and 2 use bubbles, so you may want to print our "Testing Practice Answer Sheet". Use these tests to
ascertain your students' knowledge of shapes, counting, addition, patterns, graph-reading and more, or to help them understand this testing style. Six pages of tests.
Use this simple bar graph to determine how many trash items (categorized by type) were picked up.
• Practice number recognition (to 12), graphing and cooperation with this fun "fishing" activity.
|
{"url":"http://www.abcteach.com/directory/subjects-math-19-8-4","timestamp":"2014-04-21T07:31:40Z","content_type":null,"content_length":"98697","record_id":"<urn:uuid:87f0bf5d-0e70-4f97-bc0a-afd0a040a56f>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Seventeen or Bust Nixes Three Sierpinski Candidates
from the barely-legal dept.
Craigj0 writes
"In just 8 days Seventeen or bust has removed three Sierpinski candidates after people have been trying for years. Seventeen or bust is a distributed attack on the Sierpinski problem. You can find
the first two press releases here(1) and here(2), the third is still to come. More information about Sierpinski numbers can be found here. Finally they could always use some more people so join!"
• Speaking of Sierpinski... (Score:4, Interesting)
by jonnyfish (224288) <jonnyfish@nospam.gmail.com> on Saturday December 07, 2002 @12:04PM (#4832698) Journal
Way back when I was first learning DirectDraw (shudder), I wrote a program to generate pixel colors based on x, y, and time (ticks) values. I played with trig functions a little bit, trying to
generate some plasma effects. Eventually I got bored of that and tried some operations using the bitwise operators and, amazingly, I got it to generate the Sierpinski triangle using something
like 2 or 3 bitwise ands per pixel. I say amazingly because all I know about Sierpinski triangles is the name, so I generated it completely by accident. Needless to say, I was very surprised the
first time I saw it.
□ by BoojiBoy0 (596932)
What you made was a simple two state cellular automaton.... here's a link: http://www.wolframscience.com/preview/nks_pages/?N KS0263.gif No No Not Wolfram! (don't forget to remove the space)
☆ Anyways... Oddly enough I found an extra jab at Wolfram at the bottom of this story's link: The math links on this site go to Eric Weisstein's World of Mathematics, hosted by Wolfram
Research, makers of Mathematica. (As a sidenote, Wolfram's headquarters are located a stone's throw from the University of Illinois campus.) Eric makes us wonder why the rest of us are
allowed to waste his oxygen; but in any case, we thank him.
☆ by jonnyfish (224288)
I still don't know how I did it :P
□ by ymgve (457563)
It's a really easy effect to create - take the X coordinate, then AND with the Y coordinate. Use one colour for 0 and another for everything else, and you've got yourseif a Sierpinski
triangle! Try it :)
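That really is the whole trick — pixel (x, y) is "on" exactly when x AND y is zero, one bitwise AND per pixel. A tiny text-mode version (any language works; this one happens to be C):

#include <stdio.h>

int main(void)
{
    int x, y;
    for (y = 0; y < 32; y++) {
        for (x = 0; x < 32; x++)
            putchar((x & y) == 0 ? '*' : ' ');   /* one bitwise AND per "pixel" */
        putchar('\n');
    }
    return 0;
}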
• More information on the Sierpinski problem (Score:5, Informative)
by cyberlemoor (624985) on Saturday December 07, 2002 @12:18PM (#4832768)
..can be found here [prothsearch.net] (a slightly more detailed explanation than the one at the link the author gives).
• by bons (119581)
It was the magical "collapse sections" button.
WTF that does I'll never know, but it needed to be checked. Maybe that's why there are almost no replies to this thread...
• How to prove this? (Score:5, Interesting)
by Scarblac (122480) <slashdot@gerlich.nl> on Saturday December 07, 2002 @04:57PM (#4834161) Homepage
I've read around a bit now, I've even installed their client (wasn't currently doing any other distributed stuff, so why not), but I still don't understand the math well. I understand you can
prove a number k is not a Sierpinski number by finding an n so that k*2^n+1 is prime. The lowest known Sierpinski number is 78557. There are now only 14 lower numbers left for which there's no
fitting n found yet, and they're searching for them.
Now what I don't understand is how Sierpinski-ness can be proven, how they know there's not some huge n that makes 78557*2^n+1 prime after all; and I can't find the info. There's a class of
numbers that are Sierpinski by construction (apparently) but they are much higher than this one. I guess there's no quick easy answer, I just have to read the literature, and I'm not going to...
There are too many contrived number properties out there, and too much other stuff to do :)
□ Re:How to prove this? (Score:3, Informative)
by iltzu (22928)
Now what I don't understand is how Sierpinski-ness can be proven, how they know there's not some huge n that makes 78557*2^n+1 prime after all; and I can't find the info.
Here's some info [astate.edu], though the exact construction of the proof isn't give. Apparently, it's possible to prove that for any n, 78557*2^n+1 is divisible by one of a finite (and quite
small) number of primes. As to how, ask the guy who proved it...
☆ For people who have not done lots of number theory ( I have done some and this is mostly over my head), A very rough analogy: you know any linear algebra? what a vector space is? a bunch
of (simple) objects (the basis set) can describe any object (eg a line) in the (vector) space. Partly what the Sierpinski test is about is finding a bunch of primes that can be used to
describe all numbers of the forms k.2^n+1. I could be wrong - if so please correct me. These suckers fascinate me (as do all prime number problems)
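For what it's worth, the covering-set idea is easy to check numerically. The set usually cited for 78557 is {3, 5, 7, 13, 19, 37, 73}: every number 78557*2^n+1 is divisible by at least one of them, so no term can ever be prime. The sketch below only verifies this for a range of n (the actual proof uses the periodicity of 2^n mod p), and the prime set is the commonly quoted one rather than something derived here.

#include <stdio.h>

int main(void)
{
    const unsigned long cover[] = {3, 5, 7, 13, 19, 37, 73};
    const int np = sizeof(cover) / sizeof(cover[0]);
    int n, i, j;

    for (n = 1; n <= 1000; n++) {
        int covered = 0;
        for (i = 0; i < np && !covered; i++) {
            unsigned long p = cover[i], pow2 = 1;
            for (j = 0; j < n; j++)          /* 2^n mod p, no big integers needed */
                pow2 = (pow2 * 2) % p;
            if ((78557UL % p * pow2 + 1) % p == 0)
                covered = 1;
        }
        if (!covered) {
            printf("n = %d escapes the covering set!\n", n);
            return 1;
        }
    }
    printf("78557*2^n + 1 hit a covering prime for every n up to 1000\n");
    return 0;
}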
□ Re:How to prove this? (Score:3, Insightful)
by Omkar (618823)
I don't know (I'm a high school student), but you can use a technique (math induction) to prove the divisibility of numbers. For example:
8^n - 1 = 7m where for any n m,n are +ve integers
the statement is true for n=1 (trivial)
assume it holds for n = some +ve int. k.
8^k - 1 = 7s
Consider the next case:
8^(k+1) - 1
= 8(8^k) - 1
= 8(7s +1) -1
= 56s + 7
= 7(8s+1) clearly divisible by 7.
=>The assertion holds for n=k+1 if it holds for n=k So since n = 1 holds, the assertion holds for all +ve int. n. I'm sure that the techniques these guys use are far more complicated and
sophisticated, but it is possible to prove things like that.
• Heh (Score:2)
by helix400 (558178)
Quote for the article:
So if you've been thinking that you can't find a prime without multiple computers that are always on, this just goes to show you obviously can.
Ya! Way to rub it in their faces! I mean, what kind of idiots didn't already understand that you can find prime numbers on multiple computers...especially computers that are on!
That'll teach 'em...
|
{"url":"http://science.slashdot.org/story/02/12/07/1424213/seventeen-or-bust-nixes-three-sierpinski-candidates?sdsrc=nextbtmprev","timestamp":"2014-04-21T15:04:16Z","content_type":null,"content_length":"96491","record_id":"<urn:uuid:2110342e-4b99-460d-901e-0d1335761770>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find the limit by direct substitution.
A. lim x-> 5
square root cubed of x^2 -2x - 23. I replaced all of the x's with 5 and got -2.
B. lim x-> -2
(4x^2 - 7x) / (6x + 10). I replaced all of the x's with -2 and got -15. Here's a link to the problem. It's 3A and 3B. Thanks.
http://pic20.picturetrail.com/VOL1370/5 ... 705398.jpg
rowdy3 wrote:A. lim x-> 5
square root cubed of x^2 -2x - 23.
From what you have posted (in comparison with what the image shows), they appear to have missed teaching you a few topics back in algebra. The "square root, cubed, of [whatever]" is (sqrt[whatever])^3. The exercise actually involves a cube root.
To learn about radicals (and you really need to know them, before calculus!!), try here.
Since the exercises require only that you evaluate the expressions (the "limit" concept isn't actually involved), I'm not sure what your question is...? If you are asking for confirmation of the
calculator values, yes, the values are correct.
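For what it's worth, a quick numeric check of both substitutions (plain Python; the real cube root of the negative value is taken by hand, since a fractional exponent on a negative float would go complex):

inside = 5**2 - 2*5 - 23                       # = -8
cube_root = -((-inside) ** (1/3)) if inside < 0 else inside ** (1/3)
print(cube_root)                               # approximately -2.0
print((4*(-2)**2 - 7*(-2)) / (6*(-2) + 10))    # -15.0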
|
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=9&t=1289&p=3933","timestamp":"2014-04-17T01:00:34Z","content_type":null,"content_length":"19565","record_id":"<urn:uuid:a3cfbe79-eb44-4e72-b9cb-9f214e3da2a8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ulam's Spiral: Has it been explained or not? - Straight Dope Message Board
Originally Posted by AHunter3
Why is an "explanation" needed? You're not graphing random numbers, you're graphing primes (or, more specifically, you're graphing the places that primes appear in the stream of integers). Why should
it not form a pattern, simply because it does not form a simple cyclical/repetitive pattern? Anyone here ever read about chaos theory? Chaotic phenomena (i.e., phenomena not conforming to the kind of
pattern that lets you predict the next occurrence) does tend to have patterns. Static in phone lines. Distribution of curves in shorelines. Etc. Old news at this point.
"Why is an explanation needed?" -- Well, because that's what mathematicians
: try to figure out why various mathematical structures exist. In this case in particular, mathematicians don't really know much about the distribution of primes, and they'd really like to know more.
Some of the most famous unsolved mathematical conjectures involve primes: more specifically, primes and addition. Primes and prime factorizations, being defined multiplicatively, behave nicely under
multiplication, but become very confusing when addition is involved. For example: Goldbach's conjecture (Every even number greater than 2 is the sum of two primes), the twin-primes conjecture (there
exist an infinite number of prime pairs (p,p+2)), and the Riemann hypothesis (the nontrivial zeroes of the Riemann zeta function have real part 1/2--this is related in a somewhat subtle way to the
distribution of primes) all are of this type.
Just calling this "chaos theory: old news" is too dismissive. The thing about chaos theory is that, although its name might suggest otherwise, it's really quite well understood. Once the governing
equations of the system are known, they can be analyzed in a pretty straightforward way to see if the system is expected to be chaotic or not. For chaotic systems like the ones you mention above,
there are dynamical models that predict the onset of chaotic behavior as various parameters are tweaked. (These can be more or less accurate depending on the fidelity of the underlying model, but we
think we understand more or less why things become chaotic.)
But in this case we haven't got any good ideas what the underlying "dynamic" equations might look like. (Of course, the primes are not a dynamic system at all, but you might look at the sequence of
primes, or the prime differences p_{n+1} - p_n, or maybe the prime differences divided by ln(n) to normalize out the obvious growth, and consider these to be samples from a Poincare section of some prime-namical system.) Without knowing what the
equations are, there's no way to honestly call this a "chaotic" system, since we don't have any way of knowing whether what we're seeing is really the same lack of order as the lack of order seen in
|
{"url":"http://boards.straightdope.com/sdmb/showthread.php?t=329904","timestamp":"2014-04-16T13:52:39Z","content_type":null,"content_length":"222256","record_id":"<urn:uuid:261810ff-35a5-42eb-9679-865f6e0b6901>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
|
High School Mathematics - V
1. Find the Least Common Multiple of 3/4, 5/6 and 1/3.
2. Find the smallest number which when divided by 25, 40 or 56 has in each case 13 as the remainder.
3.A class has 20 boys and 30 girls. The average age of boys is 11 years and of girls is 12 years. What is the average age of the whole class?
4. What is 0.312312312....expressed as a fraction?
5. There are three partners in a business. Out of these three, the capital of first is twice that of second and the capital of second is twice the capital of the third. If at the end of the year, the
profit is $10,500, what is the share of each?
6. The volume of a gas varies inversely as the pressure when the temperature is constant. When the pressure is 20, the volume is 30; what is the volume when the pressure is 15?
7. By selling an article for $240, a trader loses 4%. In order to gain 10%, what price must he sell the article for?
8. Find the sum which is given on 5% per annum compound interest for 3 years amounting to $9261.
Character is who you are when no one is looking.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=86120","timestamp":"2014-04-18T23:18:59Z","content_type":null,"content_length":"9138","record_id":"<urn:uuid:37d44a1b-b08e-44f9-9c15-e09a678b7f5b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Place: Room N718, 12-1pm.
Spring 2010 Schedule
□ Sam Coskey (CUNY):
Date: March 9, 2010
□ Victoria Gitman :
Date: March 16, 2010
Title: Nonstandard Models of the Peano Axioms: Why not expect an iPhone app for them?
Abstract: The period of 16th-19th century saw some of the greatest advances in Number Theory with the work of Euler, Gauss, Fermat, etc. But unlike Euclid in Elements, these mathematicians
did not work under the constraints of a formal axiomatic system. It was not until the late 19th century that a fundamental shift in thinking marked a return to the formal mathematics of
Euclid from two millennia earlier. In 1890, Giuseppe Peano proposed a formal axiomatization of number theory - the Peano Axioms. The Peano Axioms consisted of the fundamental properties of
addition and multiplication together with induction. Together with most mathematicians of his time, Peano believed the natural numbers to be the unique structure satisfying the Peano Axioms
and thought that every number theoretic property was provable from them. They turned out to be wrong on both accounts. In the early 20th century, it was shown that there are nonstandard
models of the Peano Axioms – structures satisfying the Peano Axioms but not isomorphic to the natural numbers. In 1930’s, came the final blow as Gödel showed, in his famous Incompleteness
Theorem, that the attempt to axiomatize the natural numbers was futile as any reasonable axiomatization would leave an unprovable assertion. It followed that there were nonstandard models of
the Peano Axioms not just non-isomorphic to the natural numbers but having fundamentally different properties. In this talk, I will give a brief introduction to nonstandard models of the
Peano Axioms and prove a famous result of Stanley Tennenbaum from 1959, which showed that there is no algorithmic construction of such a model.
Fall 2009 Schedule
☆ Laura Ghezzi :
Date: November 3, 2009
Title: Minimal free resolutions in Commutative Algebra, and applications to Computational Algebra.
Abstract: Broadly speaking, commutative algebra is the study of the solution set of polynomial equations in many variables, conveniently “encoded” in a commutative ring (the coordinate
ring of the corresponding algebraic variety). We are interested in properties of these rings, such as dimension, depth, Cohen-Macaulayness.
A particularly convenient tool in this study is given by free resolutions, that is, the process of approximating a (complicated) ring with free modules in a finite number of steps. The
maps between the free modules at each step are given by matrices. The advantage is that free resolutions are easily computable with software such as CoCoA and Macaulay.
In the first part of this talk I will give an overview of free resolutions. In the second part, to provide a more specific framework, I will briefly present my recent research on the
minimal free resolution of points in the projective space.
☆ Delaram Kahrobaei
Date: November 17, 2009
Title: Graphic Arithmetic.
☆ G. Rosenberger (Technische Universität Dortmund):
Date: December 8, 2009
Title: On Asymptotic Densities and Generic Properties in Finitely generated Groups Doubles.
Abstract: The asymptotic density is a method to compute densities and/or probabilities within infinite finitely generated groups. If P is a group property, the asymptotic density determines the measure of the set of elements which satisfy P. If the asymptotic density is equal to 1, we say that the property P is generic in G. P is called an asymptotically visible property if the corresponding asymptotic density is strictly between 0 and 1. If the asymptotic density is 0, then P is called negligible. We show that there is an interesting connection between the asymptotic properties of a group G and the asymptotic properties of its subgroups of finite index. Using this we give some applications for several classes of groups.
Spring 2009 Schedule
☆ Victoria Gitman
New York College of Technology
City University of New York
March 17, 2009
Title: Gödel's Proof: The First Incompleteness Theorem Part 1.
☆ Thomas Johnstone
New York College of Technology
City University of New York
March 24, 2009
Title: Gödel's Proof: The First Incompleteness Theorem Part 2.
Abstract for the March 17th, and the March 24th Seminar: In 1931, Kurt Gödel showed that for any reasonable axiomatization of number theory, such as the Peano Axioms, there are always
statements that are true but not provable. This monumental result, known as the First Incompleteness Theorem, was the first in a series of discoveries asserting the fundamental
limitations of formal mathematics. Together with the Second Incompleteness Theorem, Gödel’s work ended Hilbert’s Program of formalizing all known mathematics. In this two-part talk, we
will introduce the historical context of Hilbert’s Program and prove Gödel’s result.
☆ Thomas Tradler
New York College of Technology
City University of New York
April 21, 2009
Title: Quantum Conundrums
Abstract: Since the invention of quantum mechanics over 100 years ago, physicists have been aware of the interpretational shortcomings provided by this highly successful theory. These
problems have been exhibited most prominently in the statistical interpretation of experiments such as the double-slit experiments, and of what is known as Schrödinger's cat. Despite this
highly unsatisfactory state of affairs and much theoretical progress, which led to fields such as quantum field theory, string theory, and non-commutative geometry, the statistical
interpretation of quantum mechanics is still fundamental to its understanding. In my talk, I will review some of the main experiments that display the seeming contradictions of the
theory, and describe some of the (algebraic) attempts to obtain a more satisfactory interpretation. The hope is that one day a theory will emerge which will resolve these issues by
providing a more reasonable setting than the one currently used.
☆ Sereta Scott (Computer System Technology- New York City College of Technology)
Supervised by: Dr. Delaram Kahrobaei
April 28, 2009
Title: ElGamal's and Schnorr's Digital Signatures
Abstract: In this paper, we present the topic of digital signatures, showing how the Schnorr signature scheme, a variant of ElGamal's scheme, is used to enhance the security of smart card technology. We show how cryptographic algorithms are used to generate signatures and to verify them. Cryptography is about the prevention and detection of cheating and other malicious activities in data security. ElGamal's and Schnorr's digital signature schemes are widely used because of the difficulty of breaking the Diffie-Hellman key exchange, which involves the discrete log problem. Schnorr's digital signature scheme is based on the same principle as ElGamal's, except that Schnorr's method first signs the message and then applies the hash function, while ElGamal's does the reverse. Schnorr's scheme also minimizes the time needed to generate signatures.
Fall 2008 Schedule
☆ Introduction to C-LAC
Dr. Delaram Kahrobaei
New York City College of Technology
September 9
☆ Dr. Xiandong Li
New York City College of Technology
City University of New York
September 16
Title: Quantum Information
For decades the remarkable facts of quantum physics have alternately frustrated and delighted the best thinking of physicists from Einstein to Feynman. Until recently, physicists found surprisingly few real-world applications. Quantum information is a very important application of quantum mechanics. This talk will introduce the audience to the basic ideas leading from quantum mechanics to quantum information.
☆ Dr. Sean Zhang
College of Staten Island
City University of New York
October 7
Title: RFID Technology and RFID Authentication Protocols
Radio Frequency Identification (RFID) is an exciting emerging technology. Its use is expanding extremely widely and quickly, and is increasingly ubiquitous around the globe: from large department stores like Wal-Mart, pharmaceuticals, aviation, military goods logistics, manufacturer supply chains, libraries, hospitals, and kindergartens, to livestock, banknotes, passports, and the latest NYC Marathon runner tracker. Along with the massive deployments of RFID systems, various security issues and privacy concerns have been brought up. In this talk, we will first explain what RFID technology is and how it works. We then introduce some mutual authentication protocols to tackle these security issues and privacy concerns.
☆ Digital Signatures
Ms. Sereta Scott
City Tech
Mr. Franklin Fung
Polytechnic Institute of NYU
October 28
The notion of a digital signature may prove to be one of the most fundamental and useful inventions of modern cryptography. A signature scheme provides a way for each user to sign messages so that the signatures can later be verified by anyone else. More specifically, each user can create a matched pair of private and public keys: the user signs a message with the private key, and anyone can verify the signature for the message using the signer's public key. The verifier can convince himself that the message contents have not been altered since the message was signed. Also, the signer cannot later repudiate having signed the message, since no one but the signer possesses his private key. By analogy with the paper world, where one might sign a letter and seal it in an envelope, one can sign an electronic message using one's private key, and then seal the result by encrypting it with the recipient's public key. The recipient can perform the inverse operations of opening the letter and verifying the signature. Such applications of digital signatures to electronic mail are quite widespread today already.
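As a hedged toy illustration of the sign/verify idea described above (textbook RSA with tiny, illustrative numbers; real schemes pad the hash and use keys of 2048 bits or more):

import hashlib

p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                          # private exponent (Python 3.8+)
h = int(hashlib.sha256(b"hello").hexdigest(), 16) % n
sig = pow(h, d, n)                           # signer uses the private key
print(pow(sig, e, n) == h)                   # anyone can verify with the public key (n, e)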
☆ Models of Theoretical Computing
Ms. Elisa Elshamy
New York City College of Technology
December 2
Title: How to think about algorithms in theory and practice
Abstract: In the early 20th century, the famous mathematician David Hilbert asked whether all mathematical problems could be solved by a mechanical procedure. Hilbert believed that the
answer to his question would be affirmative, but this turned out to be very far from the truth. In order to tackle Hilbert’s question it was necessary to formalize what was meant by a
mechanical procedure. In the 1930’s several mathematicians took up this challenge. Gödel came up with the recursive functions. Alan Turing, the father of modern computers, came up with
the Turing machines. In the later years, following the development of actual computers, new formalizations such as unlimited register machines were introduced. There is no single formal
definition of what is an algorithm. In this talk, I will give an informal intuition for the notion of algorithm and discuss the Turing machines and the unlimited register machines as two
formalizations of this concept. The remarkable fact that came out of the study of theoretical and practical computation is that diverse models of computation ranging from Gödel’s
recursive functions to Turing machines to modern computers all have equal computational capabilities: in disguise they all have exactly the same algorithms!
|
{"url":"http://websupport1.citytech.cuny.edu/Faculty/adouglas/seminar.html","timestamp":"2014-04-17T18:24:24Z","content_type":null,"content_length":"15890","record_id":"<urn:uuid:1d9bef46-8ff1-47df-bc8d-d078f5bbc946>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
|
finding radius of a circle - please help!
October 18th 2008, 09:55 AM #1
Oct 2008
finding radius of a circle - please help!
could anyone quickly help me find this radius...i am totally stuck here and have to know this before i can move forward.
Find the standard form of the equation -
center is (8, 3) and passes thru (-6, -1)
so far i have figured out (x-8)^2 + (y-3)^2 = r^2
the radius is what i cannot understand what to figure out...
Use the distance formula:
Yes, you're on the right track. Now all you have to do is... Substitute $x = -6$ and $y = -1$ into the same equation to get $r$.
I hope that helps.
could anyone quickly help me find this radius...i am totally stuck here and have to know this before i can move forward.
Find the standard form of the equation -
center is (8, 3) and passes thru (-6, -1)
so far i have figured out (x-8)^2 + (y-3)^2 = r^2
the radius is what i cannot understand what to figure out...
The radius of a circle is by definition, the distance from the center to any point on the circle. What is the distance from (8, 3) to (-6, -1)?
Another perfectly good way to do this is to say that, since (-6,-1) is ON the circle, it must satisfy that equation. If x=-6 and y= -1, what does your equation give you?
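Either route gives the same arithmetic; a quick Python check of this particular problem:

r_squared = (8 - (-6))**2 + (3 - (-1))**2   # distance formula, squared: 196 + 16
print(r_squared)                            # 212, so (x-8)^2 + (y-3)^2 = 212
print(r_squared ** 0.5)                     # radius is about 14.56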
|
{"url":"http://mathhelpforum.com/pre-calculus/54341-finding-radius-circle-please-help.html","timestamp":"2014-04-21T11:40:12Z","content_type":null,"content_length":"40543","record_id":"<urn:uuid:5235c7a4-804b-4758-a60c-b2bf67d82b46>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
|
January 1999
Sarah Flannery, a student at Scoil Mhuire Gan Smal in Blarney, Co. Cork, was awarded an Intel Fellows Achievement Award for her project in the Chemical, Physical and Mathematical Category at the 1998
Esat Telecom Young Scientist and Technology awards.
Sarah's project for the exhibition was entitled "Cryptography - The Science of Secrecy". Working with matrices and using her PC at home for experiments, Sarah developed and tested a new cryptography
algorithm for encrypting data. While the work has been around for a little while, media attention has recently skyrocketed!
The initial idea for the algorithm was developed while Sarah was on a placement at Baltimore-Zergo in Dublin, and she has called it the Cayley-Purser algorithm, named after Arthur Cayley, an
19th-century Cambridge mathematician, and Michael Purser, a cryptographer who inspired her.
Sarah's new algorithm is a competitor to the popular RSA algorithm. Where RSA uses exponentiation to encode and decode a message, Cayley-Purser uses matrix multiplication. This means that while RSA
grows as a cubic with the length of the encryption "key", Cayley-Purser grows quadratically. For a typical key length of 1024 bits, this makes Cayley-Purser around 75 times faster.
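A rough, back-of-the-envelope cost model for that comparison (Python; the constants below are assumptions, not measurements — only the cubic-versus-quadratic growth is the point):

k = 1024                          # key length in bits
rsa_like = k * k**2               # ~k modular multiplications, each ~k^2 bit operations
cayley_purser_like = 8 * k**2     # a fixed number of matrix-entry products, each ~k^2
print(rsa_like / cayley_purser_like)   # ~128: the same ballpark as the quoted ~75x speed-up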
One downside of Sarah's algorithm is that the encrypted messages it produces are much longer than those produced by RSA. More significantly, however, there is still some possibility that
Cayley-Purser might have some "security holes", making it too easy to "crack the code". Now that Sarah and her algorithm are receiving so much attention, other researchers will begin to explore the
security properties of Cayley-Purser, and see how it stacks up against the dominant RSA.
Some Internet discussion of Sarah's work.
The homepage of RSA.
|
{"url":"http://plus.maths.org/content/comment/reply/2687","timestamp":"2014-04-16T16:04:27Z","content_type":null,"content_length":"22798","record_id":"<urn:uuid:21e381fd-4e71-4765-9ef2-25d18834330c>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Precise relation between prime number theorem and zero-free region
I was wondering about the following, and I was hoping that some expert here could answer, rather than me indulging in a search for a needle in the haystack of formulas in books like Titchmarsh.
• $\zeta(s)$ is the Riemann zeta function.
• $f : \mathbb R^+ \rightarrow (0,1/2)$ is such that $\zeta(s)$ does not vanish between $s = 1+it$ and $s=1 - f(t) + it$.
• $\pi(x)$, $Li(x)$ as in wikipedia.
Assuming the above data, suppose the version of the prime number theorem that can be proven is:
$$ \pi(x) = Li(x) + O\left(G(x)\right) $$
Can G(x) be given a closed form expression showing its precise (if and only if) dependence on $f(t)$?
Heuristics: When $f = 0$, $G(x) = x \mathrm{e}^{-a\sqrt{\ln x}}$ and when $f = 1/2$, $G(x) = \sqrt x \ln x$. So possibly there would be a term like $x^{1-f(x)}$ in a putative expression for $G(x)$.
Titchmarsh has a very well-written chapter on the Prime Number Theorem. This is hardly a needle in a haystack. – Micah Milinovich Aug 6 '10 at 4:15
1 Answer
Your heuristic is wrong: $G(x)=x\exp{(-a\sqrt{\log{x}}})$ follows from $f=\frac{c}{\log{(|t|+3)}}$ for some fixed real $c>0$.
I really don't want to tell you the answer, because this is a great exercise! A big hint: use the "approximate explicit formula"
$\psi(x)=x-\sum_{|\rho|\leq T} \frac{x^{\rho}}{\rho}+O(T^{-1} x \log^2{x}),$
bound the sum over zeros trivially given what you know about $f$, and then choose $T$ so that the two error terms balance.
Unless I'm missing something, an "if and only if" dependence is more than an exercise. – Kevin O'Bryant May 24 '10 at 20:48
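For readers who just want the shape of the answer, a standard back-of-the-envelope sketch (assuming $f$ is nonincreasing, and ignoring the harder "if and only if" direction raised in the comment): bounding the sum over zeros trivially,
$$|\psi(x)-x| \;\ll\; \sum_{|\rho|\le T}\frac{x^{\operatorname{Re}\rho}}{|\rho|} + \frac{x\log^2 x}{T} \;\ll\; x^{1-f(T)}\log^2 T + \frac{x\log^2 x}{T},$$
so, choosing $T=T(x)$ to balance the two terms, one may roughly take $G(x) \approx x\log^2 x \,\min_{T\ge 2}\bigl(x^{-f(T)}+T^{-1}\bigr)$. For $f(t)=c/\log t$, taking $T=\exp(\sqrt{c\log x})$ recovers $x\,\mathrm{e}^{-a\sqrt{\log x}}$; passing from $\psi$ to $\pi$ changes this only by logarithmic factors.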
|
{"url":"http://mathoverflow.net/questions/25789/precise-relation-between-prime-number-theorem-and-zero-free-region/25790","timestamp":"2014-04-16T04:59:52Z","content_type":null,"content_length":"53241","record_id":"<urn:uuid:00362e2f-96cb-412e-8dd1-a9cc371c2a3b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Paramount Prealgebra Tutor
Find a Paramount Prealgebra Tutor
...I use a hands-on, one-on-one approach with real world applications. This approach helps my students connect with basic algebra concepts. By connecting with advanced math concepts, my students
are able to study "smarter" and achieve academic success more quickly.
19 Subjects: including prealgebra, English, reading, biology
...I was on varsity swim team for 4 years. I was co-captain of that swim team for 1 year. I also use to be a life guard.
14 Subjects: including prealgebra, calculus, algebra 1, physics
...Prior to privately tutoring, I volunteered for 20 years in my children's schools in the Palos Verdes School District from pre-K through junior high. During my Masters program at C.S.U. Chico, I
taught Sociology.
16 Subjects: including prealgebra, English, reading, grammar
...I am well experienced in DNA structure, sequence, and how DNA is transcribed into RNA, and how RNA is translated into protein. I am also an expert with regards to chromatin structure, and how
genes are regulated in mammalian tissues. I am well versed in gene mutations and the consequences of these mutations in human diseases, such as cancers and inherited diseases.
27 Subjects: including prealgebra, chemistry, physics, GRE
...I love teaching! I teach Middle School. I teach Sunday School!
14 Subjects: including prealgebra, reading, grammar, elementary (k-6th)
|
{"url":"http://www.purplemath.com/Paramount_prealgebra_tutors.php","timestamp":"2014-04-17T21:37:07Z","content_type":null,"content_length":"23571","record_id":"<urn:uuid:461b5736-a868-44c7-b1bb-8675506519b7>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Natural Theory of Space Quantum 2011
Physical space is shown to be a contiguous growth. Wave mechanics and Newtonian (through Einstein) mechanics can be united in the absence of a time t.
Space is not the existing illusion of our senses; instead, space is quantized and defines itself physically per the (Fibonacci) infinite sequence:
F(n) = F(n-1) + F(n-2)
with seed values
F(0) = 0, F(1) = 1.
The ratios converge:
F(n) / F(n-1) -> φ as n grows,
Where the "golden ratio"
φ = (1 + √5) / 2 = 1.6180339887...
Fibonacci Space:
Adding to, subtracting from, retracing within, and all physics regarding space quanta are events within at least one quantum and changing only at quantum borders (boundaries) per the natural
For example, the “speed of light” c (Einstein ) is limited because the particle (photon) does not truly travel through (Newtonian ) time and continuous space.
In fact, the particle travels through contiguous space quanta, one by one.
In the Newtonian sense of velocity through continuous space, there should be no limit to velocity. In quantum space, the analogy to Newtonian velocity is the spatial travel across quantum boundaries.
Each boundary crossing is the “same” event for light and its velocity is bounded by c in the “sense” of a time t.
There is no real time t; instead, the “particles” travel only in space from one quantum to an adjacent quantum.
As the “growth” sequence itself, successive quanta are different in “size” by the factor:
φ = 1.618
The illusion of Newtonian time should cause a quantifiable perception of the speed of light c relative to space quanta:
Accept: c is the perceived absolute electromagnetic velocity and measurement velocity as known today.
Assume: physical reality is a transition from one region to the next region in space.
Then: particles with subatomic sizes on the order of space quantum boundaries may not traverse the boundary in the same way our senses perceive the transition. This serves to gauge the boundaries
between spatial regions.
And: very spatially large and/or “distant” entities may be misrepresented by our sensual (time) measurements by the factor 1.618 for each spatial boundary from our innate sense and our measurements.
And: we as humans experience only the sensual approximation of Newtonian and Euclidean continuity.
Physical relevance is solely the traverse across spatial quantum boundaries.
A two dimensional visualization of quantum spatial boundaries is suggested by the Fibonacci spiral .
A Five Dimensional View of the Spiral:
Physically, we cannot achieve 2 from 0 and 1. We can only achieve 1 from 0 and 1. Following, we can achieve 2 from the adjacency of 1 and 1. And so on.
Since we live in three dimensions, we can easily see the two dimensional intersection within the Euclidean spiral, i.e. the linearity of the spiral intersects with a maximum of three adjacent two
dimensional regions.
Space itself, as we know it, is three dimensional. If we lived in five dimensions, we could easily see the three dimensional intersection of a two-dimensional “spiral” with “five” regions of three
dimensional space per the natural sequence.
We do not live in five dimensions; instead, we live in three.
The natural sequence begins with the seed values 0 and 1. Perhaps we could visualize 3 from 5 and 5 from 8. But we cannot visualize 3 from 4 or 8 from 9. Physical relevance is solely the traverse
across adjacent natural boundaries.
Wave Mechanics:
Wave mechanic principles (Schrodinger) show:
ΔxΔp ≥ ħ/2,
and so, with p = ħk, the approximation
ΔxΔk ≥ O(1).
One result (ramification) is a temporally related uncertainty in measurements.
Wave mechanics mathematically defines observations (perceptions) differing from Newtonian continuity; however, wave mechanics is a physical discipline that utilizes the concept of time t,
e.g. Δk depends on a perceived time t (and mass m)
The mathematical (Fourier ) representations (transformations) are not physically real in the sense of a time t.
Boundary Size:
One possible (sensual) estimation in one dimension of spatial boundary size (between adjacent quantum) could be suggested by:
(Width of boundary)^2 = Constant x (Time required sensually for continuity)
(In similar mathematical form to E = mc^2.)
Using orders of magnitude 10^-27 "sec-cm" suggested by wave mechanics and estimating the "speed" of sensory communication in the range 10^-3 sec – 10^-6 sec, we would then estimate the magnitude:
b ~ 10^-16 to 10^-18 meters (for example)
Contiguity of space quantum should be mathematically defined beginning with the natural sequence. That is beyond the scope of this letter.
Intermediate Review:
While we can mathematically achieve 2 from 0 and 1, we cannot physically achieve two from nothing and something.
Each space quantum “experiences” only each of its boundaries.
The juxtaposition of space is physical reality. The sense of time serves to approximate
physical laws and works well within the bounds of our senses.
What we call entropy is in fact a “direction” through Fibonacci space that incorporates an increase in “size” throughout the sequence per the “golden” or “natural” ratio φ.
We cannot propel ourselves 10 meters across the Planet Earth’s surface without an “energy” (the units of which are not a real function of “time”) and similarly a subatomic “particle” cannot propel
itself across a spatial quantum boundary without energy.
This serves to increase the “energy” in the following quantum by the factor φ and gives rise to the concept of entropy.
Small Particles:
Spatially (relatively) small particles (entities) with enough energy should have no problem traversing the boundary from one quantum to the next as directed by the entropy of space.
Spatially and massively (energetically) small particles may not be able to traverse the boundaries at all.
In that case, such “particles” could be left behind in space and would not entropically move forward. It seems possible such particles could in fact remain “backward” in the entropical sense.
Massive Energy:
For example, a “large energy between” two relatively small “particles” should easily provide a contiguous directional result through the entropic spatial sequence.
Per the natural sequence of space, mass does not bend (warp) space; instead, space is physically real and unalterable directly by mass (matter) and is independent of the sense of time.
Matter is defined by mass and “consumes” and exists in real space, e.g. our sun has a relatively large mass and “uses” a large amount of space as we know it.
The sun follows across all quantum boundaries along with us.
Space Warp:
The bending of space around mass is not physically real in three dimensions; instead, it is a sensation (illusion) from our innate continuous imagination of spatial contiguity.
The alteration (warping) of three dimensional space can take place through (within) five dimensions per the natural sequence, but cannot take place within three or “four” dimensions. The natural
sequence is physical, not arithmetical, and five follows directly from three.
Negative entropy:
Negative entropy can only be achieved through five (maybe three) dimensions. Nothing can be achieved through four dimensions.
Intermediate Summary:
Time is not physically real. It is a neurological simulation of continuity from a real spatially contiguous universe.
There is energy and space. There is no time t.
The idea of a physical time t would mean that “time is continuous, directional, has no real dimension except as previously indicated by a clock, was created somehow unknown to anyone, but still has a
real physical significance.” This writing holds that view to be unreconcilable.
An Eight Dimensional View:
In Fibonacci space, our 3-dimensional experience intersects with eight separate five dimensional regions at each boundary.
The boundaries are supposed to be relatively small in a spatial (and energetical) sense.
In the entropical sense, an energy compatible with a boundary region could exist “within” a boundary neither moving forward or backward.
In that case, it seems the specific energy may experience one, several, or all of the intersections.
At such an event, the energy (particle) could traverse among three-dimensional regions of our (experiential) Fibonacci space.
Least Energetic Level:
In our experience, everything “falls” to the lowest energy state.
An example would be water flowing through a drain from a sink into a pipe through another pipe and into an urban main drain system leading into a waste water retreatment plant.
The water obeys our perceived law of gravity and falls through pipes "heading" and "directionalized" toward the center of the Earth where the water would experience no other energy relatively
speaking. If the water could in fact reach Earth's center of gravity, it would have fallen into a weightless environment as if the water were in orbit and falling "off" the edge of the Earth.
Similarly, chemical states react into the lowest binding energy form until some larger applied energy can change the state.
Lowest Entropic Level:
The lowest entropical level should be “backward” along the path to higher entropy, i.e. a change of direction toward lower entropy.
In the absence of forward entropical (motion) direction, it seems a particle (entity, maybe having a mass) may seek a lower (backward) entropical state, e.g. a particle within a spatial boundary may
be able to traverse various boundaries and may “gravitate” downward in the energetical sense in a similar way to the experience in our three dimensional world.
The Square Law Relationship:
The Einstein square law relationship E = mc^2 is also dealt with in the Schrodinger equation and also in many perceived natural forces like sound and gravity.
The Schrodinger equation needs to include “i”, i.e. the square root of negative one.
Similarly, the natural sequence could proceed in a “negative” direction with the seed values 0 and -1 with physical reality being a “square” and with all ratios matching the positive sequence.
But mathematics is a measurement result of physics, not the other way around.
Negative Entropy:
Negative entropy could be mathematically resolved by an “inverse or reverse” sequence, but physically real negative entropy should only be three-dimensionally achieved through the spatial via-ways
resulting from intersecting spatial and energetical boundaries.
Intermediate Conclusion:
Space is not subject to our views of arithmetic; instead, space is defined by the natural sequence. Contrary to our sensations, time is not physically real. Time is a good measurement approximation
in our macroscopic physical world and “historically” is built into all units of energy, measures, and our perceptions.
Space and energy directionally build the concept of entropy.
The dimensionality following from Fibonacci space also implies boundaries. The boundary dimensions are suggested by quantum (wave) mechanics.
Negative entropy should be achieved by exacting the correct energy. Not the most or least energy; instead, the correct energy corresponding to the spatial boundary.
Dimensional Fibonacci Space Regarding Negative Entropy:
A one dimensional existance and a two dimensional existance would (should) be negative (reverse or backward) from our entropical position, while 5 and 8 dimensions should be entropically positive
(forward) from our position in three dimensional Fibonacci space.
Some (intelligently small) life forms (here with us) biologically move (“autonomically think”) in only one or two dimensions from their (cellular and multi-cellular) internal sense.
We neuroligically move (live) in three dimensions after 700 million “years” of evolution and there “remain” one and two dimensional creatures and forms that somehow live here in three dimensions
along with us and even within us.
As a crude example, a garden vine is directionally one-dimensional as a growth, but in three dimensions we can see its full expansion in space.
We do not conceptually or spatially live in one or two dimensions; instead, we live in three.
We can easily see (experience) one or two dimensions as with the spiral, but we cannot experience 5 or 8 dimensions from any of 3 or 2 or 1 dimensional space.
An Adjustment to Boundary Size:
From quantum mechanics, the boundary size has been approximated at
b ~ 10^-16 - 10^-18 meters.
From special relativity,
c is bounded and ~ 10^9 meters per second.
c is an upper bound. The perceived velocity is bounded in Fibonacci space by the number of boundary crossings in our perceived one second of time t.
Then there are ~ 10^9 crossings in one perceived second using a Joule measurement system.
From quantum mechanics, we had bounded the (minimum) number of crossings using sensory (neurological) requirements in the range 10^3 – 10^6 per perceived second.
A special relativity estimation of the boundary size b:
b^2 = constant ÷ (number of boundaries experienced in one perceived second)
b^2 = (small constant) x 10^-30 x 10^-9 (again using the constant h from wave mechanics)
b ~ 10^-19 – 10^-20 meters
Boundary Energy:
Estimating in Fibonacci space without wave mechanics:
Energy per unit mass can be equated to (10^9)^2 Joules for each perceived second. One perceived second corresponds to 10^9 physical events (boundaries) so there are an implied 1 Joule (appx.) per kilogram per average one-dimensional spatial boundary event.
In that case, one kilogram (10^3 gram) requires 1 Joule and one microgram (10^-6 gram) requires 10^-9 Joules as an energy associated with a single boundary.
For example, a “force” required to propel one gram in one-dimensional space for a perceived 10-9 sec would be calculated from the following:
10-3 Joules = Force(F) x b
Then b = 10-3 ÷ F (meters)
Assuming our perceived force of gravity at Earth’s surface (our experience) and following the general theory of relativity, then:
b ~ 10-3 ÷ (10 meters sec-2 x 10-3 kg) = 10-1 x (10-9)2
And b ~ 10-19 meters.
The Fibonacci boundary size in one dimension is estimated as:
b ≤ 10^-16 – 10^-18 meters using wave mechanics and neurological time requirements
b ~ 10^-19 – 10^-20 meters using special relativity and wave mechanics
b ~ 10^-19 meters using only relativity and no wave mechanics
Boundaries have Barrier Energies
Some Specific Energies do not move Entropically Forward
There are no Real Functions of Time (t)
There is only Energy lost (left Backward) in the Entropical Transition of Space
The Energies left Entropically behind should have Ramifications for other Energies moving Forward without them
First we understand the Barriers, then we can begin to understand Negative Entropy
Once we understand Negative Entropy, perhaps we could begin to understand Dimensional Forward Entropy
We Probably did not Achieve Three Dimensions without First Achieving One and Two
The Force Fb Relating to Boundary Size:
The force F we used to calculate boundary size applies to the force of gravity between a mass on the Earth’s surface and the Earth itself. That is our experience and corresponds to boundary size b
for us here.
The gravity-space force itself is a function of the square of space (r^2) and the sum of two masses m1 and m2 in a one dimensional sense.
A lower force of space-gravity, for example on our moon, implies a larger boundary dimension and a larger energy “barrier” than we experience here.
For example, in a different gravitational environment:
1. Since we apparently lose certain internal energies (“age”) at each boundary, those energies could be altered (could become larger) through larger boundaries (barriers) than we experience here.
2. In locations with small Fb and a corresponding large boundary, larger energies should be able to experience spatial intersections that only small energies experience here.
A weightless environment, for example an “orbit around” a large mass should only affect the boundary (barrier) size by the effective radius change regarding the real force Fb.
The Square Law in Higher Dimensions:
For us here, there are fundamental square law forces like sound and gravity.
In 5 dimensions, we should experience “cubed” law forces, and so on.
The Nearest Large Boundaries (Biggest Holes) in Space:
The nearest large boundaries to us are at the nearest regions of lowest Fb.
That should be exactly in between the moon and Earth centers of gravity on a Euclidean straight line. Unfortunately, the line moves continually and "quickly" in 3 dimensions.
The location along the line(s) is easy to calculate and is the simple cancellation points of the two opposing gravitational square law forces. The region is relatively small and traverses in space
quickly as we see it, i.e. it “orbits around” the planet.
More Distant Boundaries:
And so the Earth-surface boundary size is 10^-19 meters and the boundary b would need to increase in width by the factor 10^10 as an order of magnitude approximation to pass a single atom, one-dimensionally speaking.
Since gravity is a square law here, that would imply 10^5 x this planet's radius, or 50,000 Earth-diameters.
That is a long way for our technology. It is better to use canceling forces from nearby mass to achieve low Fb.
Fibonacci space is real. The concept of time is not physically real.
Many measurements innately use time and so do all of our perceptions and especially our language(s).
Energy and space are real. They both grow directionally.
Sometimes, we represent our three-dimensional world with our derived three-dimensional mathematics and we become confused (overwhelmed) and cannot soundly (physically) enter into the sequence of
natural growth.
Reconciling the natural sequence should lead to an understanding that cannot be achieved in three dimensions alone.
Reverse and dimensional forward entropy are likely waiting for our own enlightenment.
Technology exists to achieve the nearest broad intersections.
|
{"url":"http://lofi.forum.physorg.com/The-Natural-Theory-Of-Space-Quantum_32016.html","timestamp":"2014-04-19T19:34:16Z","content_type":null,"content_length":"197124","record_id":"<urn:uuid:3911ceb3-a1e3-496b-bb20-63888f5b0a42>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Volume 126, Issue 12, 28 March 2007
• COMMUNICATIONS
A Regge pole analysis is employed to explain the oscillatory patterns observed in numerical simulations of integral cross section for the reaction in the translational collision energy
range . In this range the integral cross section for the transition, affected by two overlapping resonances, shows nearly sinusoidal oscillations below and a more structured oscillatory
pattern at larger energies. The two types of oscillations are related to the two Regge trajectories which (pseudo) cross near the energy where the resonances are aligned. Simple estimates
are given for the periods of the oscillations.
The authors develop an efficient particle labeling procedure based on a linked cell algorithm which is shown to reduce the computing time for a molecular dynamics simulation by a factor
of 3. They prove that the improvement of performance is due to the efficient fulfillment of both spatial and temporal locality principles, as implemented by the contiguity of labels
corresponding to interacting atoms. Finally, they show that the present label reordering procedure can be used to devise an efficient parallel one-dimensional domain decomposition
molecular dynamics scheme.
• ARTICLES
□ Theoretical Methods and Algorithms
Scalar-relativistic pseudopotentials and corresponding spin-orbit potentials of the energy-consistent variety have been adjusted for the simulation of the cores of the transition metal
elements Y–Pd. These potentials have been determined in a one-step procedure using numerical two-component calculations so as to reproduce atomic valence spectra from four-component
all-electron calculations. The latter have been performed at the multi-configuration Dirac-Hartree-Fock level, using the Dirac-Coulomb Hamiltonian and perturbatively including the Breit
interaction. The derived pseudopotentials reproduce the all-electron reference data with an average accuracy of for configurational averages over nonrelativistic orbital configurations
and for individual relativistic states. Basis sets following a correlation consistent prescription have also been developed to accompany the new pseudopotentials. These range in size from
cc-pVDZ-PP to cc-pV5Z-PP and also include sets for correlation (cc-pwCVDZ-PP through cc-pwCV5Z-PP), as well as those with extra diffuse functions (aug-cc-pVDZ-PP, etc.). In order to
accurately assess the impact of the pseudopotential approximation, all-electron basis sets of triple-zeta quality have also been developed using the Douglas-Kroll-Hess Hamiltonian
(cc-pVTZ-DK, cc-pwCVTZ-DK, and aug-cc-pVTZ-DK). Benchmark calculations of atomic ionization potentials and electronic excitation energies are reported at the coupled cluster level of
theory with extrapolations to the complete basis set limit.
This paper is concerned with the structural transition dynamics of the six-atom Morse cluster with zero total angular momentum, which serves as an illustrative example of the general
reaction dynamics of isolated polyatomic molecules. It develops a methodology that highlights the interplay between the effects of the potential energy topography and those of the
intrinsic geometry of the molecular internal space. The method focuses on the dynamics of three coarse variables, the molecular gyration radii. By using the framework of geometric
mechanics and hyperspherical coordinates, the internal motions of a molecule are described in terms of these three gyration radii and hyperangular modes. The gyration radii serve as slow
collective variables, while the remaining hyperangular modes serve as rapidly oscillating “bath” modes. Internal equations of motion reveal that the gyration radii are subject to two
different kinds of forces: One is the ordinary force that originates from the potential energy function of the system, while the other is an internal centrifugal force. The latter
originates from the dynamical coupling of the gyration radii with the hyperangular modes. The effects of these two forces often counteract each other: The potential force generally works
to keep the internal mass distribution of the system compact and symmetric, while the internal centrifugal force works to inflate and elongate it. Averaged fields of these two forces are
calculated numerically along a reaction path for the structural transition of the molecule in the three-dimensional space of gyration radii. By integrating the sum of these two force
fields along the reaction path, an effective energy curve is deduced, which quantifies the gross work necessary for the system to change its mass distribution along the reaction path.
This effective energy curve elucidates the energy-dependent switching of the structural preference between symmetric and asymmetric conformations. The present methodology should be of
wide use for the systematic reduction of dimensionality as well as for the identification of kinematic barriers associated with the rearrangement of mass distribution in a variety of
molecular reaction dynamics in vacuum.
The authors formulate a Car-Parrinello treatment for the density-functional-based tight-binding method with and without self-consistent charge corrections. This method avoids the
numerical solution of the secular equations, the principal drawback for large systems if the linear combination of atomic orbital ansatz is used. The formalism is applicable to finite
systems and for supercells using periodic boundary conditions within the -point approximation. They show that the methodology allows the application of modern computational techniques
such as sparse matrix storage and massive parallelization in a straightforward way. All present bottlenecks concerning computer time and consumption of memory and memory bandwidth can be
removed. They illustrate the performance of the method by direct comparison with Born-Oppenheimer molecular dynamics calculations. Water molecules, benzene, the fullerene, and liquid
water have been selected as benchmark systems.
We present a robust linear-scaling algorithm to compute the symmetric square-root or Löwdin decomposition of the atomic-orbital overlap matrix. The method is based on Newton-Schulz
iterations with a new approach to starting matrices. Calculations on 12 chemically and structurally diverse molecules demonstrate the efficiency and reliability of the method.
Furthermore, the calculations show that linear scaling is achieved.
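A hedged sketch of a coupled Newton-Schulz iteration of the kind this abstract refers to (NumPy; this is a generic textbook form, not the authors' linear-scaling implementation, and it assumes the overlap matrix S is symmetric positive definite):

import numpy as np

def newton_schulz_sqrt(S, iters=30):
    n = S.shape[0]
    scale = np.linalg.norm(S)               # scale S so the iteration converges
    Y, Z = S / scale, np.eye(n)
    for _ in range(iters):
        T = 0.5 * (3.0 * np.eye(n) - Z @ Y)
        Y, Z = Y @ T, T @ Z                  # Y -> (S/scale)^(1/2), Z -> (S/scale)^(-1/2)
    return Y * np.sqrt(scale), Z / np.sqrt(scale)

S = np.array([[2.0, 0.5], [0.5, 1.0]])
half, inv_half = newton_schulz_sqrt(S)
print(np.allclose(half @ half, S), np.allclose(inv_half @ S @ inv_half, np.eye(2)))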
A computational multiscale method is proposed to simulate coupled, nonequilibrium thermomechanical processes. This multiscale framework couples together thermomechanical equations at the
coarse scale with nonequilibrium molecular dynamics at the fine scale. The novel concept of distributed coarse scale thermostats enables subsets of fine scale atoms to be attached to
different coarse scale nodes which act as thermostats. The fine scale dynamics is driven by the coarse scale mean field. A coarse-grained Helmholtz free energy is used to derive
macroscopic quantities. This new framework can reproduce the correct thermodynamics at the fine scale while providing an accurate coarse-grained result at the coarse scale.
The authors propose a new approach to understand the electrostatic surface contributions to the interactions of large but finite periodic distributions of charges. They present a simple
method to derive and interpret the surface contribution to any electrostatic field produced by a periodic distribution of charges. They discuss the physical and mathematical
interpretations of this term. They present several examples and physical details associated with the calculation of the surface term. Finally, they provide a simple derivation of the
surface contribution to the virial. This term does not disappear even if tinfoil boundary conditions are applied.
An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between Percus-Yevick and hypernetted chain closures is optimized by
minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing
so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M.
Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in this proposed theory has allowed the authors to obtain an analytic expression for the
approximate excess chemical potential. This is shown to be an efficient tool to estimate, from first principles, the numerical value of the interpolation parameters defining the
aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation
functions in excellent agreement with simulation data.
Gillespie’s exact stochastic simulation algorithm (SSA) [J. Phys. Chem.81, 2350 (1977)] has been widely used to simulate the stochastic dynamics of chemically reacting systems. In this
algorithm, it is assumed that all reactions occur instantly. While this is true in many cases, it is also possible that some chemical reactions, such as gene transcription and translation
in living cells, take certain time to finish after they are initiated. Thus, the product of such reactions will emerge after certain delays. Apparently, Gillespie’s SSA is not an exact
algorithm for chemical reaction systems with delays. In this paper, the author develops an exact SSA for chemical reaction systems with delays, based upon the same fundamental premise of
stochastic kinetics used by Gillespie in the development of his SSA. He then shows that an algorithm modified from Gillespie's SSA by Barrio et al. [PLOS Comput. Biol. 2, 1017 (2006)] is
also an exact SSA for chemical reaction systems with delays, but it needs to generate more random variables than the author’s algorithm.
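For orientation, a minimal sketch of Gillespie's original (instantaneous) SSA for a single first-order reaction A -> B (Python; the delayed variant discussed in this abstract would, in addition, schedule each product to appear only after its delay has elapsed):

import random

def ssa(a0=100, k=0.1, t_end=50.0):
    t, a = 0.0, a0
    while a > 0:
        tau = random.expovariate(k * a)   # exponential waiting time with rate k*a
        if t + tau > t_end:
            break
        t, a = t + tau, a - 1             # the reaction fires instantly
    return t, a

print(ssa())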
A computationally simple three-step procedure to survey the energy landscape and to determine the molecular transition structure and activation energy at the intersection of two weakly
coupled electronic potential energy surfaces of different symmetry is suggested. Only commercial software is needed to obtain the transition states of, for instance, spin-flip reactions.
The computational expense is only two to three times larger than that of the standard determination of an adiabatic reaction path. First, the structures of the two electronic initial and
final states along a chosen reaction coordinate are individually optimized. At the “projected crossing,” the two states have the same energy at the same value of the reaction coordinate,
but different state-optimized partial structures. Second, the unique optimized structure of a low energy crossing point between the two states is determined with the help of the density
functional fractional occupation number approach. Finally, the respective energy of the two states at the crossing is estimated by a single point calculation. The prescription is
successfully applied to some simple topical examples from organic and from inorganic chemistry, respectively, concerning the spin-flip reactions and .
We develop a method to extract local integrals, that is, integrals defined locally in the linear regime of an arbitrary point in phase space. The individual integral represents a
vibrational mode. We also propose an index that quantifies the extent of connection between neighboring local integrals. Those pieces that are smoothly connected over a wide range
represent a global structure of phase space. With a six-atomic Lennard-Jones cluster, we show that it is possible to identify which vibrational mode in the potential basin correlates
smoothly to that in the area of transition state, which is nothing but a reactive mode. As an application of the method, we attempt to enhance the structural transition by exciting the
reactive mode thus found. This method works successfully as shown in numerical calculations.
The relaxation of the Pauli principle associated with density scaling is examined. Scaling the density has been investigated in the development of density functional computational methods
with higher accuracy. Scaling the density by reduces the number of electrons to when . The minimum kinetic energy of the scaled density, , can be scaled back to the -electron system by
multiplying the -electron Kohn-Sham-type occupation numbers by to produce . This relaxes the Pauli principle when the orbital occupation numbers are greater than 1 in the -electron
system. The effects of antisymmetry on solutions to the Kohn-Sham equations are examined for Ne and the Be isoelectronic series. The changes in and the exchange energy when is varied show
that these two quantities are inextricably linked.
A method that combines quantum mechanics (QM), typically a solute, the effective fragment potential (EFP) discrete solvent model, and the polarizable continuum model is described. The EFP
induced dipoles and polarizable continuum model (PCM) induced surface charges are determined in a self-consistent fashion. The gradients of these two energies with respect to molecular
coordinate changes are derived and implemented. In general, the gradients can be formulated as simple electrostatic forces and torques among the QM nuclei, electrons, EFP static
multipoles, induced dipoles, and PCM induced charges. Molecular geometry optimizations can be performed efficiently with these gradients. The formulas derived for EFP∕PCM can be generally
applied to other combined molecular mechanics and continuum methods that employ induced dipoles and charges.
The electronic state of cyclic arising from the singly excited electron configuration is studied using multireference configuration interactionwave functions and a quadratic Jahn-Teller
Hamiltonian determined from those calculations. It is shown that these two states have both a symmetry-required seam of conical intersections at geometries and three proximal symmetry
equivalent seams, located on a circle with radius from the intersection. , a function of , the breathing mode, is quite small but only attains a value of zero at , resulting in a
confluence or intersection node of the three seams with the seam. At this point only, , the norm of half the energy difference gradient, the linear Jahn-Teller term, vanishes and the
intersection is of the Renner-Teller type. The close proximity of the previously unreported seams to the seam over the range of considered is a consequence of the small values of ,
compared to the quadratic Jahn-Teller term. The present analysis has important implications in the study of Jahn-Teller effects in ring systems and provides insight into a recent report
that characterized this seam as a Renner-Teller or glancing intersection.
Modeling the change in the electrostatics of organic molecules upon moving from vacuum into solvent, due to polarization, has long been an interesting problem. In vacuum, experimental
values for the dipole moments and polarizabilities of small, rigid molecules are known to high accuracy; however, it has generally been difficult to determine these quantities for a polar
molecule in water. A theoretical approach introduced by Onsager [J. Am. Chem. Soc.58, 1486 (1936)] used vacuum properties of small molecules, including polarizability,dipole moment, and
size, to predict experimentally known permittivities of neat liquids via the Poisson equation. Since this important advance in understanding the condensed phase, a large number of
computational methods have been developed to study solutes embedded in a continuum via numerical solutions to the Poisson-Boltzmann equation. Only recently have the classical force fields
used for studying biomolecules begun to include explicit polarization in their functional forms. Here the authors describe the theory underlying a newly developed polarizable multipole
Poisson-Boltzmann (PMPB) continuum electrostatics model, which builds on the atomic multipole optimized energetics for biomolecular applications (AMOEBA) force field. As an application of
the PMPB methodology, results are presented for several small folded proteins studied by molecular dynamics in explicit water as well as embedded in the PMPB continuum. The dipole moment
of each protein increased on average by a factor of 1.27 in explicit AMOEBA water and 1.26 in continuum solvent. The essentially identical electrostatic response in both models suggests
that PMPB electrostatics offers an efficient alternative to sampling explicit solvent molecules for a variety of interesting applications, including binding energies, conformational
analysis, and prediction. Introduction of salt lowered the electrostatic solvation energy between 2 and , depending on the formal charge of the protein, but had only a small influence on
dipole moments.
A recently proposed new family of density functionals [S. Grimme, J. Chem. Phys.124, 34108 (2006)] adds a fraction of nonlocal correlation as a new ingredient to density functional theory
(DFT). This fractional correlation energy is calculated at the level of second-order many-body perturbation theory (PT2) and replaces some of the semilocal DFTcorrelation of standard
hybrid DFT methods. The new “double hybrid” functionals (termed, e.g., B2-PLYP) contain only two empirical parameters that have been adjusted in thermochemical calculations on parts of
the G2/3 benchmark set. The methods have provided the lowest errors ever obtained by any DFT method for the full G3 set of molecules. In this work, the applicability of the new
functionals is extended to the exploration of potential energy surfaces with analytic gradients. The theory of the analytic gradient largely follows the standard theory of PT2 gradients
with some additional subtleties due to the presence of the exchange-correlation terms in the self-consistent field operator. An implementation is reported for closed-shell as well as
spin-unrestricted reference determinants. Furthermore, the implementation includes external point charge fields and also accommodates continuum solvation models at the level of the
conductor like screening model. The density fitting resolution of the identity (RI) approximation can be applied to the evaluation of the PT2 part with large gains in computational
efficiency. For systems with basis functions the evaluation of the double hybrid gradient is approximately four times more expensive than the calculation of the standard hybrid DFT
gradient. Extensive test calculations are provided for main group elements and transition metal containing species. The results reveal that the B2-PLYP functional provides excellent
molecular geometries that are superior compared to those from standard DFT and MP2.
Gas Phase Dynamics and Structure: Spectroscopy, Molecular Interactions, Scattering, and Photochemistry
Recently, the three sugars ribose, deoxyribose, and fructose have been shown to undergo dissociative electron attachment at threshold, that is, to fragment upon capture of a zero-energy
electron. Here the electron acceptor properties of three fructose isomers are investigated in view of a doorway mechanism. Two key ingredients for a doorway mechanism, a weakly bound
state able to support a vibrational Feshbach resonance, and a valence anion more stable than neutral fructose are characterized. Moreover, possible structures for the observed fragment
anion are suggested.
Relative state-to-state cross sections and steric asymmetries have been measured for the scattering process: , at collision energy. Comparison with the previously studied systems OH–HCl
and OH–HBr reveals relevant features of the potential energy surfaces of these molecular systems. Some measured differences concerning the internal energy distribution after collision and
the propensities for the impact with one or the other side of the OH molecule in scattering by HCl, HBr, and HI molecules are discussed.
The authors present a first-principles prediction of the energies of the eight lowest-lying anharmonic vibrational states of , including the fundamental symmetric stretching mode and the
first overtone of the fundamental bending mode, which undergo a strong coupling known as Fermi resonance. They employ coupled-cluster singles, doubles, and (perturbative) triples [CCSD(T)
and CCSDT] in conjunction with a range of Gaussian basis sets (up to cc-pV5Z, aug-cc-pVQZ, and aug-cc-pCVTZ) to calculate the potential energy surfaces (PESs) of the molecule, with the
errors arising from the finite basis-set sizes eliminated by extrapolation. The resulting vibrational many-body problem is solved by the vibrational self-consistent-field and vibrational
configuration-interaction (VCI) methods with the PESs represented by a fourth-order Taylor expansion or by numerical values on a Gauss-Hermite quadrature grid. With the VCI, the best
theoretical estimates of the anharmonic energy levels agree excellently with experimental values within (the mean absolute deviation). The theoretical (experimental) anharmonic
frequencies of the Fermi doublet are 1288.9 (1285.4) and 1389.3 .
|
{"url":"http://scitation.aip.org/content/aip/journal/jcp/126/12","timestamp":"2014-04-18T07:26:21Z","content_type":null,"content_length":"197075","record_id":"<urn:uuid:5dd90eca-4119-4455-b8f9-3490fef0ce27>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
|
La Mirada Algebra 2 Tutor
Find a La Mirada Algebra 2 Tutor
...I teach my students to understand their subjects on an instinctive level by making it more relevant to them. I show them how to approach their learning materials so they keep up with and then
surpass their class' progress. And I stress the importance of motivation and discipline in their studies.
35 Subjects: including algebra 2, reading, calculus, geometry
...I understand the pressures that students face today, as I'm with teens most of my days. I also understand that there are students who have no interest in highly competitive schools, yet they
want a practical education that will serve them well in their future careers. I have successfully taught...
5 Subjects: including algebra 2, algebra 1, SAT math, linear algebra
I have taught math for over 5 years! Many of my students are in grades 2 to 12, and some are in college. I also have a math tutor certificate for college students from Pasadena City College. I graduated from UCLA in 2012.
7 Subjects: including algebra 2, geometry, algebra 1, trigonometry
...I graduated from CSULB almost 2 years ago with a degree in Liberal Studies and a focus in Math. I have worked at the YMCA as an after school and Summer Day Camp Director. I love working with
students of all ages.
22 Subjects: including algebra 2, reading, writing, statistics
Hello, I am an experienced tutor of 3+ years for mostly high school students in multiple subjects. My specialty is math, including algebra, geometry, trigonometry, calculus, and statistics. I am
also well versed in science, history, and business related subjects.
23 Subjects: including algebra 2, calculus, accounting, statistics
|
{"url":"http://www.purplemath.com/la_mirada_ca_algebra_2_tutors.php","timestamp":"2014-04-16T18:59:19Z","content_type":null,"content_length":"23850","record_id":"<urn:uuid:67e0f59c-936f-4700-a683-ca2222b4ab4f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reverse Percentage
November 7th 2011, 10:02 AM #1
Nov 2011
Reverse Percentage
I am looking for the formula to figure out a reverse percentage: i.e., what would the retail price be if 25% is deducted from the retail price and the discounted price comes to $3,474.60? Or if you have a total price of $10, which includes 5% sales tax, how would I figure out what the product alone costs without the sales tax?
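One standard way to set these up (a sketch with the numbers from the question): deducting 25% leaves 75% of the retail price, so divide the discounted price by 0.75; a total that already includes 5% sales tax is 105% of the base price, so divide by 1.05.
$\text{retail} = \dfrac{3474.60}{0.75} = 4632.80 \qquad\qquad \text{pre-tax price} = \dfrac{10}{1.05} \approx 9.52$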
Re: Reverse Percentage
Thank you very much for your help, it is greatly appreciated! I had a dumb-dumb moment, that was pretty easy!
November 7th 2011, 10:13 AM #2
November 7th 2011, 11:27 AM #3
Nov 2011
|
{"url":"http://mathhelpforum.com/algebra/191380-reverse-percentage.html","timestamp":"2014-04-17T15:35:15Z","content_type":null,"content_length":"36090","record_id":"<urn:uuid:5fb8ee14-ebdc-487d-b61d-c66f998c0608>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
|
10 square meters equals how many square feet
You asked:
10 square meters equals how many square feet
{"url":"http://www.evi.com/q/10_square_meters_equals_how_many_square_feet","timestamp":"2014-04-17T13:01:21Z","content_type":null,"content_length":"53752","record_id":"<urn:uuid:475026ff-5582-44e5-9920-9b7cd08d8329>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On tensor categories attached to cells in affine Weyl groups, Representation theory of algebraic groups and quantum groups
Results 1 - 10 of 21
- Annals of Mathematics
"... Abstract. In this paper we extend categorically the notion of a finite nilpotent group to fusion categories. To this end, we first analyze the trivial component of the universal grading of a
fusion category C, and then introduce the upper central series ofC. For fusion categories with commutative Gr ..."
Cited by 76 (17 self)
Add to MetaCart
Abstract. In this paper we extend categorically the notion of a finite nilpotent group to fusion categories. To this end, we first analyze the trivial component of the universal grading of a fusion
category C, and then introduce the upper central series ofC. For fusion categories with commutative Grothendieck rings (e.g., braided fusion categories) we also introduce the lower central series. We
study arithmetic and structural properties of nilpotent fusion categories, and apply our theory to modular categories and to semisimple Hopf algebras. In particular, we show that in the modular case
the two central series are centralizers of each other in the sense of M. Müger. Dedicated to Leonid Vainerman on the occasion of his 60-th birthday 1. introduction The theory of fusion categories
arises in many areas of mathematics such as representation theory, quantum groups, operator algebras and topology. The representation categories of semisimple (quasi-) Hopf algebras are important
examples of fusion categories. Fusion categories have been studied extensively in the literature,
"... Abstract. This is the first in a series of papers devoted to describing the category of sheaves on the affine flag manifold of a (split) simple group in terms the Langlands dual group. In the
present paper we provide such a description for categories which are geometric counterparts of a maximal com ..."
Cited by 17 (10 self)
Add to MetaCart
Abstract. This is the first in a series of papers devoted to describing the category of sheaves on the affine flag manifold of a (split) simple group in terms the Langlands dual group. In the present
paper we provide such a description for categories which are geometric counterparts of a maximal commutative subalgebra in the Iwahori Hecke algebra H; of the anti-spherical module for H; and of the
space of Iwahori-invariant Whittaker functions. As a byproduct we obtain some new properties of central sheaves introduced in [G]. Acknowledgements. This project was conceived during the IAS special
year in Representation Theory (1998/99) led by G. Lusztig, as a result of conversations with D. Gaitsgory, M. Finkelberg and I. Mirkovic. The outcome was strongly influenced by conversations with A.
Beilinson and V. Drinfeld. The stimulating interest of A. Braverman, D. Kazhdan, G. Lusztig and V. Ostrik was crucial for keeping the project alive. We are very grateful to all these people. We thank
I. Mirkovic and D. Gaitsgory for the permission to use their unpublished results; and M. Finkelberg and D. Gaitsgory for taking the trouble to read the text and point out various lapses in the
exposition. The second author was supported by NSF and Clay Institute. 1.
, 2003
"... sheaves on the nil-cone of a simple complex algebraic group is introduced by the author in the paper Perverse coherent sheaves (the so-called perverse t-structure corresponding to the middle
perversity). In the present note we show that the same t-structure can be obtained from a natural quasi-excep ..."
Cited by 15 (4 self)
Add to MetaCart
sheaves on the nil-cone of a simple complex algebraic group is introduced by the author in the paper Perverse coherent sheaves (the so-called perverse t-structure corresponding to the middle
perversity). In the present note we show that the same t-structure can be obtained from a natural quasi-exceptional set generating this derived category. As a consequence we obtain a bijection
between the sets of dominant weights and pairs consisting of a nilpotent orbit, and an irreducible representation of the centralizer of this element, conjectured by Lusztig and Vogan (and obtained by
other means by the author in the paper On tensor categories attached to cells in affine Weyl groups, tobe published). 1.
"... Abstract. We characterize a natural class of modular categories of prime power Frobenius-Perron dimension as representation categories of twisted doubles of finite p-groups. We also show that a
nilpotent braided fusion category C admits an analogue of the Sylow decomposition. If the simple objects o ..."
Cited by 13 (3 self)
Add to MetaCart
Abstract. We characterize a natural class of modular categories of prime power Frobenius-Perron dimension as representation categories of twisted doubles of finite p-groups. We also show that a
nilpotent braided fusion category C admits an analogue of the Sylow decomposition. If the simple objects ofC have integral Frobenius-Perron dimensions then C is group-theoretical in the sense of
[ENO]. As a consequence, we obtain that semisimple quasi-Hopf algebras of prime power dimension are group-theoretical. Our arguments are based on a reconstruction of twisted group doubles from
Lagrangian subcategories of modular categories (this is reminiscent to the characterization of doubles of quasi-Lie bialgebras in terms of Manin pairs given in [Dr]). 1. introduction In this paper we
work over an algebraically closed field k of characteristic 0. By a fusion category we mean a k-linear semisimple rigid tensor category C with finitely many isomorphism classes of simple objects,
finite dimensional spaces of morphisms, and such that the unit object 1 of C is simple. We refer the reader to [ENO] for a general theory of such categories. A fusion category is pointed if all its
simple objects are invertible. A pointed fusion category is equivalent to Vec ω G, i.e., the category of G-graded vector spaces with the associativity constraint given by some cocycle ω ∈ Z 3 (G, k ×
) (here G is a finite group). 1.1. Main results. Theorem 1.1. Any braided nilpotent fusion category has a unique decomposition into a tensor product of braided fusion categories whose
Frobenius-Perron dimensions are powers of distinct primes. The notion of nilpotent fusion category was introduced in [GN]; we recall it in Subsection 2.2. Let us mention that the representation
category Rep(G) of a finite group G is nilpotent if and only if G is nilpotent. It is also known that fusion categories of prime power Frobenius-Perron dimension are nilpotent [ENO]. On the other
hand, Vec ω G is nilpotent for any G and ω. Therefore it is not true that any nilpotent fusion category is a tensor product of fusion categories of prime power dimensions.
"... Abstract. This paper is a continuation of [1]. In [1] we constructed an equivalence between the derived category of equivariant coherent sheaves on the cotangent bundle to the flag variety of a
simple algebraic group and a (quotient of) the category of constructible sheaves on the affine flag variet ..."
Cited by 8 (5 self)
Add to MetaCart
Abstract. This paper is a continuation of [1]. In [1] we constructed an equivalence between the derived category of equivariant coherent sheaves on the cotangent bundle to the flag variety of a
simple algebraic group and a (quotient of) the category of constructible sheaves on the affine flag variety of the Langlands dual group. Below we prove certain properties of this equivalence; provide
a similar “Langlands dual ” description for the category of equivariant coherent sheaves on the nilpotent cone; and deduce some conjectures by Lusztig and Ostrik. Acknowledgements. I am greatful to
all the people mentioned in acknowledgements in [1]. I also thank Eric Sommers for stimulating interest. The author is supported by NSF and Clay Institute. 1. Statements 1.1. Recollection of
notations and set-up. We keep the set-up and notations of [1]. In particular, Fℓ is the affine flag variety of a split simple group G over a field k which is either finite or algebraically closed; Wf
is the Weyl group of G, and W is the extended affine Weyl group; fW f ⊂ fW ⊂ W are the sets of minimal length representatives of respectively 2-sided and left cosets of Wf in W; PI is the category of
Iwahori equivariant perverse sheaves on Fℓ is the category whose objects are mixed Iwahori equivariant perverse sheaves on Fℓ, and morphisms are weight 0 geometric morphisms, i.e. weight 0 morphisms
between the pull-backs of sheaves to Fℓ ¯ k (the notation is (the notation used only for an algebraically closed k); while P mix I used for finite k only). Lw, w ∈ W are irreducible objects of PI, or
irreducible self-dual objects of P mix on the cardinality of k. The Serre quotient categories f PI, f PI f P mix
, 2002
"... Let G be a reductive algebraic group over the complex numbers, B a Borel subgroup of G, and T a maximal torus of B. We denote by Λ = Λ(G) the weight lattice of G with respect to T, and by Λ+ =
Λ+(G) the set of dominant weights with respect to the positive roots defined by B. Let g be the Lie algebra ..."
Cited by 4 (2 self)
Add to MetaCart
Let G be a reductive algebraic group over the complex numbers, B a Borel subgroup of G, and T a maximal torus of B. We denote by Λ = Λ(G) the weight lattice of G with respect to T, and by Λ+ = Λ+(G)
the set of dominant weights with respect to the positive roots defined by B. Let g be the Lie algebra of G, and let N denote the nilpotent cone in g.
, 2003
"... ABSTRACT. We study Lusztig’s theory of cells for quantum affine sln. Using the geometric construction of the quantum group due to Lusztig and Ginzburg– Vasserot, we describe explicitly the
two-sided cells, the number of left cells in a two–sided cell, and the asymptotic algebra, verifying conjecture ..."
Cited by 4 (2 self)
Add to MetaCart
ABSTRACT. We study Lusztig’s theory of cells for quantum affine sln. Using the geometric construction of the quantum group due to Lusztig and Ginzburg– Vasserot, we describe explicitly the two-sided
cells, the number of left cells in a two–sided cell, and the asymptotic algebra, verifying conjectures of Lusztig. 1.
- Transform. Groups
"... Abstract. We define a partial order on the set No,¯c of pairs (O, C), where O is a nilpotent orbit and C is a conjugacy class in Ā(O), Lusztig’s canonical quotient of A(O). We then construct an
order-reversing duality map No,¯c → L No,¯c that satisfies many of the properties of the original Spaltens ..."
Cited by 3 (2 self)
Add to MetaCart
Abstract. We define a partial order on the set No,¯c of pairs (O, C), where O is a nilpotent orbit and C is a conjugacy class in Ā(O), Lusztig’s canonical quotient of A(O). We then construct an
order-reversing duality map No,¯c → L No,¯c that satisfies many of the properties of the original Spaltenstein duality map. This generalizes work of Sommers [16]. 1.
, 2001
"... Abstract. Distinguished involutions in the affine Weyl groups, defined by G. Lusztig, play an essential role in the Kazhdan-Lusztig combinatorics of these groups. A distinguished involution is
called canonical if it is the shortest element in its double coset with respect to the finite Weyl group. E ..."
Cited by 2 (0 self)
Add to MetaCart
Abstract. Distinguished involutions in the affine Weyl groups, defined by G. Lusztig, play an essential role in the Kazhdan-Lusztig combinatorics of these groups. A distinguished involution is called
canonical if it is the shortest element in its double coset with respect to the finite Weyl group. Each two-sided cell in the affine Weyl group contains precisely one canonical distinguished
involution. In this note we calculate the canonical distinguished involutions in the affine Weyl groups of rank ≤ 7. We also prove some partial results relating canonical distinguished involutions
and Dynkin’s diagrams of the nilpotent orbits in the Langlands dual group. 1.
"... Abstract. We study 2-representations of finitary 2-categories with involution and adjunctions by functors on module categories over finite dimensional algebras. In particular, we define,
construct and describe in detail (right) cell 2-representations inspired by Kazhdan-Lusztig cell modules for Heck ..."
Cited by 2 (2 self)
Add to MetaCart
Abstract. We study 2-representations of finitary 2-categories with involution and adjunctions by functors on module categories over finite dimensional algebras. In particular, we define, construct
and describe in detail (right) cell 2-representations inspired by Kazhdan-Lusztig cell modules for Hecke algebras. Under some natural assumptions we show that cell 2-representations are strongly
simple and do not depend on the choice of a right cell inside a twosided cell. This reproves and extends the uniqueness result on categorification of Kazhdan-Lusztig cell modules for Hecke algebras
of type A from [MS].
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=7705354","timestamp":"2014-04-20T07:57:49Z","content_type":null,"content_length":"39024","record_id":"<urn:uuid:e2dd2b34-63fb-4615-9064-b60db42ac129>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Laplace Transforms - find the system response
September 12th 2011, 12:04 PM #1
Jul 2010
Laplace Transforms - find the system response
I'm stuck part way through this:
A Linear, time invariant system has the impulse response $h(t) = tu(t-1)$ Find the transfer function $H(s)$ and use it to find the response to the input $x(t) = u(t) - 2u(t-1) + u(t-2)$
I have found $H(s) = \frac{e^{-s}}{s^2} + \frac{e^{-s}}{s}$
And $X(s) = \frac{1}{s} - \frac{2e^{-s}}{s} + \frac{e^{-2s}}{s}$
But I'm confusing myself in the algebra to find Y(s)... I keep ending up with long, confusing equations. I can't seem to find the correct form for the inverse laplace transform...
I would prefer a few hints in the right direction or a starter rather than a full blown solution if you could
Re: Laplace Transforms - find the system response
I'm stuck part way through this:
A Linear, time invariant system has the impulse response $h(t) = tu(t-1)$ Find the transfer function $H(s)$ and use it to find the response to the input $x(t) = u(t) - 2u(t-1) + u(t-2)$
I have found $H(s) = \frac{e^{-s}}{s^2} + \frac{e^{-s}}{s}$
And $X(s) = \frac{1}{s} - \frac{2e^{-s}}{s} + \frac{e^{-2s}}{s}$
But I'm confusing myself in the algebra to find Y(s)... I keep ending up with long, confusing equations. I can't seem to find the correct form for the inverse laplace transform...
I would prefer a few hints in the right direction or a starter rather than a full blown solution if you could
The impulse response is $h(t)=t\ u(t-1)$ and it has 'only one' term in t... so why does the transform of h(t) you computed have two terms in s?...
Kind regards
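For reference, the standard shift identities (from the usual transform tables, so worth checking against your own notes) resolve both points. Since $t\,u(t-1) = (t-1)u(t-1) + u(t-1)$, the transform of $h(t)$ really does carry two terms,
$H(s) = \dfrac{e^{-s}}{s^2} + \dfrac{e^{-s}}{s},$
and the products that appear in $Y(s) = H(s)X(s)$ invert term by term using
$\mathcal{L}^{-1}\left\{\dfrac{e^{-as}}{s^2}\right\} = (t-a)\,u(t-a) \qquad \mathcal{L}^{-1}\left\{\dfrac{e^{-as}}{s^3}\right\} = \tfrac{1}{2}(t-a)^2\,u(t-a).$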
September 12th 2011, 08:57 PM #2
|
{"url":"http://mathhelpforum.com/differential-equations/187850-laplace-transforms-find-system-response.html","timestamp":"2014-04-18T18:19:15Z","content_type":null,"content_length":"37298","record_id":"<urn:uuid:5472903a-31fa-4bde-89f1-598eedd7d532>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Basic Concepts of Topology
April 24th 2010, 03:45 PM #1
Apr 2010
Basic Concepts of Topology
So I'm reading about sets, and if A and B are combined, their commonalities are only counted once. This makes me think of sets as being one dimensional, i.e.:
A is (1,7). B is (5,10).
Are they one dimensional?
They demonstrate sets with circles, like a Venn diagram. This confuses me.
Is it like, a three dimensional graph with X, Y, and Z. X has a set of sets, Y has a set of sets, Z has a set of sets. Then you combine these all with a matrix?
I'm also fuzzy on the concept of a neighborhood.
Are there any specific mathematical concepts that are important to master to understand topology?
Dimension has no meaning when you talk about sets.
Venn diagrams are a nice way to demonstrate some basic principles. Each point on the diagram is supposed to represent an element, so a set looks like a bunch of these (a circle is usually used).
I have no idea what you mean with the X, Y and Z.
A neighborhood is a set which has a given point in its interior.
If you want to understand topology properly I suggest you first look at metric spaces. Topology is to metric spaces, what metric spaces are to the Euclidean setting.
Topology > Metric Spaces > Euclidean Spaces
From general to specific.
When I was talking about X, Y, and Z I was thinking that each dimension of a Euclidean Space (i.e., "X") was a set... I am now under the impression that each dimension is actually a metric space,
The reason dimension has no meaning when talking about sets is because dimensions are constructed by using sets?
Metric spaces are a type of set where the distance between points is specified... or simply the concept of a distance between points is possible.
How do sets translate to numbers? When understanding sets, should I stay away from visualizing them in Euclidean Space?
Euclidean space is the d-fold product of the reals with a metric on it. Strictly speaking, saying each dimension is a set is wrong, as dimension is a number and not an object. The projection of the space onto a co-ordinate is a Euclidean space in its own right.
The reason dimension has no meaning when talking about sets is because dimensions are constructed by using sets?
The reason why dimension has no meaning when talking about sets is because they have no structure. The examples you keep thinking of are all canonical spaces with some sort of structure. What is
the dimension of two apples, a pear and four monkeys? It just doesn't make sense.
There is also more than one definition of a dimension.
Metric spaces are a type of set where the distance between points is specified... or simply the concept of a distance between points is possible.
How do sets translate to numbers? When understanding sets, should I stay away from visualizing them in Euclidean Space?
A metric space is where distance between two elements is specified.
How you choose to visualise sets is up to you, but remember sets have no structure on their own. They translate to numbers as numbers are just sets.
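To make "distance between two elements is specified" concrete, the usual definition: a metric space is a set $X$ together with a map $d: X \times X \to \mathbb{R}$ such that, for all $x, y, z \in X$,
$d(x,y) \ge 0, \qquad d(x,y) = 0 \iff x = y, \qquad d(x,y) = d(y,x), \qquad d(x,z) \le d(x,y) + d(y,z).$
A topology then forgets the numerical distances and keeps only the open sets they generate.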
April 24th 2010, 05:22 PM #2
April 25th 2010, 11:49 AM #3
Apr 2010
April 25th 2010, 02:02 PM #4
|
{"url":"http://mathhelpforum.com/differential-geometry/141154-basic-concepts-topology.html","timestamp":"2014-04-21T11:01:07Z","content_type":null,"content_length":"39085","record_id":"<urn:uuid:664dd348-c6bc-4750-841d-53c0d4c6a981>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Basic Probability?
February 14th 2010, 12:57 PM
Basic Probability?
Having trouble working out the following problem for my son. Two cards are randomly selected from a standard 52 card deck. What is the probability of getting two hearts or two cards less than four (aces count as 1)?
February 14th 2010, 01:53 PM
Basic probability goes like this: # Of Successes/# Of Outcomes. In a 52 card deck, there are 52 possible cards (outcomes) you can select. How many of those cards are hearts? How many of those
cards are less than (but not equal to) four? How many of those cards are both less than four AND hearts?
The probability of getting two hearts OR two cards less than four is P(Two Hearts) + P(Less Than Four) - P(Two Hearts AND Less Than Four).
See if you can suss out the problem now, and see if you get why we subtract that last probability from the combination of the two.
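Plugging in the counts as a sketch (13 hearts; twelve cards below four, namely the aces, twos and threes; three of those twelve are hearts), and assuming "less than four" excludes the fours themselves:
$P = \dfrac{\binom{13}{2} + \binom{12}{2} - \binom{3}{2}}{\binom{52}{2}} = \dfrac{78 + 66 - 3}{1326} = \dfrac{141}{1326} \approx 0.106$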
February 14th 2010, 04:40 PM
thanks ANDS
I had the first part but the subtraction of the "AND" probability was eluding me. I understand that the subtraction removes the overlap. Thank you.
|
{"url":"http://mathhelpforum.com/statistics/128803-basic-probality-print.html","timestamp":"2014-04-16T10:47:32Z","content_type":null,"content_length":"5095","record_id":"<urn:uuid:c1857e6f-fc9e-4751-90ad-6049a15e728b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cubic function
If someone could please give me some assistance with this question, it would be very much appreciated
The graph of a cubic function cuts the x-axis at (a,0) and touches the x-axis when x=b. It has a y-intercept of (0, a^3 b^3). Find the equation of the cubic function.
All I can think of for how to start would be y = (x+a)(x-b)^2.
So any help would be much appreciated.
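A sketch of one standard setup, assuming the single cut is at $x=a$ and the repeated (touching) root is at $x=b$: write $y = k(x-a)(x-b)^2$ and use the $y$-intercept to fix $k$:
$y(0) = k(-a)b^2 = a^3 b^3 \;\Rightarrow\; k = -a^2 b, \qquad \text{so } y = -a^2 b\,(x-a)(x-b)^2$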
|
{"url":"http://mathhelpforum.com/pre-calculus/174517-cubic-function.html","timestamp":"2014-04-16T20:19:15Z","content_type":null,"content_length":"46729","record_id":"<urn:uuid:ebfb90e2-dfa0-4cab-85ec-67ceb0a93aac>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Faradays Law, Calculate E field
The magnetic field is non-zero only within the smaller circle, radius R.
You want the E field along the boundary of the larger circle, radius 2R. The area of the larger circle which is outside of the smaller circle has no magnetic flux, so the flux enclosed by the larger
circle is the same as the flux enclosed by the smaller circle.
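In symbols, assuming $B$ is uniform inside radius $R$, zero outside, and the loops are concentric, Faraday's law on the circle of radius $2R$ gives
$\oint \vec{E}\cdot d\vec{\ell} = E\,(2\pi)(2R) = -\dfrac{d\Phi_B}{dt} = -\pi R^2\,\dfrac{dB}{dt} \quad\Rightarrow\quad E = -\dfrac{R}{4}\,\dfrac{dB}{dt}.$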
|
{"url":"http://www.physicsforums.com/showthread.php?s=0d029e43e28ada4db794f4cd171e6d62&p=3380821","timestamp":"2014-04-18T18:23:50Z","content_type":null,"content_length":"48882","record_id":"<urn:uuid:fb49f7c9-0340-45b3-8fb2-e66159c20c3b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
|
int 13h gets wrong disk geometry
03-10-2008, 09:27 AM
int 13h gets wrong disk geometry
I use bochs to run my own toy os. When it boots up, I use int 13h to get some hard drive geometry info. However, the cylinder number seems wrong.
Here's my code (written in NASM):
xor eax, eax
mov ah, 08h ; Code for drive parameters
mov dx, 80h ; hard drive
int 0x13
jb .hderr ; No such drive?
;; cylinder number
xor ax, ax ; ax <- 0
mov ah, cl ; ax <- cl
shr ah, 6
and ah, 3 ; cl bits 7-6: high two bits of maximum cylinder number
mov al, ch ; CH = low eight bits of maximum cylinder number
;; sector number
and cl, 3Fh ; cl bits 5-0: max sector number (1-origin)
;; head number
inc dh ; dh = 1 + max head number (0-origin)
mov [_dwNrHead], dh
mov [_dwNrSector], cl
mov [_dwNrCylinder], ax
jmp .hdok
... ... ...
Here's what I get: Cylinder: 404, Head: 16, Sector: 63
and here's ata configuration in the bochsrc:
ata0-master: type=disk, path="hd.img", mode=flat, cylinders=406,
heads=16, spt=63
Why the int 13h gets 404 but not 406??
Thank you in advance.
03-10-2008, 10:27 AM
Re: int 13h gets wrong disk geometry
forrest <forrest.yu@gmail.com> writes:
> and ah, 3 ; cl bits 7-6: high two bits of maximum cylinder number
> mov al, ch ; CH = low eight bits of maximum cylinder number
> Here's what I get: Cylinder: 404, Head: 16, Sector: 63
> and here's ata configuration in the bochsrc:
> ata0-master: type=disk, path="hd.img", mode=flat, cylinders=406,
> heads=16, spt=63
> Why the int 13h gets 404 but not 406??
Out by one is perfectly understandable, and expected, as a
maximum cylinder number would be one less than the number
of cylinders, as 0 is a valid cylinder number. Out by two
is a bit curious tho'.
Dear aunt, let's set so double the killer delete select all.
-- Microsoft voice recognition live demonstration
|
{"url":"http://fixunix.com/minix/359185-int-13h-gets-wrong-disk-geometry-print.html","timestamp":"2014-04-16T10:27:27Z","content_type":null,"content_length":"6606","record_id":"<urn:uuid:6b7da464-fab4-49b4-afd0-d2cda9c812ee>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
|