content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
[Haskell-cafe] type metaphysics
Lennart Augustsson lennart at augustsson.net
Mon Feb 2 11:28:08 EST 2009
The Haskell function space, A->B, is not uncountable.
There is only a countable number of Haskell functions you can write,
so how could there be more elements in the Haskell function space? :)
The explanation is that the Haskell function space is not the same as
the functions space in set theory. Most importantly Haskell functions
have to be monotonic (in the domain theoretic sense), so that limits
the number of possible functions.
-- Lennart
On Mon, Feb 2, 2009 at 3:49 PM, Martijn van Steenbergen
<martijn at van.steenbergen.nl> wrote:
> Hi Gregg,
> Firsly: I'm not an expert on this, so if anyone thinks I'm writing nonsense,
> do correct me.
> There are many answers to the question "what is a type?", depending on one's
> view.
> One that has been helpful to me when learning Haskell is "a type is a set of
> values." When seen like this it makes sense to write:
> () = { () }
> Bool = { True, False }
> Maybe Bool = { Nothing, Just True, Just False }
> Recursive data types have an infinite number of values. Almost all types
> belong to this group. Here's one of the simplest examples:
> data Peano = Zero | Suc Peano
> There's nothing wrong with a set with an infinite number of members.
> Gregg Reynolds wrote:
>> This gives a very interesting way of looking at Haskell type
>> constructors: a value of (say) Tcon Int is anything that satisfies
>> "isA Tcon Int". The tokens/values of Tcon Int may or may not
>> constitute a set, but even if they, we have no way of describing the
>> set's extension.
> Int has 2^32 values, just like in Java. You can verify this in GHCi:
> Prelude> (minBound, maxBound) :: (Int, Int)
> (-2147483648,2147483647)
> Integer, on the other hand, represents arbitrarily big integers and
> therefore has an infinite number of elements.
>> To my naive mind this sounds
>> suspiciously like the set of all sets, so it's too big to be a set.
> Here you're probably thinking about the distinction between countable and
> uncountable sets. See also:
> http://en.wikipedia.org/wiki/Countable_set
> Haskell has types which have uncountably many values. They are all functions
> of the form A -> B, where A is an infinite type (either countably or
> uncountably).
> If a set is countable, you can enumerate the set in such a way that you will
> reach each member eventually. For Haskell this means that if a type "a" has
> a countable number of values, you can define a list :: [a] that will contain
> all of them.
> I hope this helps! Let us know if you have any other questions.
> Martijn.
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
More information about the Haskell-Cafe mailing list
|
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2009-February/054812.html","timestamp":"2014-04-16T10:21:09Z","content_type":null,"content_length":"6306","record_id":"<urn:uuid:04b88c83-9768-4d36-aa6d-48fcc20d542a>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
|
2D Graphics Rendering: Bresenham's Line Drawing Algorithm (with Source Code)
In this tutorial I will explain how to draw lines using the Bresenham's line-drawing algorithm. And then show you complete line drawing function. For the sake of this series of tutorials I will use
the 16-bit mode, so we will be dealing with ushorts(or words) per pixel. I will also use the 16-bit color macro defined in the previous tutorial. The following is an explanation of how the
Bresenham's line-drawing algorithm works, rather than exact implementation.
Bresenham's line drawing algorithm, visually.
Take a look at this image. One thing to note here is that it is impossible to draw the true line that we want because of the pixel spacing(in other words there's not enough precision for drawing true
lines on a PC monitor especially when dealing with low resolutions). The Bresenham's line-drawing algorithm is based on drawing an approximation of the true line. The true line is indicated in bright
color, and its approximation is indicated in black pixels. In this example the starting point of the line is located exactly at 0, 0 and the ending point of the line is located exactly at 9, 6. The
way the algorithm works is this. First it decides which axis is the major axis and which is the minor axis. The major axis is longer than the minor axis. On this picture illustrated above the major
axis is the X axis. Each iteration progresses the current value of the major axis(starting from the original position), by exactly one pixel. Then it decides which pixel on the minor axis is
appropriate for the current pixel of the major axis. How can you approximate the right pixel on the minor axis that matches the pixel on the major axis? - that's what Bresenham's line-drawing
algorithm is all about. And it does so by checking which pixel's center is closer to the true line. On the picture above it would be easy to identify these pixels by eye. I added vertical spans to
let you grasp the idea better visually. Take a closer look. The center of each pixel is marked with a dot. The algorithm takes the coordinates of that dot and compares it to the true line. If the
span from the center of the pixel to the true line is less or equal to 0.5, the pixel is drawn at that location. That span is more generally known as the error term.
You might think of using floating variables but I assure you the whole algorithm is done in straight integer math with no multiplication or division in the main loops(no fixed point math either). How
is it possible? Basically, during each iteration through the main drawing loop the error term is tossed around to identify the right pixel as close as possible to the true line. Let's consider these
two deltas between the length and height of the line: dx = x1 - x0; dy = y1 - y0; This is a matter of precision and since we're working with integers you will need to scale the deltas by 2 generating
two new values: dx2 = dx*2; dy2 = dy*2; These are the values that will be used to change the error term. Why scale? The error term must be first initialized to 0.5 and that cannot be done using an
integer. To confuse you even further, finally the scaled values must be substracted by either dx or dy(the original, unscaled delta values) depending on what the major axis is (either x or y).
It's time for some code. Here is the initialization part.
// This function assumes a 16-bit drawing surface to be already locked
// locking is introduced in the previous tutorial
// lpitch and lpscreen are global
// lpitch - pitch of the locked surface
// lpscreen - points to the first pixel in the locked surface
void Line (int x1, int y1, int x2, int y2, ushort color)
int dx, //deltas
dx2, //scaled deltas
ix, //increase rate on the x axis
iy, //increase rate on the y axis
err, //the error term
i; //looping variable
int pitch = lpitch;
// identify the first pixel
ushort *ptr_vid = lpscreen + x1 + (y1 * (pitch >> 1));
// difference between starting and ending points
dx = x2 - x1;
dy = y2 - y1;
// calculate direction of the vector and store in ix and iy
if (dx >= 0)
ix = 1;
if (dx < 0)
ix = -1;
dx = abs(dx);
if (dy >= 0)
iy = (pitch >> 1);
if (dy < 0)
iy = -(pitch >> 1);
dy = abs(dy);
// scale deltas and store in dx2 and dy2
dx2 = dx * 2;
dy2 = dy * 2;
All variables are set and it's time to enter the main loop.
if (dx > dy) // dx is the major axis
// initialize the error term
err = dy2 - dx;
for (i = 0; i <= dx; i++)
*ptr_vid = color;
if (err >= 0)
err -= dx2;
ptr_vid += iy;
err += dy2;
ptr_vid += ix;
else // dy is the major axis
// initialize the error term
err = dx2 - dy;
for (i = 0; i <= dy; i++)
*ptr_vid = color;
if (err >= 0)
err -= dy2;
ptr_vid += ix;
err += dx2;
ptr_vid += iy;
} // end of Line(...
Check out the source code for the full implementation of line-drawing on the primary surface. The good thing about this function is even if it can be optimized it is still pretty fast. I did not
optimize it for the sake of a clear explanation of how this algorithm performs. But there are quite a few methods of doing so out there, I'm sure you could find some of them on the internet. It is
not 100% precise, but it is not very evident in any resolution higher than 320x240. This is really not a problem when you're working in 640x480 or higher resolution. If you want a super-precise line,
I would recommend looking up sub-pixel methods either online or in books.
Did this article help you learn something new?
If the content on this website somehow helped you to learn something new, please let others know about it by sharing it on your website or on your Facebook page with your friends. In addition you can
help by making a donation or bookmarking this site.
|
{"url":"http://www.falloutsoftware.com/tutorials/dd/dd4.htm","timestamp":"2014-04-17T21:22:37Z","content_type":null,"content_length":"39111","record_id":"<urn:uuid:154b669b-f6e2-425f-ace6-a05c9a594bbf>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Translating Slowly
You hate it when someone's translating something from another language and they go too quickly because they you can keep up with them, right? We hate that, too. We also hate it when we need to go
from English to math in a hurry. We prefer going slowly, piecing together a little bit at a time. We don't need to go from a paragraph straight to a symbolic expression, and it's often easier not to.
Ne comprenez-vous?
Sample Problem
What is the surface area of a rectangular box with a lid?
Let's think about this with common sense. Fortunately, we kept some in reserve for exactly this moment. To find the surface area, we need to add the surface area of the top, the surface area of the
bottom, and the surface areas of the four sides. Writing this partially in symbols, we see
(surface area of top) + (surface area of bottom) + (surface areas of sides)
To find the surface areas of the top, bottom, and sides, we'll need variables for the dimensions of the box. Let's use h for height, w for width, and l for length. It's a good idea to label these in
the picture, too. Don't worry that this will mess up the box. We weren't planning to enter it into any art contests anyway.
The surface area of the top and the surface area of the bottom are each lw, which comes less from common sense and more from the memorization of an uber-useful formula. We can now translate a bit
more into symbols:
lw + lw + (surface areas of sides)
All we have left to worry about are the surface areas of the sides. Worry we will, until we have it figured out. We're perfectionists like that. Also, we're Yoda. You had no idea.
The two sides on the left and right ends each have surface area wh, and the front and back sides each have surface area lh. Now we can finish translating to get
lw + lw + wh + wh + lh + lh
Nice. We didn't even need to use Google Translate. Finally, we tidy up a little, because this equation is a mess. Where was it raised, a barn?
The surface area of the box is
2lw + 2wh + 2lh
Translating from English to math a bit at a time can make the work take a little longer, but if it helps you find the right answer consistently, it's probably worth it. When we say "probably," we
mean "definitely." We were being sarcastic. There's a first time for everything.
Translating Slowly Practice:
Liana has 7 times as many chocolate bars as her friend Beth. Liana may be happy about that right now, but we'll come back to her after her third heart attack. Express the number of chocolate bars
Liana has in terms of the number of chocolate bars Beth has.
Felicity has some cookies, and Liana has four times as many cookies. Apparently, Liana has polished off all those candy bars and still isn't full. Goodness gracious, Liana, eat an apple.
Express the number of cookies Liana has in terms of the number of cookies Felicity has.
The length of a bed is two feet longer than its width. In terms of its width, how long is the bed?
Jonathan is three inches shorter than Jules. Jules is two inches shorter than Justin. How tall is Jonathan in relation to Justin?
What is the total area covered by the shape shown below (in terms of s)?
IMAGE: Reformat image so that there is only a half-circle on left - no part of the circle should be drawn inside the box, and vice versa.
Linda divides a pie equally between herself, her two kids, and three of their friends. She won't even come close to dividing the ice cream evenly, but that's another story for another day. What
fraction of the pie does each person get?
Translate the following phrase from English into mathematical symbols: An amount.
Translate the following phrase from English into mathematical symbols: The sum of a number and one-half that same number.
Translate the following phrase from English into mathematical symbols: Four times the difference of a quantity and seven.
Translate the following phrase from English into mathematical symbols:
The quotient of a number and the total of the number and thirteen.
Translate the following phrase from English into mathematical symbols: The quotient of seven and the difference between 4 and twice a value.
|
{"url":"http://www.shmoop.com/word-problems/translation-help.html","timestamp":"2014-04-21T10:01:52Z","content_type":null,"content_length":"48216","record_id":"<urn:uuid:66db998f-6324-4c4a-a08d-2d0c5c4a8d2a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
|
San Juan Capistrano Math Tutor
Find a San Juan Capistrano Math Tutor
...These are the essentials of course, and practice makes perfect. Raising SAT scores is as much about practice as it is learning new skills. Students can practice on their own time so that we can
make the most of tutoring time (saves money too). With the combination of personal practice and clear teaching, students begin to recognize each SAT problem.
24 Subjects: including calculus, SAT math, grammar, prealgebra
I'm a retired Mechanical Engineer eager to motivate young minds to pursue their dreams. I earned BS and Master degrees from Universidad de Los Andes, Bogota, Colombia and a PhD degree from
University of Wisconsin-Madison. I love teaching younger ones; I used to teach at Rochester Institute of Tech...
6 Subjects: including calculus, geometry, trigonometry, precalculus
...A clear understanding of organic reactions depends upon an appreciation of the mechanism of electron flow. Those topics will be emphasized in my tutoring. My background and qualifications to
tutor organic chemistry include the Shafer and Bowen Award for Excellence in Laboratory Instruction, which I earned from Dartmouth College in 2005 along with my B.A. in chemistry.
22 Subjects: including geometry, logic, probability, algebra 1
...The lesson plans are developed on a learner-centered basis. In addition to pronunciation, intonation, and grammar, the one-on-one tutoring sessions will concentrate on idiomatic use, accent
reduction and culture appropriateness. The tutoring sessions will be designed according to the needs of t...
3 Subjects: including SPSS, Chinese, ESL/ESOL
...A few years later, I scored a 5 on the AP Calculus BC exam in 10th grade. I also took AP Chemistry, AP Physics B, and AP Physics C: Mechanics during my senior year, scoring a 5 on each of them.
In 9th grade, I took the old SAT and scored 1580, with an 800 on the Verbal section and a 780 on the Math section.
6 Subjects: including calculus, physics, precalculus, trigonometry
|
{"url":"http://www.purplemath.com/San_Juan_Capistrano_Math_tutors.php","timestamp":"2014-04-17T04:18:31Z","content_type":null,"content_length":"24378","record_id":"<urn:uuid:c379ae77-b60e-4788-a524-8a79038e14b2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ellipse Arc
Toolbar / Icon:
Menu: Draw - Ellipse - Ellipse Arc
Shortcut: E, A
Commands: ellipsearc | ea
Draws ellipse arcs with a given center, major and minor axis and start and end angles.
1. Set the center of the ellipse using the mouse or enter a coordinate in the command line.
2. Define the major axis by clicking the endpoint of the axis, which is a point on the ellipse. You can also enter a coordinate into the command line or enter an angle and major radius in the format
50<30 where 50 is the major radius and 30 is the ellipse angle.
3. Define the endpoint of the minor axis which is also a point on the ellipse or enter the length of the minor axis.
4. Set the start angle with the mouse or by entering a coordinate or the angle amount in the command line.
5. Set the end angle the same way as the start angle.
|
{"url":"http://www.qcad.org/doc/qcad/latest/reference/en/scripts/Draw/Ellipse/EllipseArcCPPA/doc/EllipseArcCPPA_en.html","timestamp":"2014-04-16T20:07:16Z","content_type":null,"content_length":"2116","record_id":"<urn:uuid:a0d18d28-802f-4ec9-b8ce-297171f19296>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
|
JavaScript Data Types
Every variable in JavaScript has a data type which dictates the values that can be stored in it. However, JavaScript is a weakly typed programming language, which means that a variable is not
constrained to a single data type for its entire lifetime. In many ways, this makes a programmer’s life more difficult because a variable can potentially be holding any type of data at any given
time. This requires the programmer to thoroughly understand how the different data types behave, as well as how they interact with each other. This is the first in a series of posts which explores
the intricacies of JavaScript data types.
JavaScript data types can generally be divided into two categories ― primitives and objects. Primitives represent the most basic types of data supported by a language. In JavaScript, the primitive
data types are Undefined, Null, Boolean, Number, and String. Objects, on the other hand, are composite data types. Each object is a collection of primitives and other objects. The following
sections describe the primitive data types in detail. The object type will be revisited in a future post.
Undefined Data Type
The Undefined data type represents an absent or unknown value. This data type consists of a single value, undefined. Variables which have not yet been assigned a value default to the undefined
value. Variables can also intentionally be assigned the undefined value, although this is relatively uncommon*. The following example shows two variables which are both undefined. The first
variable, “foo”, is undefined because no value is assigned to it. The second variable, “bar”, is intentionally set to undefined.
var foo;
var bar = undefined;
*The undefined value is not directly assigned. There is actually a global variable named “undefined” whose value is the undefined value.
Null Data Type
Much like the Undefined type, the Null data type also represents the absence of a value. Unlike Undefined, Null represents an intentional lack of a value. The Null data type is represented by a
single literal, null. The null value is often used to initialize or clear object variables. The following example shows how the null value is assigned to a variable.
var foo = null;
Boolean Data Type
The Boolean type consists of two literals, true and false, which correspond to the truth values of Boolean logic. Boolean variables are typically used to represent the results of a comparison (less
than, greater than, etc.). They are also useful for representing the presence, or lack thereof, of a value. An example of this is a checkbox which is either checked or not checked. In the
following example, the “isSet” variable is set to false. The example also stores the result of a comparison in “isGreater”. Since two is greater than one, “isGreater” is true.
var isSet = false;
var isGreater = (2 > 1);
Number Data Type
All numeric data is represented by the Number data type. This type includes both negative and positive numbers. Negative numbers are always preceded by a minus sign. Positive integers can be
preceded by a plus sign, but it is not required. Numbers can be formatted in a variety of ways. The following list describes each of these ways. All numbers are assumed to be base ten unless
otherwise noted.
• Integers ― Integers are positive and negative numbers that do not have a fractional part.
• Real numbers ― Real numbers can have both an integer and a fractional part.
• Scientific notation ― Scientific notation is useful for representing extremely large and extremely small values. Values represented in scientific notation are formatted as a coefficient,
followed by the letter “e”, followed by an exponent. The coefficient can be an integer or real number, and the letter “e” can be either lowercase or uppercase. The exponent, however, must be an
• Hexadecimals ― Hexadecimal (or hex) values are base sixteen integers. Hexadecimal uses the ten decimal digits 0-9 and the letters A-F to represent numbers. Hex is often used as shorthand for
representing binary values. Hexadecimal values must begin with the characters “0x”. Hex values are not case sensitive. Therefore, the values 0XDEADBEEF and 0xdeadbeef are equivalent.
• Octals ― Octal values are base eight integers. Octal numbers can only include the decimal digits 0-7. JavaScript octals are specified by adding a leading zero to a number. However, because
people tend to subconsciously ignore leading zeros, octal notation can easily be confused with base ten notation. To prevent this problem, octal values are prohibited in strict mode.
The following example shows how each number format is written.
var integer = 100;
var real = 3.14; // pi
var scientific = 3.14e2; // 314, or 3.14 * 10^2
var octal = 0144; // base ten value is 100
var hexadecimal = 0x64; // base ten value is 100
Although programmers can specify numeric data in several formats, internally JavaScript stores all numbers as floating point numbers. A floating point number is a binary representation of a real
number. There are several ways to implement floating point numbers, however JavaScript uses the IEEE-754 standard. One interesting note is that IEEE-754 defines two zero values, +0 and -0. The two
values are generally treated as the same number, however it is useful to be aware of the distinction. The IEEE standard defines several other special numbers, which are explained below.
The Number type includes a special “Not-a-Number” (NaN) value which represents unrepresentable numbers. NaN can be used in mathematical computations, but any such computation will result in NaN.
The following example includes three statements which evaluate to NaN. The first statement divides zero by zero. The result is undefined as a real number and is assigned the value NaN. The second
statement computes the square root of a negative number. The result is an imaginary number, and is therefore treated as NaN. The third statement assigns the NaN value directly to a variable.
var foo = 0/0; // foo equals zero divided by zero
var bar = Math.sqrt(-1); // bar equals the square root of -1
var baz = NaN;
The Number type can also represent the mathematical concept of infinity. When used in code, infinity is represented by the value Infinity. Negative infinity can also be expressed as -Infinity.
Arithmetic involving infinity is governed by the rules in the following list. Note that JavaScript’s rules regarding infinity are not always the same as those in mathematics.
• Any finite number added to, or subtracted from, Infinity is Infinity.
• Any finite number added to, or subtracted from, -Infinity is -Infinity.
• Adding Infinity and -Infinity yields NaN.
• Any positive value (including Infinity) multiplied by Infinity is Infinity.
• Any positive value (including Infinity) multiplied by -Infinity is -Infinity.
• Any negative value (including -Infinity) multiplied by -Infinity is Infinity.
• Any negative value (including -Infinity) multiplied by Infinity is -Infinity.
• Zero multiplied by Infinity or -Infinity is NaN.
• Any finite value divided by Infinity or -Infinity is zero.
• Infinity divided by any finite positive value is Infinity.
• Infinity divided by any finite negative value is -Infinity.
• -Infinity divided by any finite negative value is Infinity.
• -Infinity divided by any finite positive value is -Infinity.
• Infinity divided by Infinity or -Infinity is NaN.
• -Infinity divided by Infinity or -Infinity is NaN.
• Any positive value (including Infinity) divided by zero is Infinity.
• Any negative value (including -Infinity) divided by zero is -Infinity.
String Data Type
Textual data is represented by the String data type. A JavaScript string is an ordered sequence of zero or more characters. String literals are created by enclosing a character sequence within
double or single quotes. The choice of using single or double quotes is purely stylistic. The only restriction is that the opening and closing quote must be of the same type. In other words,
strings beginning with a double quote must also end with a double quote. Similarly, strings that start with a single quote must be terminated with a single quote. It is also important to realize
that the opening and closing quotes are not part of the string value. The quotes are merely used to delimit the beginning and end of the string. The following example assigns three string literals
to variables. The first variable, “foo”, stores a string enclosed in double quotes, while the string stored in “bar” uses single quotes. The third string, stored in “baz”, is a special string
containing no characters, known as the empty string.
var foo = "Hello World!";
var bar = 'Hello Again!';
var baz = "";
Escape Sequences
One shortcoming of string literals is that they cannot represent certain characters. For example, a double quote cannot appear within a double quoted string literal because it would be interpreted
as the string’s terminator. String literals are also incapable of representing non-printing characters such as tabs and line breaks. In order to represent problematic characters, JavaScript
provides escape sequences. An escape sequence is a combination of characters, beginning with a backslash, that is interpreted as a single character (referred to as the escaped character). The
character(s) following the backslash determine the escaped character. The following list enumerates JavaScript’s escape sequences and the characters that they represent.
• \b ― Backspace
• \t ― Horizontal tab
• \n ― Line feed (new line)
• \v ― Vertical tab
• \f ― Form feed
• \r ― Carriage return
• \” ― Double quote
• \’ ― Single quote
• \\ ― Backslash
• \xXX ― Latin-1 encoded character specified by two hexadecimal digits. The hex value must be between 00 and FF. For example, \xA9 represents the copyright symbol.
• \uXXXX ― Unicode character specified by four hexadecimal digits. For example, the copyright symbol is specified by \u00A9.
• \XXX ― Latin-1 encoded character specified by up to three octal digits. The octal value must be between 0 and 377. Like octal numbers, octal escape sequences are also prohibited in strict mode.
The following example creates a string literal which contains several common escape sequences. Note the escaped double quote characters and the new line escape sequence.
var str = "Say \"Hello World\"\nAnd start a new line";
The previous string looks somewhat convoluted. However, when the string is displayed, the escape sequences are replaced, resulting in the following output.
Say "Hello World"
And start a new line
It is also possible to escape characters that do not have any special meaning. For example, the escape sequence \c is simply replaced by the letter “c”. However, this should be avoided as there is
no logical reason for doing so.
Things to Remember
• JavaScript is a weakly typed language, meaning that a variable’s data type can change during execution.
• The primitive data types are Undefined, Null, Boolean, Number, and String.
• The Undefined type represents an absent or unknown of value.
• The Null type represents an intentional lack of value.
• Boolean values can hold two values, true and false.
• Numeric data is stored in the IEEE-754 floating point format.
• The String type is used to store textual data.
One thought on “JavaScript Data Types”
1. Thanks for the very helpful article on javascript data types especially about numeric data types.
|
{"url":"http://cjihrig.com/blog/javascript-data-types/","timestamp":"2014-04-17T03:55:03Z","content_type":null,"content_length":"34061","record_id":"<urn:uuid:5e2be5ed-a20e-488e-bb4a-abecd8e98d3b>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
|
RC Phase Shift Oscillators - All About Circuits Forum
Not to worry. Confusion frequently accompanies the learning of a new concept.
Originally Posted by
does the shift in phase depend on the frequency..?
Yes. The shift in phase does depend on frequency.
Originally Posted by
Could you please explain to me how each stage contributes 60 degrees phase ?
Using the oscillator in the link you supplied as the circuit under dicussion, I should make clear that each RC stage contributes its portion toward the overall phase shift. This phase shift
contribution by each RC stage is 60 degrees only if the feedback network is made up of three identical RC stages.
I call your attention to the plot that is included in the write-up at the link you provided. You will see that the y-axis is marked off in degrees of phase shift and the x-axis is a logarithmic plot
of frequency. Set aside the blue and yellow traces for the moment as they relate to the phase-lead configuration. You will see that one of the phase-lag plots (the red trace) starts at zero frequency
with a phase-shift of zero degrees and as the frequency increases the phase shift which is negative indicating delay increases to a value of -90 degrees. It never quite reaches -90 degrees, it just
approaches -90 degress. The green trace is a different RC phase-lag stage. It goes through the same change but does so at a different rate due to the fact that is it made up of a different values of
R and C. This limitation of the phase-lag RC circuit to -90 degrees is the reason that it takes more than two RC stages to accomplish the total -180 degrees.
By performing this plot for each of the three RC stages in the oscillator design, it is possible to predict the frequency of oscillation by summing the phase shift of all three RC stages at each
frqeuency and noting at which frequency the three sum to 180 degress or in the phase lag case -180 degrees.
I think you would benefit by studying more material on RC circuits such at the material that is available on the AllAboutCircuits tutorial website. This would give you a stronger foundation for you
to use in your study of oscillators.
|
{"url":"http://forum.allaboutcircuits.com/showthread.php?t=4336","timestamp":"2014-04-19T22:28:43Z","content_type":null,"content_length":"70705","record_id":"<urn:uuid:0eee4d7b-c04b-45df-a74c-1247b4ac8d72>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fourier Transform of a Triangular Voltage Pulse
1. The problem statement, all variables and given/known data
So this is a physics problem, but this question doesn't really have to do with the "physics" part of it as much as simply calculating the Fourier transform. (This is a second year physics course and
our prof is trying to briefly teach us math tools like this in learning quantum mechanics).
2. Relevant equations
[tex] \tilde{g}(\omega) = \frac{1}{\sqrt2\pi} \int g(t) e^{-i \omega t} dt [/tex]
3. The attempt at a solution
I have done the calculation of g(ω) several times and got an answer
[tex] \frac{2}{(\tau \omega ^2 \sqrt2 \pi)} (1 - cos(\omega \tau)) [/tex]
I believe it is right, but since the work to get it is extensive I don't want to type it up unless someone thinks I made an error. My actual concern is that I have a problem sketching the transform.
I graphed it on Wolfram so I have a general idea, but I really have no idea how to find the amplitude, width, and whether it should be centred at ω=0 or at a k[0] value. Any insight would be greatly
|
{"url":"http://www.physicsforums.com/showthread.php?p=3817703","timestamp":"2014-04-19T02:09:16Z","content_type":null,"content_length":"31050","record_id":"<urn:uuid:c63abe9c-bc3a-459d-80b8-84c55c170530>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An Ancient Polynesian Culture Used A Completely Unique Binary Counting System
Binary, or base two, is the number system that computer systems use, as opposed to the decimal, or base ten, system used in our day-to-day lives.
Binary is generally associated with high technology and modern mathematics.
However, in a new paper published Dec. 15 in the Proceedings of the National Academy of Sciences, psychologists Andrea Bender and Sieghard Beller discuss how centuries ago, a Polynesian culture on
the small island of Mangareva developed a binary system to facilitate counting and calculations.
Mangareva is located in French Polynesia, about 1,000 miles southeast of Tahiti. Humans settled the island in three main waves — two waves of Polynesian settlement between 500 and 800 CE and between
1150 and 1450 CE, and a third wave after European colonization in the 19th century.
Mangarevan society, like many Polynesian societies, was based around a strict hierarchy of chiefs and peasants. The economy was built around trade, tributes, and feasts — peasants would offer the
chief tributes of staple food products, particularly turtles, fish, coconuts, octopuses, and breadfruit, and these goods would be redistributed by the chief at large feasts.
This economic organization made counting very important in Mangareva and other similar Polynesian cultures — keeping track of trade, tribute, and feast goods was essential to the system, and must
have been quite difficult in the absence of written notation.
All Polynesian cultures, including the Mangarevans, had a general counting system for day-to-day affairs. This system was a decimal system, based on powers of ten, similar to our own.
However, for those important tribute goods — turtles, fish, coconuts, octopuses, and breadfruit — the Mangarevans developed a special counting system, based partially on binary. This system was
recorded by the French missionaries who came to the island in the 19th century, and the paper's authors note some of the missionaries' ironic role in both recording, and, by introducing literacy and
Arabic numerals, leading to the extinction of this unique system.
The authors relied on the missionaries' reports, anthropological inference, and an abstract analysis of the system, to get an idea of how it worked and how it was used.
The special counting system is a hybrid of a decimal system and a binary system. Decimal systems like the one we use are based on powers of ten — we have ten digits (0, 1, 2, ..., 9) and we count
higher numbers by using digit multiples of powers of ten — 234 is two hundreds, plus three tens, plus four ones.
Binary systems like those used by computers are based on powers of two. There are only two digits — 1 and 0, and place-value is based on the powers of two: 1, 2, 4, 8, 16, 32, and so on. Counting in
binary, we start with 1, and then two is 10: 2 + 0. Three is 11: 2 + 1. Four is a power of two, and is written 100: 4 + 0 + 0. Five is 101: 4 + 0 + 1.
The Mangarevan system combined the decimal and binary systems in a unique way. Small numbers — one through nine — are represented by their normal digit words. But, for medium size numbers, the system
switches over to binary. The Mangarevans had special words for 10, 20, 40, and 80 — the first few powers of two, multiplied by ten. For larger numbers, the system switched back to decimal, taking
decimal multiples of eighty.
So, a number like 112 would be represented as the Mangarevan language equivalent of "eighty twenty ten two": 112 = 80 + 20 + 10 + 2. Another example: 361 would be "four eighties, forty, one": 361 = 4
x 80 + 40 + 1.
The big advantage of this hybrid system is that much of arithmetic becomes much easier, especially in a culture without writing. Addition in a decimal system has a pretty large number of rules that
we have to just memorize to be able to efficiently add. For example, 5 + 6 = 11 is something that gets drilled into most schoolchildren, as it is pretty clearly impractical to start at 5 and count
one at a time up 6 more every time we want to add these two numbers.
In the Mangarevan system, addition in the decimal parts works just like this.
But, when adding in the binary part of the Mangarevan system, there are only two basic addition rules. If a power of two number is added to itself, you get the next power of two number up: twenty
plus twenty equals forty. If a power of two number is added to a different power of two number, just include both in the sum: twenty plus ten is just "twenty ten".
This makes addition very straightforward in the Mangarevan system, a very useful property for counting up amounts of tribute goods.
The Mangarevan system is impressive, since it shows how different cultures can develop diverse number systems based on their needs. The human mind, and the human capacity for numeracy, is an
incredibly creative and flexible thing.
Comments on this post are now closed.
|
{"url":"http://www.businessinsider.com/mangarevan-binary-number-system-2013-12","timestamp":"2014-04-16T20:16:37Z","content_type":null,"content_length":"121058","record_id":"<urn:uuid:e9e3b757-f464-41bc-944b-c76d007a2a40>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Proving That Proving Is Hard
February 23, 2010
On the computational complexity of the theory of the real numbers
Alfred Tarski is one of the great logicians of the twentieth century. He is famous for the formal definition of truth, the decidability of many important theories, and many other results—one of the
strangest is his famous joint theorem with Stefan Banach. This theorem proves that any solid ball of one pound of gold can be cut into a finite number of pieces and reassembled into two solid balls
each of one pound of gold. Pretty neat trick.
Today I want to talk about the complexity of theories. There are some classic results that I think should be wider known, and their proofs use techniques that may have application elsewhere.
In the early days of theory there was a great deal of interest in the complexity of decision problems. I think these are natural problems, often they could be answered with strong lower bounds, and
further some of them are close to practical questions. For example, the theory of the reals occurs in many actual problems—especially special cases of the theory of the reals. The theory restricted
to existential sentences is quite natural and arise in many applications.
Upper Bounds on the Theory of the Reals
Tarski worked on various theories with the main goal of proving their decidability. Since Kurt Gödel’s famous result on the incompleteness of Peano Arithmetic, a goal has been to find weaker theories
that are complete. Such a theory has two neat properties:
1. the theory can either prove or disprove all statements in the theory;
2. the theory is usually decidable.
The latter only needs the theory to have a reasonable axiom system, then it will be decidable. Given a sentence ${\phi}$ in such a theory the decision procedure is simple:
Start enumerating proofs from the theory. If a proof of ${\phi}$ appears, then say the sentence is valid; if a proof of ${eg \phi}$ appears, then say the sentence is invalid.
Note, one of ${\phi}$ or ${eg \phi}$ must appear, since the theory is complete; and both cannot appear, since the theory is consistent. A complete theory is defined to always be consistent.
Tarski turned away from natural numbers to real numbers. The theory he studied is the theory of real numbers. It has a language that allows any first order formula made up from ${\{+,-,\times,=\}}$,
and its axioms are those for a field, with additional axioms for a real closed field. The last are axioms that state that every odd degree polynomial must have a root.
One of the main results of Tarski is:
Theorem: The first order theory of the real numbers is complete.
He used a method that is called quantifier elimination. A theory has quantifier elimination if the theory has the property that every formula is equivalent to an open formula: a formula with no
quantifiers. Tarski theorem immediately proves that the theory is decidable, but left open the actual computational complexity of its decison problem. Theorists to the rescue.
Lower Bounds on the Theory of the Reals
The following table lists some of the basic results concerning decidability of various theories. The problem is: Given a sentence ${\phi}$ in the theory is the sentence provable or not.
Clearly for propositional logic the problem is NP-complete, and has unknown computational complexity
Predicate calculus concerns whether or not an arbitrary formula is logically valid; as long as there is at least one predicate letter ${A(x,y,\dots)}$ of arity at least two, the theory is
undecidable. The trick is it is possible to encode the behavior of a Turing machine into a sentence using ${A}$, and show that the sentence is valid if and only if the Turing machine halts.
Michael Fischer and Michael Rabin in 1974 proved several beautiful exponential lower bounds on the complexity of any decision procedure for various theories. I will discuss today their work on the
theory of real numbers.
Of course no theory that includes propositional formulas can have an easy decision procedure, unless P=NP. My opinions aside, it is reasonable to expect the theory of reals to be much more powerful
than NP. This expectation turns out to be true: the interplay between the ability to add and multiply real numbers with predicate calculus’s abilities allows powerful encoding tricks that cannot be
done in pure propositional logic. Together these two features, operations on reals and predicate calculus, allow strong lower bounds to be proved.
Fischer and Rabin proved:
Theorem: In the theory of reals, there is a constant ${c>0}$ so that there are true sentences of length ${n>n_{0}}$ whose shortest proof in the theory is ${2^{cn}}$.
This is a very strong theorem. It is unconditional, and shows that the proofs themselves are huge. This is stronger than just proving they are hard to find, since if the proofs are large there can be
no method that always proves valid theorems quickly—the proofs are just too big.
Lower Bounds on the Complexity of Theories
I thought that it might be useful to recall how such proofs are obtained. They are quite clever, in my opinion, and rely on several neat tricks. Partly I hope that these tricks could be used
elsewhere in complexity theory.
How do you prove lower bounds on a theory? If the theory of the reals could define the set of integers, then the theory would be undecidable. Since the theory is decidable by the famous result of
Tarski, it must be the case the integers are undefinable. What Mike and Michael show is how to define a very large set of integers, with a relatively small formula. This allows them to use the same
techniques that proved the predicate calculus was undecidable to prove their own result.
For example, imagine we could define ${x \in \mathbb{N} \wedge x < 2^{2^{n}}}$ by ${B_{n}(x)}$ where ${B_{n}(x)}$ is a “small” formula. Then, the plan is to use this to simulate the halting problem
for Turing machines that do not use too much tape. An easy way to see that this will work is to look at a simpler application. Suppose that ${P(x)}$ stands for
$\displaystyle \forall r \forall s \ B_{n}(x) \wedge B_{n}(r) \wedge B_{n}(s) \wedge rs = x \rightarrow r=1 \vee s=1.$
Then, clearly ${P(x)}$ defines the primes below ${2^{2^{n}}}$. The sentence,
$\displaystyle \forall x \exists y \ B_{n}(x) \wedge B_{n+1}(y) \wedge y>x \wedge P(y) \wedge P(y+2)$
says that for any ${x<2^{2^{n}}}$ there is a twin prime above it. This would be very exciting if it had a short proof—actually even weaker statements would be exciting. What Mike and Michael prove is
in general such sentences are hard to prove—they do not, unfortunately, show this particular sentences is hard. Oh well.
A Recursion Trick
Mike and Michael do construct a formula ${B_{n}(x)}$ that defines
$\displaystyle x \in \mathbb{N} \wedge x < 2^{2^{n}}.$
Their construction uses recursion: ${B_{n}}$ is defined in terms of ${B_{m}}$ for some ${m<n}$. This should come as no shock, since recursion is so useful in all of computer science. However, in
order to make their theorem work, they need to avoid making too many recursive calls. They use a “trick” that was discovered by Mike Fischer with Albert Meyer and independently by Volker Strassen.
Let ${F(x,y)}$ be any formula and consider the conjunction:
$\displaystyle G = F(x_{1},y_{1}) \wedge F(x_{2},y_{2}).$
It follows that ${G \leftrightarrow H}$ where
$\displaystyle H = \forall x \forall y \big[ \left((x=x_{1} \wedge y=y_{1}) \vee (x=x_{2} \wedge y=y_{2}) \right) \rightarrow F(x,y) \big].$
The reason this construction is useful is simple: it is the size of the resulting formulas. The size of ${G}$ is roughly twice that of ${F}$, but the size of ${H}$ is only the size of ${F}$ plus a
constant. This is one of the key tricks used in their proof: it keeps the size of the formula ${B_{n}}$ small.
A Name Trick
In their construction of ${B_{n}}$ they also use many variable names. The naive method would use an exponential number of variables, and this would cause their construction to explode in size. They
cleverly reuse names—in this way avoid having to use very long names. It is not too far from a trick that was used, many years later, by Adi Shamir in his famous proof that ${\mathsf{IP}=\mathsf
Mike and Michael use this trick. Consider this formula,
$\displaystyle \forall x \dots \forall x A(x) \dots \ \ \ \ \ (1)$
There is a name collision between the first ${x}$ and the second ${x}$: we would usually write this with different variables as
$\displaystyle \forall x \dots \forall x' A(x') \dots$
to make it clearer. However, the name collision is allowed in first order logic formulas, since it is easy to disambiguate the meaning of the formula (1). The importance of this trick is it allows
the re-use of variable names; this re-use saves the number of variables needed and this reduces the size of the formulas ${B_{n}}$.
Lower Bounds on the Theory of the Complex Numbers?
I believe their results hold for this case too, but they do not seem to explicitly state this. Perhaps this is a well known result, but I cannot track down the reference.
Open Problems
Can we use any of the tricks here in other parts of complexity theory? The recursion trick seems to me to be quite powerful. Perhaps we can use this trick in another part of theory.
A Postscript
Congratulations to Michael Rabin for being awarded the Dan David Prize for 2010. Wonderful.
1. February 23, 2010 2:36 pm
How about Proving that, Proving that Proving is Hard, is Hard?
2. February 23, 2010 3:22 pm
The recursion trick is useful in showing that TQBF (True Quantified Boolean Formula) is PSPACE-complete.
3. February 24, 2010 4:45 am
Just because proofs are huge doesn’t necessarily make them hard to find. Richard Borcherds pointed out (I have the impression this is something well known by logicians, but he’s the one I heard
it from) that you can write a Gödel-like sentence that is true, but instead of saying “I cannot be proved”, says “I cannot be proved in less than 10**1000 symbols”. The sentence is provable since
you can enumerate proofs that are big enough, and since it’s true, there’s no shorter proof. However, the big proof has low complexity–it’s not hard to find. And in fact by adding a new axiom
(specifically the consistency of the system) the proof becomes trivial (since the sentence is true).
One of these days I want to read Boolos and Jeffrey’s book “Computability and Logic” which I’ve heard has a bunch of stuff like this in it.
□ February 24, 2010 12:39 pm
I’m sorry but your comment does not make sense. “10**1000 symbols” is still just a constant number of symbols. The theorem by Fischer and Rabin mentioned here provides an exponential
lowerbound on the size of proof with respect to the size n of the input formula. So even when you already know the proof, simply write down this proof will take exponential time.
☆ February 25, 2010 3:18 am
Well fine, but again, “difficult to find” and “lengthy to write down” are not the same thing. Like the decimal expansion of (10**n)-1 is exponentially long in |n| but easy to compute.
Also what is the proof length in the Fischer-Rabin theorem if you introduce reflection principles (iterated consistency statements) to the axioms, etc.?
□ February 25, 2010 10:16 am
This whole business of proof complexity is always about searching for polynomial size proofs. If there’s no polynomial size proof, then it’s the end of the story.
But you have a point, because we might say that the proof is given implicitly, which allows us to encode long proofs. Each implicit proof is a polytime Turing Machine M(i) (i in binary),
where M(i) returns the proof line at position i. For implicit proof, you might want to have a look at the Masters thesis by Pavel Sanda, “Implicit propositional proofs”. I’m not aware of any
strengthen version of the Fischer-Rabin theorem that says even implicit proofs are hard to find. But maybe someone here can give us more pointers.
☆ February 25, 2010 7:56 pm
I’ll try to find that thesis about implicit proofs–I’d never heard that term before. My main question is whether just because there’s no short proof in a particular formal system, doesn’t
mean there isn’t one in a stronger system that’s still considered to have reasonable axioms. PA has no proof of CON(PA), but if you don’t believe PA is consistent, why do you care what
you can prove in it? If you believe PA then you should be willing to use PA+CON(PA)+CON(PA+CON(PA))+…. in your proofs (“iterated reflection”). I think that means all the Pi-0-1 statements
in the signature of PA become provable, but I’m not sure of that. I also seem to remember that all arithmetic statements are decidable if you iterate the consistency statement through the
transfinite countable ordinals, but then you no longer have an effective theory.
4. February 25, 2010 4:55 am
>This theorem proves that any solid ball of one pound of gold can be cut into a finite number of > pieces and reassembled into two solid balls each of one pound of gold. Pretty neat trick.
It actually says form and shape identical to the original ball, the mass would be different
□ February 28, 2010 1:00 am
Also, since each one of the finitely many pieces is allowed to be infinite, the trick is not that neat after all.
5. February 28, 2010 12:33 pm
The main result described in this post is really fascinating and the techniques for shortening formulas seem cool. Quantifier elimination apparently also has applications to optimization—actual
hard-core problems that one could program and solve, which I’d like to explore more sometime.
So I feel a bit bad offering mildly critical remarks on a great post. But… wouldn’t the statement of the Fischer-Rabin result be a bit clearer if the inequality between n and n_0 were written in
the reverse order, so that it would be clear that n_0 is the length of the formula? And…it might seem harmless, but the use of “ball of gold” in stating the Banach-Tarski theorem bothers me. The
theorem is not about balls of gold; it’s a piece of mathematics. Properties of balls of gold are the province of physics, chemistry, and everyday life. The theorem probably has little relevance
to the properties of balls of gold. It’s possible some naive blogosphere-navigators may come across this statment and be misled. Of course, if they are so easily misled they’re likely to
encounter bigger problems than this one… still…
I think questions of how mathematics relates to physics and reality are fascinating and somewhat important… I guess the use of this example, without maybe some cute parable introduced by “if gold
really behaved like … which of course it doesn’t…” ..just seems to make a little light of this relationship. Not that all mathematics should have a close relationship to reality…but I like to see
the nature of this relationship, or lack of it, treated seriously enough to avoid casually turning theorems into statements that taken literally, probably misrepresent both the physics and the
Sorry if that comes off sounding like a rant. I don’t mean it to. Learning about the Fischer-Rabin result from such a clearly written post was a real pleasure and I mean to look into it, and the
implications for optimization, further.
6. August 18, 2010 7:31 pm
“Clearly for propositional logic the problem is NP-complete, and has unknown computational complexity”. Don’t you mean NP-hard ? Because if it was complete, the complexity would be known.
1. Wine, Physics, and Song » Blog Archive » Interesting links for Sunday 2/28/2010: political econ of protectionism, complexity of the theory of the reals,
Recent Comments
Jon Awbrey on In Praise Of P=NP Proofs
Joshua Zelinsky on In Praise Of P=NP Proofs
mkatkov on In Praise Of P=NP Proofs
Ibrahim Cahit on In Praise Of P=NP Proofs
Paul Beame on In Praise Of P=NP Proofs
In Praise Of P=NP Pr… on No-Go Theorems
In Praise Of P=NP Pr… on Graph Isomorphism and Graph…
In Praise Of P=NP Pr… on Can Amateurs Solve P=NP?
Jon Awbrey on Triads and Dyads
Pip on Triads and Dyads
Hendrik Jan Hoogeboo… on Triads and Dyads
Mike R on The More Variables, the B…
maybe wrong on The More Variables, the B…
Jon Awbrey on The More Variables, the B…
Henry Yuen on The More Variables, the B…
|
{"url":"https://rjlipton.wordpress.com/2010/02/23/proving-that-proving-is-hard/","timestamp":"2014-04-20T15:58:23Z","content_type":null,"content_length":"107123","record_id":"<urn:uuid:0e5101e9-f49d-4858-85b0-a389b2e09d84>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Article - National News
Mathematicians try to come up with new Equations to Attract Blacks
by Lorinda Bullock
NNPA National Correspondent
WASHINGTON (NNPA) – Mathematicians are known for figuring out the world’s most difficult equations and finding ways to apply them to nearly every aspect of daily life. Black mathematicians find
themselves not only working in their chosen field of study, but also working to solve one of their most complex equations yet—why so few of them exist.
Of the nearly 15, 000 math professors in the United States, there are only about 300 who are Black and about 500 who are Hispanic. Out of the 433 Math Ph.D.s awarded last year to U.S. citizens, 14
were awarded to Black Americans, said the American Mathematical Society.
Duane Cooper, a math professor at Morehouse College, said a general perception of math being ''too difficult'' contributes to the low numbers.
“I think when students say math doesn’t make sense; it just kind of hurts me because nothing makes more sense than mathematics,” Cooper said. “Everything fits together beautifully and logically and
so in some sense if it doesn’t make sense, somewhere we have failed to help you see why it makes sense.”
So rather than keep their elite club of professors, statisticians, and analysts exclusive, Black mathematicians like Cooper are striving to widen their circle.
In just the last two weeks, two major events have taken place to encourage greater Black and minority participation in all levels of math—the Blackwell-Tapia Conference in Minnesota and the 16th
annual MathFest that was held at Howard University.
“One of the major purposes of the conference is to showcase what’s been achieved by this group of people and to give an opportunity for people to get together for the younger people in the field to
meet the successful senior people,” said Douglas Arnold, a professor of mathematics and director of the Institute for Math and Its Applications at the University of Minnesota.
During the Blackwell-Tapia conference, the nearly 150 minority mathematicians joined together to discuss trends in minorities in math, and put on a program called “Math Is Cool” for nearly 100 local
minority high school students.
Cooper knows all too well the importance of all of these functions. When he earned his Ph.D in 1993, he was one of about five Blacks to be awarded a doctorate in mathematics that particular year. He
said events like the Blackwell-Tapia Conference and Mathfest are encouraging a new generation of Black mathematicians.
“The numbers (of Black Ph.D.s) were in single digits fairly steadily until the late 90s. But we’ve stayed there. So it’s still a small number… There are various programs and efforts to try to do a
little better. But there’s still plenty to be done,” he said.
At the MathFest, math undergraduate students from Howard, Morehouse, Spelman, Delaware State, Morgan State and others met their peers and mathematicians working in science, national security, and for
large accounting firms.
Panelists at MathFest explained that math can help the U.S. government break foreign codes in our airwaves to figuring out why Monarch butterflies may no longer exist in the next 20 years.
During a question and answer period, students were delighted to find out their chosen career path can be lucrative and fulfilling. Certain jobs, the panelists said, may have starting of $60,000 with
just a Bachelor’s degree. For Ph.D.s, the students were told, some tenured math professors could easily earn six figures.
Ashley Crump, junior math major from Howard, fell in love with math as a fourth grader in Ft. Worth, Texas. She said her fourth grade teacher and high school Advanced Placement Calculus teachers
inspired her to pursue math in college. She found the entire conference helpful.
“When I first got here (to Howard), I had no idea what I was going to do with math. I had no idea about graduate school, no one ever told me about that. I was just doing it because I liked math. So
programs like these, different conferences to go to, really teach you more about the opportunities, more about your field. You get to meet a lot of people and you see those same people at different
conferences so you get to network,” she said.
Crump plans on going to graduate school and pursuing her Ph.D. Crump said like her teachers, she would like to go her old school and encourage Black students to get into math.
“I want to at some point and go back to explain to students there’s money to be made and people don’t like it so if you can do it. Go do it and you will be a commodity,” she said.
The idea of getting excited about math and spreading it to other young Black people is exactly why Scott Williams became one of the founders of the National Association of Mathematicians, the
organization responsible for MathFest, and the creator of the Mathematicians of the African Diaspora website.
Williams, a world-renowned math professor currently at the State University of New York at Buffalo, remembers when he was one of about four Black Ph.D.s in 1969.
Sitting in the back row of the auditorium, Williams was beaming as he looked out over the crowd mixed with students, professors and math professionals discussing internship and job opportunities.
“When I started out I didn’t know anybody (Black) in mathematics. It was a while before I got to learn a few people. So I think organizations like this are phenomenal,” he said.
“I realized we needed to have some connections.”
Numbers from the College Board show that while numbers are improving for Black students taking the Advanced Placement Calculus exams in the last decade, they still make up a small percentage of test
Of the 248,000 students who took the AP Calculus AB and BC exams in 2006, only 9,680 were Black.
Crump said kids need to become “comfortable” with math early on, but more enthusiastic teachers and parents are needed to guide kids along the way.
“I think middle school is the most important time in your life. You learn the most and that’s when you decide you’re going to college. I think it’s the most important time that we need to express to
young, Black students that they need to be comfortable with math. They may not love it, but they need to become comfortable,” she said.
Eager students like Crump reassure Williams that the future of Black mathematicians is in good hands.
“There’s just a wealth of possibilities. Kids think, you look at the math teachers in high school and this is what I can do with it. You can do so much more,” he said.
“I know people with degrees in mathematics who have gone into law and medicine and all kinds of things. You are trained to think precisely about things. This is one advantage to have that training.
So there are many, many things possible with mathematics.”
|
{"url":"http://www.ima.umn.edu/~arnold/press/BlackPressUSA-11-19-06.html","timestamp":"2014-04-16T08:16:10Z","content_type":null,"content_length":"35059","record_id":"<urn:uuid:9a21b640-799f-4577-ac51-680c32d174ae>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Characteristic Algebras of Fully Discrete Hyperbolic Type Equations
SIGMA 1 (2005), 023, 9 pages nlin.SI/0506027 http://dx.doi.org/10.3842/SIGMA.2005.023
Characteristic Algebras of Fully Discrete Hyperbolic Type Equations
Ismagil T. Habibullin
Institute of Mathematics, Ufa Scientific Center, Russian Academy of Sciences, 112 Chernyshevski Str., Ufa, 450077 Russia
Received August 04, 2005, in final form November 30, 2005; Published online December 02, 2005
The notion of the characteristic Lie algebra of the discrete hyperbolic type equation is introduced. An effective algorithm to compute the algebra for the equation given is suggested. Examples and
further applications are discussed.
Key words: discrete equations; invariant; Lie algebra; exact solution; Liuoville type equation.
pdf (187 kb) ps (148 kb) tex (11 kb)
1. Leznov A.N., Savel'ev M.V., Group methods of integration of nonlinear dynamical systems, Moscow, Nauka, 1985 (in Russian).
2. Shabat A.B., Yamilov R.I., Exponential systems of type I and the Cartan matrices, Preprint, Ufa, 1981.
3. Zabrodin A.V., The Hirota equation and the Bethe ansatz, Teoret. Mat. Fiz., 1998, V.116, N 1, 54-100 (English transl.: Theoret. and Math. Phys., 1998, V.116, N 1, 782-819).
4. Ward R.S., Discrete Toda field equations, Phys. Lett. A, 1995, V.199, 45-48.
5. Adler V.E., Startsev S.Ya., On discrete analogues of the Liouville equation, Teoret. Mat. Fiz., 1999, V.121, N 2, 271-284 (English transl.: Theoret. and Math. Phys., 1999, V.121, N 2, 1484-1495).
6. Hirota R., The Bäcklund and inverse scattering transform of the K-dV equation with nonuniformities, J. Phys. Soc. Japan, 1979, V.46, N 5, 1681-1682.
7. Habibullin I.T., Characteristic algebras of the discrete hyperbolic equations, nlin.SI/0506027.
8. Habibullin I.T., Discrete Toda field equations, nlin.SI/0503055.
|
{"url":"http://www.emis.de/journals/SIGMA/2005/Paper023/","timestamp":"2014-04-20T03:21:43Z","content_type":null,"content_length":"5476","record_id":"<urn:uuid:ac52cc5b-8693-4e16-b007-b878831f6aed>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ADC Guide, Part 11: ADC Noise
For convenience here are pdfs of part 10 and part 11 of the series dealing with ADC noise issues.
In the previous part of this series, we discussed about noise basics and how they affect an ADC’s output. We will continue this discussion about noise and cover Signal-to-Noise and Distortion ratio
and ENoB, all commonly used specifications of an ADC.
Peak-to-peak noise is important for a very limited set of applications where the accuracy of analog to digital conversion is of utmost importance. One such example is precise weighing scales where
the ADC has to measure very small analog voltages extremely accurately.
For most of the general applications of an ADC, RMS noise is the parameter considered as the measure for DC noise performance of the ADC. It is apparent in Figure 3 from the previous part of this
series that the typical distribution of a grounded input histogram approximately assumes the shape of a Gaussian curve. This Gaussian curve is marked in red in Figure 3. The difference between the
actual distribution and perfect Gaussian arises from the DNL of the ADC. As we have already seen in part 5 of this series, if DNL is more than -1 LSB, missing codes can result which will make this
distribution go far off from an ideal Gaussian.
We can compute the RMS noise by statistical methods. As a rule of thumb, peak-to-peak noise is around 6 to 8 times the RMS noise of the ADC, assuming an approximate Gaussian distribution. RMS noise
can be expressed in terms of number of counts or number of LSBs, similar to peak-to-peak noise.
The Signal-to-Noise Ratio (SNR) is the parameter of an ADC which accounts for the noise in the ADC. As it was derived in the first part of this series, the SNR of an ideal ADC is given by equation
(4) below:
Click on image to enlarge.
The SNR of an ideal ADC is also known as signal-to-quantization noise ratio (SQNR) for obvious reasons.
For a practical ADC, the signal-to-noise ratio is always less than the SNR value of an ideal ADC of the same resolution due to added noise from the noise sources mentioned previously. The SNR for a
practical ADC can be calculated from the FFT of the output of the ADC. It depends upon the power of the fundamental signal and noise. The noise power can be estimated by removing the power of
fundamental and harmonic components from the total signal power. The RMS noise voltage is marked as
Figure 1
. Therefore, the SNR of practical ADC is given by
equation (5)
as below:
Click on image to enlarge.
Although we do not consider the power content of the harmonics frequencies when calculating the SNR of an ADC, harmonics are in fact equally important when selecting an ADC for a particular
application. ‘THD+N’ is the parameter which adds up the effect of noise and harmonics. It is defined as the power of harmonics and noise with respect to power of fundamental frequency component and
is given by
equation (6)
Click on image to enlarge.
|
{"url":"http://www.planetanalog.com/document.asp?doc_id=532482&piddl_msgorder=thrd","timestamp":"2014-04-17T01:28:36Z","content_type":null,"content_length":"115282","record_id":"<urn:uuid:d0daa602-6c34-417b-a8f0-7b274f30adf8>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Continuous Random Variables
When a random variable may assume any numerical value in one or more intervals on the real number line, then the random variable is called a continuous random variable. For example, the EPAcombined
city and highway mileage of a randomly selected midsize car is a continuous random variable. Furthermore, the temperature (in degrees Fahrenheit) of a randomly selected cup of coffee at a fast-food
restaurant is also a continuous random variable. We often wish to compute probabilities about the range of values that a continuous random variable x might attain. For example, suppose that marketing
research done by a fast-food restaurant indicates that coffee tastes best if its temperature is between 153° F and 167° F. The restaurant might then wish to find the probability that x, the
temperature of a randomly selected cup of coffee at the restaurant, will be between 153° and 167°. This probability would represent the proportion of coffee served by the restaurant that has a
temperature between 153° and 167°. Moreover, one minus this probability would represent the proportion of coffee served by the restaurant that has a temperature outside the range 153° to 167°.
In general, to compute probabilities concerning a continuous random variable x, we assign probabilities to intervals of values by using what we call a continuous probability distribution. To
understand this idea, suppose that f (x) is a continuous function of the numbers on the real line, and consider the continuous curve that results when f (x) is graphed. Such a curve is illustrated in
Figure 6.1. Then:
Continuous Probability Distributions
The curve f(x) is the continuous probability distribution of the random variable x if the probability that x will be in a specified interval of numbers is the area under the curve f(x) corresponding
to the interval. Sometimes we refer to a continuous probability distribution as a probability curve or as a probability density function.
An area under a continuous probability distribution (or probability curve) is a probability. For instance, consider the range of values on the number line from the number a to the number b—that is,
the interval of numbers from a to b. If the continuous random variable x is described by the probability curve f(x), then the area under f(x) corresponding to the interval from a to b is the
probability that x will attain a value between a and b. Such a probability is illustrated as the shaded area in Figure 6.1. We write this probability as P(a ≤x≤ b). For example, suppose that the
continuous probability curve f(x) in Figure 6.1 describes the random variable x = the temperature of a randomly selected cup of coffee at the fast-food restaurant. It then follows that P(153≤ x ≤167)
- the probability that the temperature of a randomly selected cup of coffee at the fast-food restaurant will be between 153° and 167°—is the area under the curve f(x) between 153 and 167.
We know that any probability is 0 or positive, and we also know that the probability assigned to all possible values of x must be 1. It follows that, similar to the conditions required for a
discrete probability distribution, a probability curve must satisfy the following properties:
Properties of a Continuous Probability Distribution
The continuous probability distribution (or probability curve) f(x) of a random variable x must satisfy the following two conditions: 1 f (x)≥0 for any value of x. 2 The total area under the curve f
(x) is equal to 1. Any continuous curve f (x) that satisfies these conditions is a valid continuous probability distribution. Such probability curves can have a variety of shapes—bell-shaped and
symmetrical, skewed to the right, skewed to the left, or any other shape. In a practical problem, the shape of a probability curve would be estimated by looking at a frequency (or relative frequency)
histogram of observed data. Later in this article, we study probability curves having several different shapes.
We have seen that to calculate a probability concerning a continuous random variable, we must compute an appropriate area under the curve f (x). In theory, such areas are calculated by calculus
methods and/or numerical techniques. Because these methods are difficult, needed areas under commonly used probability curves have been compiled in statistical tables. As we need them, we show how to
use the required statistical tables. Also, note that since there is no area under a continuous curve at a single point, the probability that a continuous random variable x will attain a single value
is always equal to 0. It follows that in Figure 6.1 we have P(x=a) = 0 and P(x = b)=0. Therefore, P(a ≤ x ≤ b) equals P(a < x < b) because each of the interval endpoints a and b has a probability
that is equal to 0.
|
{"url":"http://answers.mheducation.com/math/statistics/college-statistics/continuous-random-variables","timestamp":"2014-04-21T14:40:59Z","content_type":null,"content_length":"52215","record_id":"<urn:uuid:6b4b4af1-035f-489a-aef4-61f805e6f481>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maps that are a.e. equal have almost the same graphs
up vote 3 down vote favorite
Let $X$ and $Y$ be two measurable spaces, and let $p$ be a probability measure on $X\times Y$. Denote by $p_X$ the marginal of $p$ on $X$, that is an image of $p$ under projection on $X$. Consider
two measurable functions $f, g:X\to Y$ such that $f = g$ holds $p_X$-a.e. Is that true that $$ p\left(\mathrm{Gr}[f]\,\Delta\, \mathrm{Gr}[g]\right) = 0 \tag{1} $$ where $\Delta$ is the symmetric
difference of sets and $$ \mathrm{Gr}[f]:=\{(x,f(x)):x\in X\} $$ is the graph of $f$ in $X\times Y$. Actually, I am mostly interested in the case when both $X$ and $Y$ are Borel spaces, and $f$ and
$g$ are universally measurable maps, so in case $(1)$ does not hold in general, I would be still happy to know whether it holds true under some the latter assumptions.
I guess, one of the sufficient conditions would be that $p$ admits a regular kernel $\mu$ w.r.t. $p_X$.
pr.probability measure-theory descriptive-set-theory
add comment
1 Answer
active oldest votes
The answer is easily yes, because we have $$ \operatorname{Gr}(f)\Delta \operatorname{Gr}(g)\subset N\times Y$$ where $N:=\{x\in X\, :\, f(x)\neq g(x) \}$ by assumption has null
up vote 3 down vote measure $$p_X(N):= p(\operatorname{Pr}_X^{-1}(N))=p( N\times Y )=0.$$
Indeed, easier than I thought! – Ilya Jul 11 '13 at 13:03
add comment
Not the answer you're looking for? Browse other questions tagged pr.probability measure-theory descriptive-set-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/136396/maps-that-are-a-e-equal-have-almost-the-same-graphs","timestamp":"2014-04-17T18:39:29Z","content_type":null,"content_length":"52273","record_id":"<urn:uuid:7a13d261-864f-471d-8fd0-be8ab6d780da>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elementary and Intermediate Algebra 5th edition by Tussy | 9781111567682 | Chegg.com
Elementary and Intermediate Algebra 5th edition
Details about this item
Elementary and Intermediate Algebra: Algebra can be like a foreign language, but ELEMENTARY AND INTERMEDIATE ALGEBRA, 5E, gives you the tools and practice you need to fully understand the language of
algebra and the "why" behind problem solving. Using Strategy and Why explanations in worked examples and a six-step problem solving strategy, ELEMENTARY AND INTERMEDIATE ALGEBRA, 5E, will guide you
through an integrated learning process that will expand your reasoning abilities as it teaches you how to read, write, and think mathematically. Feel confident about your skills through additional
practice in the text and Enhanced WebAssign. With ELEMENTARY AND INTERMEDIATE ALGEBRA, 5E, algebra will make sense because it is not just about the x...it's also about the WHY.
Back to top
Rent Elementary and Intermediate Algebra 5th edition today, or search our site for Alan S. textbooks. Every textbook comes with a 21-day "Any Reason" guarantee. Published by CENGAGE Learning.
|
{"url":"http://www.chegg.com/textbooks/elementary-and-intermediate-algebra-5th-edition-9781111567682-1111567689?ii=13&trackid=967f7a84&omre_ir=1&omre_sp=","timestamp":"2014-04-20T22:33:22Z","content_type":null,"content_length":"22791","record_id":"<urn:uuid:23d4be19-d243-4356-93ee-f88c4428b93b>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Berlin, MA Math Tutor
Find a Berlin, MA Math Tutor
...I present regularly for work before small audiences (5-10 folks) as well as large (my record so far is more than 4,500!). Public and motivational speaking is one of my favorite ways to spend
time. I would be happy to do some coaching in this area as well, either generally or in run-up to a speci...
63 Subjects: including algebra 2, GRE, SAT math, writing
...I usually explore different problem solving techniques with the student until we find the ones that work best. I give lots of practice problems to ensure the student is confident and excelling
in the specific areas. I also help with homework.
16 Subjects: including SAT math, trigonometry, reading, precalculus
...Another way I work with kids is as a ski coach. What I love about tutoring is similar to what I love about coaching--helping students improve their performance.I have taught chemistry to a
number of students in the past few years. I've found that students often have many of the skills needed to...
31 Subjects: including probability, geometry, chemistry, prealgebra
...I am also flexible with my schedule. Looking forward to hearing from you! Cheers, SusieI have played violin since I was 5 years old.
11 Subjects: including algebra 1, SAT math, algebra 2, Spanish
...People are surprised at how quickly they can learn these subjects once they are given a clear explanation. I have over 20 years of experience tutoring accounting, finance, economics and
statistics. I have a master's degree in accounting, and I currently teach statistics, accounting, and finance at local colleges, where students have given me great evaluations.
14 Subjects: including algebra 2, trigonometry, SAT math, algebra 1
Related Berlin, MA Tutors
Berlin, MA Accounting Tutors
Berlin, MA ACT Tutors
Berlin, MA Algebra Tutors
Berlin, MA Algebra 2 Tutors
Berlin, MA Calculus Tutors
Berlin, MA Geometry Tutors
Berlin, MA Math Tutors
Berlin, MA Prealgebra Tutors
Berlin, MA Precalculus Tutors
Berlin, MA SAT Tutors
Berlin, MA SAT Math Tutors
Berlin, MA Science Tutors
Berlin, MA Statistics Tutors
Berlin, MA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Berlin_MA_Math_tutors.php","timestamp":"2014-04-17T21:34:35Z","content_type":null,"content_length":"23638","record_id":"<urn:uuid:0313d090-3867-41fd-b48f-f3e4182e4faf>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sketches for categories of models of complete theories
up vote 3 down vote favorite
In Accessible categories : the foundations of categorical model theory, chapter 3 p.58, Makkai and Paré claim that there is "an (obvious) identification of a class of sketches so that the categories
Mod(S) for such sketches S are precisely the categories of models of complete theories with elementary embeddings as morphisms".
In other terms, there seems to be a "sketchable" counterpart of the property of completeness of a formal theory. But there is no explicit reference in the book.
What can be this sketch-theoretical property for such identification? Is there any paper where this identification is explicit?
Many thanks, in advance.
ct.category-theory model-theory categorical-logic sketches
add comment
1 Answer
active oldest votes
This isn't a direct answer, but rather consists of two references (in the same book) which when you put them together provides a proof that all elementary categories are sketchable.
The book is "Locally Presentably Categories and Accessible Categories" by Adamek and Rosicky.
The first reference, Theorem 5.42 and Theorem 5.44. These theorems together show that a category is accessible if and only if it is equivalent to a cateory whose objects are models of
up vote 2 some theory $T \subseteq L_{\kappa,\kappa}$ and whose morphisms consist of all embeeddings which preserve all formulas of $L_{\kappa, \kappa}$.
down vote
In particular this implies that for any first order theory $T$ the category whose objects are of models of $T$ and whose morphisms are elementary embeddings is accessible.
The second reference is Chapter 2F, and in particular Theorem 2.58 and Theorem 2.60 where they show a category is accessible if and only if it is sketchable.
add comment
Not the answer you're looking for? Browse other questions tagged ct.category-theory model-theory categorical-logic sketches or ask your own question.
|
{"url":"http://mathoverflow.net/questions/144583/sketches-for-categories-of-models-of-complete-theories","timestamp":"2014-04-16T20:12:23Z","content_type":null,"content_length":"50411","record_id":"<urn:uuid:38661f1c-6282-46b1-9283-12c1e12825b5>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Thermodynamic formula - transposition help please
December 17th 2012, 01:03 PM #1
Dec 2012
Thermodynamic formula - transposition help please
Hi all,
I've recently decided to get back into education having been out for years....I was ok at Math's back in the day!!
I'm studying thermodynamics at HND level and have come across a formula I just can't seem to get my head around.
I have the answer from the worked example solution but can't get how he got there. any help much appreciated
Formula for calculating amount of flash steam generated from hot condensate
(mg + mf ) hf1 = mg x hg2 + mf x hf2 .....we're looking for mg - the mass of gas/steam released
the figures are as follows
(mg+mf) = 550 -- the amount of condensate supplied (mf is water content, mg is vapour content)
hf1 = 670 -- enpalthy value of condensate going in
hg2 = 2697 -- enpalthy value of steam in vessel at a different pressure
hf2 = 467 -- enpalthy of water contenet of steam in vessel
In order to remove one of the two unknown's mf which is not given = (550 - mg)
Putting into formula
550 x 670 = mg x 2697 + (550 - mg)467
it's at this point that the formula needs to be transposed for mg and I have spent ages with no success.
Thanks in advance guys.
Re: Thermodynamic formula - transposition help please
(mg + mf ) hf1 = mg x hg2 + mf x hf2 .....we're looking for mg - the mass of gas/steam released
I'm just guessing with the variables you provided w/regard to subscripts and such ...
$mg \cdot hf_1 + mf \cdot hf_1 = mg \cdot hg_2 + mf \cdot hf_2$
$mg \cdot hf_1 - mg \cdot hg_2 = mf \cdot hf_2 - mf \cdot hf_1$
$mg(hf_1 - hg_2) = mf(hf_2 - hf_1)$
$mg = \frac{mf(hf_2 - hf_1)}{hf_1 - hg_2}$
December 17th 2012, 03:17 PM #2
|
{"url":"http://mathhelpforum.com/algebra/209997-thermodynamic-formula-transposition-help-please.html","timestamp":"2014-04-21T03:28:07Z","content_type":null,"content_length":"35148","record_id":"<urn:uuid:0ea3b8cb-3c5d-43cc-88ef-0ba251faf248>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
|
February 8, 2011 by John Scammell
Thanks to this post on Dan Meyer’s blog, and an ensuing conversation between Dan and Curmudgeon, I was pointed to an article that I think would make a pretty compelling problem in Math 10C or Math
10-3 measurement.
The article describes a 17 year old driver who was given a $190 ticket for going 62 miles an hour in a 45 mile an hour zone. His parents, however, had installed a GPS system in his car to track his
speed and driving habits, and they claim the GPS proves their son was only going 45 miles an hour at the time the ticket was issued. It appears to have taken two years of legal wrangling, before the
ticket was finally upheld, and he had to pay the fine. I wouldn’t tell the students that yet, though.
Here’s a link to the article: Speeding Teenager
Lesson Plan
1. Present the problem.
Give the students the following excerpt from the article:
Shaun Malone was 17 when a Petaluma police officer pulled him over on Lakeville Highway the morning of July 4, 2007, and wrote him a ticket for going 62 mph in a 45-mph zone.
Malone, now 19, was ordered to pay a $190 fine, but his parents appealed the decision, saying data from a GPS system they installed in his car to monitor his driving proved he was not speeding.
What ensued was the longest court battle over a speeding ticket in county history.
In her five-page ruling, Commissioner Carla Bonilla noted the accuracy of the GPS system was not challenged by either side in the dispute, but rather they had different interpretations of the
All GPS systems in vehicles calculate speed and location, but the tracking device Malone’s parents installed in his 2000 Toyota Celica GTS downloaded the information to their computer. The system
sent out a data signal every 30 seconds that reported the car’s speed, location and direction. If Malone ever hit 70 mph, his parents received an e-mail alert.
Malone was on his way to Infineon Raceway when Officer Steve Johnson said he clocked Malone’s car going 62 mph about 400 feet west of South McDowell Boulevard.
The teen’s GPS, however, pegged the car at 45 mph in virtually the same location.
At issue was the distance from the stoplight at Freitas Road — site of the first GPS “ping” that showed Malone stopped — to the second ping 30 seconds later, when he was going 45 mph. Bonilla
said the distance between those two points was 1,980 feet.
2. Ask the students to discuss the article. In the end they will come to the question we want explored. Was young Shaun guilty of speeding?
3. Let them answer the question. Have them prepare a defense for Shaun, or an argument for the prosecution.
4. Show them the Commissioner’s conclusion, based on mathematics.
Bonilla said the distance between those two points was 1,980 feet, and the GPS data confirmed the prosecution’s contention that Malone had to have exceeded the speed limit.
“The mathematics confirm this,” she wrote.
Teacher Resource
An extension, eventually.
I have been attempting to contact the person mentioned in this local article, but so far he hasn’t responded to me. Similar mathematics could prove he wasn’t driving as excessively fast as the red
light camera claimed, but I would need to get a copy of his ticket to show that.
Picking this up over here: this is a) strong work and b) fast work. You moved quick on this one, John. Back when I first read the post, I wasn’t sure how to structure the learning, but I think you
found a good angle. Perhaps you could try to pull together some multimedia (I’m thinking, specifically, of screenshots from Google Map) to illustrate the mathematics a little better.
I’d like to get some pictures, and I certainly need to clean up the solution. I’m on it.
2 Responses
|
{"url":"http://thescamdog.wordpress.com/2011/02/08/speeding/","timestamp":"2014-04-21T10:40:19Z","content_type":null,"content_length":"65701","record_id":"<urn:uuid:21bade0d-691d-495e-a9ab-6f4ddf4d7cd3>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
|
U-Phil: Blogging the Likelihood Principle: New Summary
October 31, 2012
By Mayo
(This article was originally published at Error Statistics Philosophy » Statistics, and syndicated at StatsBlogs.)
U-Phil: I would like to open up this post, together with Gandenberger’s (Oct. 30, 2012), to reader U-Phils, from December 6- 19 (< 1000 words) for posting on this blog (please see # at bottom of
post). Where Gandenberger claims, “Birnbaum’s proof is valid and his premises are intuitively compelling,” I have shown that if Birnbaum’s premises are interpreted so as to be true, the argument is
invalid. If construed as formally valid, I argue, the premises contradict each other. Who is right? Gandenberger doesn’t wrestle with my critique of Birnbaum, but I invite you (and Greg!) to do so.
I’m pasting a new summary of my argument below.
The main premises may be found on pp. 11-14. While these points are fairly straightforward (and do not require technical statistics), they offer an intriguing logical, statistical and linguistic
puzzle. The following is an overview of my latest take on the Birnbaum argument. See also “Breaking Through the Breakthrough” posts: Dec. 6 and Dec 7, 2011.
Gandenberger also introduces something called the methodological likelihood principle. A related idea for a U-Phil is to ask: can one mount a sound, non-circular argument for that variant? And while
one is at it, do his methodological variants of sufficiency and conditionality yield plausible principles?
Graduate students and others invited!
New Summary of Mayo Critique of Birnbaum’s Argument for the SLP
Deborah Mayo
See also a (draft) of the full PAPER corresponding to this summary. Yet other links to the Strong Likelihood Principle SLP: Mayo 2010; Cox & Mayo 2011 (appendix).
Please alert me to corrections, not all the symbols transferred so well.
1. (SLP): For any two experiments E’ and E” with different probability models f’, f” but with the same unknown parameter θ, if the likelihood of outcomes x’* and x”* (from E’ and E”
respectively) are proportional to each other, then x’* and x”* should have the identical evidential import for any inference concerning parameter θ.
SLP pairs. When the antecedent holds, x’* and x”* are said to have “the same likelihood function”, i.e., f’(x’; θ) = cf”(x”, θ) for all θ, c a positive constant. In such cases, we abbreviate by
saying x’* and x”* are SLP pairs, and the asterisk * will be used to indicate this.
So we can abbreviate the SLP as follows:
SLP: for any two experiments, E’ and E”, if x’* and x”* are SLP pairs (from E’ and E” respectively) then
Infr [E’](x’*) equiv Infr [E”](x”*).
2.1 SLP Violation with Binomial, Negative Binomial
Example 1 . Binomial vs. Negative Binomial. Consider independent Bernoulli trials, with the probability of success at each trial an unknown constant θ, but produced by different procedures, E’, E”.
E’ is Binomial with a pre-assigned number n of Bernoulli trials, say 20, and R, the number of successes observed. In E” trials continue until a pre-assigned number r, say 6, of successes has
occurred, with the number N trials recorded. The sampling distribution of R is Binomial:
f(R; θ) = ([n]C[ r] ) θ^r(1– θ)^n-r
while the sampling distribution of N is Negative Binomial.
f(N; θ) = ([n-1]C[ r-1]) θ^r(1– θ)^n-r
If two outcomes from E’ and E” respectively, have the same number of successes and failures, r and n, then they have the “same” likelihood, in the sense that they are proportional to θ^r(1– θ)^n-r.
The two outcomes, x’* and x”* are SLP pairs. But the difference in the sampling distributions of the respective statistics, R and N, of E’ and E” respectively, entails a difference in p-values or
confidence level assessments. Accordingly, their evidential appraisals differ for sampling distribution inference. Thus x’* and x”* are SLP pairs leading to an SLP violation.
An SLP violation with Binomial (E’) and Negative Binomial (E”):
(E’, r=6) and (E”, n=20) have proportional likelihoods
but Infr[E’] (x’*= 6) is not equiv to Infr[ E”](x”*=20).
Loss of relevant information if the index is erased
In making inferences about θ on the basis of data x in sampling theory, relevant information would be lost if the report removed the index from E and reported:
Data x consisted of r successes in n Bernoulli trials, generated from either a Binomial experiment with n fixed at 20, or a negative binomial experiment with r fixed at 6—erasing the index
indicating the actual source of data.
2.2 SLP violation with fixed normal testing and optional stopping: E’, E”
Example 2. Fixed vs. sequential sampling. Suppose X’ and X” are sets of independent observations from N(μ,σ^2), with σ known, and p-values are to be calculated for the null hypothesis μ = 0. In E’
the sample size is fixed, whereas in E” the sampling rule is to continue sampling until 1.96σ/√n is attained or exceeded. Suppose E” is first able to stop with n = 169 trials. Then x” has a
proportional likelihood to a result that could have occurred from E’, where n was fixed in advance to be 169, and result x’ is 1.96σ/√n from 0. Although the corresponding p-values would be
different, the two results would be inferentially equivalent according to the SLP. This application of the SLP to the case of optional stopping is often call this the Stopping Rule Principle SRP
(Berger and Wolpert 1988).[i]
SLP violation with Fixed Normal Testing and Optional Stopping: E’, E”
(E’, 1.96σ/13) and (E”, n = 169) have proportional likelihoods
Infr[E’] (1.96σ /13) is not equiv to Infr[ E”]( n = 169).
(a) Sufficient Statistic: Let data x= (x[1],x[2],…,x[n]) be a realization of random variable X, following a distribution f, a statistic T(x) is a sufficient statistic if the following relation
f(x; θ) = f[T](t; θ) f[x|T](x| t)
where f[x|T] does not depend on the unknown parameter θ.
(b) Sufficiency Principle (general): If random sample X, in experiment E, has probability density f(x; θ), and the assumptions of the model are valid, and T is minimal sufficient for θ, then if t
(X’) = t(X”), then Infr[E’](x’) = Infr[E”](x”).
Since the sufficiency principle holds for different inference schools, any application must take into account the relevant method for inference under discussion (Cox and Mayo 2010).
(c) Sufficiency Principle applied in sampling theory: If a random variable X, in experiment E, arises from f(x;θ), and the assumptions of the model are valid, then all the information about θ
contained in the data may be obtained from considering its minimal sufficient statistic t and the sampling distribution f[T](t;θ) of experiment E.
Weak Conditionality Principle (WCP):If a mixture experiment is performed, with components E’, E” determined by a randomizer (independent of the parameter of interest), then once (E’,x’) is known,
inference should be based on E’ and its sampling distribution; not on the sampling distribution of the convex combination of E’ and E”.
4.1 Understanding the WCP
The WCP includes a prescription and a proscription for the proper evidential interpretation of x’, once it is known to have come from E’:
The evidential meaning of any outcome (E’, x’) of any experiment E having a mixture structure is the same as the evidential meaning of the corresponding outcome x’ of the corresponding component
experiment E’, ignoring otherwise the over-all structure of the original experiment.” (Birnbaum 1962, 279)
While the WCP seems obvious enough, it is actually rife with equivocal potential. To avoid this, we belabor here its three assertions.
• First, it applies once we know which component of the mixture has been observed, and what the outcome was (E^j, x^j). (Birnbaum considers mixtures with just two components).
• Second, there is the prescription about evidential equivalence. Once it is known E^j has generated the data, given that our inference is about a parameter of E^j, inferences are appropriately
drawn in terms of the sampling distribution in E^j –the experiment known to have been performed.
• Third, there is the proscription: In the case of informative inferences about parameter of E^j our inference should not be influenced by whether the decision to perform E^j was determined by a
coin flip or fixed all along. Misleading informative inferences result from averaging over the convex combination of E^j and an experiment known not to have given rise to the data. The latter
may be called the unconditional sampling distribution.
A second ambiguity. Casella and Berger (2002) write:
The [weak] Conditionality principle simply says that if one of two experiments is randomly chosen and the chosen experiment is done, yielding data x, the information about θ depends only on the
experiment performed….The fact that this experiment was performed, rather than some other, has not increased, decreased, or changed knowledge of θ. (emphasis added, 293)
Casella and Berger’s intended meaning is the correct claim:
(i) Given it is known that measurement x’ is observed as a result of using tool E’, then it does not matter (and it need not be reported) whether or not E’ was chosen by a random toss (that might
have resulted in using tool E”) or fixed all along.
Compare this to a false and unintended reading:
(ii) If some measurement x is observed, then it does not matter (and it need not be reported) if it came from a precise tool E’ or imprecise tool E”.
Claim (i) by contrast, may well be warranted, not on purely mathematical grounds, but as the most appropriate way to report the precision of the result attained, as when WCP applies.
The linguistic similarity of (i) and (ii) may explain the equivocation that vitiates the Birnbaum argument.
4.3 Is WCP an Equivalence? (you may wish to compare this to my earlier treatments, e.g., Mayo 2010
A central question is whether WCP is a proper equivalence, holding in both directions (Evans, et.al..1986, Durbin 1970). Weighing against viewing it as an equivalence is this: it makes no sense to
say one should use the unconditional rather than the conditional assessment (once it is known which component of a mixture was performed), and at the same time maintain the unconditional and
conditional assessments are evidentially equivalent. WCP prescribes conditioning on the experiment known to have produced the data, and not the other way around. It is only because these do not
yield equivalent appraisals that the WCP may serve to avoid counterintuitive assessments (e.g., that would otherwise be permitted from those famous weighing machines). It is their inequivalence, in
short, that gives Cox’s WCP its normative proscriptive force:
WCP proscription: Once (E’, x’) is known, Infr[E’](x’) should be computed using, not the unconditional sampling distribution over E’ and E”, but rather, the sampling distribution of E’.
Yet there is an equivalence within the WCP , and so long as it is consistently interpreted, raises no problems.[ii] This turns out to be the linchpin of disentangling the Birnbaum argument.
To hold WCP for a given context is to judge that the information that E’ was determined by a flip is a redundancy, equivalent to conjoining a tautology to the outcome (E’, x’):
• Knowing that (E’, x’) occurred,
• Infr[E’](x’) equiv [Infr[E’](x’) and (Either E’ was chosen by flipping, or E’ was fixed)]
where it given that the flipping conjunct in no way alters the construal of (E’, x’). [iii]
Viewing the WCP as endorsing a genuine “two-way” equivalence requires viewing any known experimental result as equivalent, evidentially, to its being a component of a corresponding mixture, even
though it is known that in fact E was not chosen by a mixture. While this may seem unsettling, no untoward evidential interpretations result so long as the proscriptive part of the WCP remains, and
is not contradicted (say by allowing the imaginary mixture to influence the interpretation of the known “component”).
5. Birnbaum’s Argument
SLP: for any two experiments, E’ and E”, if x’* and x”* are SLP pairs (from E’ and E” respectively) then Infr [E’](x’*) equiv Infr [E”](x”*).
Begin with any case where the antecedent of the SLP holds. The task is to show the two ought to be deemed evidentially equivalent.
Premise 1:
Suppose we have observed (E’, x’*) with an SLP pair (E”, x”*). Then view (E’, x’*) as having resulted from getting heads on the toss of a fair coin, where tails would have meant performing E”
(any other irrelevant randomizer would do). This is sometimes called the “enlarged experiment”. Now construct the Birnbaum test statistic T-B defined in terms of the enlarged experiment:
T-B(E^j, x^j*) = (E’, x’*), if x’= x’* or j = 2 and x” = x”*.
Else, report the outcome (E^j, x^j ).
In words: in the case of a member of an SLP pair, statistic T-B has the effect of erasing the index j. Inference based on T-B is to be computed averaging over the performed and unperformed
experiments E’ and E”. This is the unconditional formulation of the enlarged experiment. This gives premise one:
(1) For any (E’, x’*), the result of construing its evidential import in terms of the unconditional formation is that:
Infr[E-B](x’*) equiv Infr[E-B](x”*)
The likelihood functions of (E’, x’*) and (E”, x”*) are proportional for all θ, being .5f(x’*;θ) and .5f(x”*; θ).
However E’ and E” are different models of the experiment producing the two likelihoods, and the enlarged model associated with T-B is yet a third model of the experiment. The second premise now
concerns the WCP:
(2) Once it is known that E’ produced the outcome x’*, compute the inference just as if it were known all along that E^’ was going to be performed, i.e., one should use the conditional formulation,
ignoring any mixture structure:
Infr[E-B](x’*) equiv Infr[E’](x’*)
More generally, once (x^j*) is known to have come from E^j, j = 1 or 2, premise (2) is
Infr[E-B](x^j*) equiv Infr[E’](x^j*)
From premises (1) and (2) it is concluded, for any arbitrary SLP pair x’*, x”*,
Infr[E’](x’*) equiv Infr[E”](x”*)
The SLP is said to follow. This is an unsound argument.
A sound argument must be both deductively valid and have all true premises.
Consider the truth of the two premises of Birnbaum’s argument. Premise one: (Infr[E-B](x’*) equiv Infr[E-B](x”*) is true provided that
Infr[E-B](x’*) is the inference from (E’, x’) averaging over the unconditional sampling distribution of statistic T-B. In effect it reports just the likelihood of x*, which enters inference in
terms of the convex combination of E’ and E”.
For premise two to be true
(i.e., Infr[E-B](x^j*) equiv Infr[E’](x^j*) for j= 1, 2)
Infr[E-B](x^j*) must refer the inference from (E^j, x^j*) modeled in terms of the sampling distribution of E^j alone. The experiment E-B on which inference is to be based has different meanings in
each premise. The argument is invalid.
5.2 Second formulation: allowing true “if then” premises
We can formulate the argument so that both premises are true “if then” statements[iv] incorporating the stipulated sampling distributions:
As before, suppose an arbitrary member of an SLP pair (E’, E”) is observed, e.g.,
(E’, x’*) is observed. The question is to its evidential import.
(1) If Infr[E-B](x’*) is computed unconditionally, averaging over the sampling distributions of T-B, then
Infr[E-B](x’*) equiv Infr[E-B](x”*)
(2) If Infr[E-B](E^j,x^j*) is computed conditionally, using the sampling distribution of E^j:
Infr[E-B](x^j*) equiv Infr[E’](x^j*) for i= 1, 2.
Construed as “if then” claims, the premises can both be true, but then we cannot validly infer the SLP:
Infr[E’](x^’*) equiv Infr[E”](x^”*)
We would need contradictory antecedents to hold.
The formal invalidity is proved by any SLP violation, since in that case, the premises are true and the conclusion is false. SLP violation pairs are readily available (e.g., Examples 1 and 2), and no
contradiction results. In fact, we have demonstrated something stronger: whenever we deal with an SLP violation pair, the two “if then” premises, when true yield a false conclusion.
REFERENCES: See Paper.
[i] Applying the stopping rule principle requires stipulating that the stopping rule was uninformative for the inference, as in the above example.
[ii] Birnbaum himself is conflicted here. In his later, 1969 paper, Note 11, Birnbaum asserts, “The formulation of the conditionality concept as one of equivalence”, as in [WCP] was proposed by him
in (1962) as the natural explication of the concept, not withstanding the one-sided form to which applications of the concept had been restricted (substitution of simpler for less simple models of
evidence). This proposal seems to have found general acceptance among those interested in the concept.
[iii] For that matter, as Birnbaum suggests (1969, 119), a “trivial but harmless” augmentation to any experiment might be to toss a fair coin and report heads or tails (where this was irrelevant to
the original model). Given (E’, x’),
Infr[E’](x’) equiv [Infr[E’](x’) and either a coin was tossed or it was not].
He intends the move in applying the WCP is to be just as innocuous as the report of an irrelevant coin toss.
[iv] I am deliberately avoiding the term “conditional” statement, since it is used with a very different sense throughout.
#: This will give graduate students at my 28 Nov., 2012 presentation of this paper, as part of the (PH500) seminar, London School of Economics, a chance to submit something. Inquiries: error@vt.edu
For some older examples of U-Phils, see an earlier post, and search this blog.
Filed under:
Likelihood Principle
Please comment on the article here: Error Statistics Philosophy » Statistics
|
{"url":"http://www.statsblogs.com/2012/10/31/u-phil-blogging-the-likelihood-principle-new-summary/","timestamp":"2014-04-21T12:19:56Z","content_type":null,"content_length":"63390","record_id":"<urn:uuid:af9f3bec-22a4-4a9d-9060-6fefa49d64b4>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A rocket-driven sled running on a straight level track has been used to study the physiological...
Get your Question Solved Now!!
A rocket-driven sled running on a straight level track has been used to study the physiological...
Introduction: Physics
More Details: A rocket-driven sled running on a straight level track has been used to study the physiological effects of large accelerations on astronauts. One such sled can
attain a speed of 418 m/s in 1.5 s starting from rest. What is the acceleration of the sled, assuming it is constant? Answer in units of m/s^2.
Please log in or register to answer this question.
0 Answers
Related questions
|
{"url":"http://www.thephysics.org/105192/rocket-driven-running-straight-track-study-physiological","timestamp":"2014-04-21T07:20:13Z","content_type":null,"content_length":"105593","record_id":"<urn:uuid:f3120b90-ab16-48d7-87de-76d6ee6b32c3>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Basic Papers on Cybernetics and Systems Science
The following is a list of references used for the course SS-501, INTRODUCTION TO SYSTEMS SCIENCE, at the Systems Science Department of SUNY Binghamton in 1990.
All of the classic, "required" papers have been reprinted in the book: Klir G.J. (1992) Facets of Systems Science, (Plenum, New York). You can order photocopies of many papers via the CARL UnCover
service, which provides a search through a database containing millions of papers in thousands of academic journals covering all disciplines. Other, specific bibliographic references of books and a
selected number of papers can be found in the library database of the Department of Medical Cybernetics and AI at the University of Vienna. A number of more recent books and papers can be found in
our bibliography on complex, evolving systems, and in the bibliography of the Principia Cybernetica Project.
Key: * Required
R Recommended
Abraham, Ralph: (1987) "Dynamics and Self-Organization", in:
/Self-Organizing Systems/, ed. Eugene Yates, pp. 599-616, Plenum
Review of the scope and extent of modern dynamics theory,
especially as related to problems in self-organization. Useful
after an elementary understanding of dynamical systems.
Abraham, Ralph, and Shaw, Christophe: (1987) "Dynamics: a
Visual Introduction", in: /Self-Organizing Systems: Emergence/,
ed. Eugene Yates, pp. 543-598, Plenum
Ackoff, Russel: (1979) "Future of Operational Research is
Past", /General Systems Yearbook/, v. 24, pp. 241-252
R Arbib, Michael A: (1966) "Automata Theory and Control Theory: A
Rapproachement", /Automatica/, v. 3, pp. 161-189
A unification of automata theory and control theory in a broader
theory of dynamic systems.
Arbib, Michael A, and Rhodes, JL et. al.: (1968) "Complexity and
Graph Complexity of Finite State Machines and Finite Semi-Groups",
in: /Algorithmic Theory of Machines, Languages and Semi-Groups/,
ed. MA Arbib, pp. 127-145, Academic Press, New York
A rigorous formulation of descriptive complexity of systems in
terms of finite state machines.
* Ashby, Ross: (1958) "General Systems Theory as a New
Discipline", /General Systems Yearbook/, v. 3:1
* (1958) "Requisite Variety and Implications for Control of Complex
Systems", /Cybernetica/, v. 1, pp. 83-99
* (1964) "Introductory Remarks at Panel Discussion", in: /Views
in General Systems Theory/, ed. M. Mesarovic, pp. 165-169, Wiley,
New York
(1965) "Measuring the Internal Informational Exchange in a
System", /Cybernetica/, v. 1, pp. 5-22
A readable paper that explains how the Shannon entropy can be
used in analyzing systems.
(1968) "Some Consequences of Bremermann's Limit for Information
Processing Systems", in: /Cybernetic Problems in Bionics/, ed.
H Oestreicher et. al, pp. 69-76, Gordon and Breach, New York
(1970) "Information Flows Within Coordinated Systems", /Progress
in Cybernetics/, v. 1, ed. J. Rose, pp. 57-64, Gordon and Breach,
(1972) "Systems and Their Informational Measures", in: /Trends in
General Systems Theory/, ed. GJ Klir, pp. 78-97, Wiley, New York,
* (1973) "Some Peculiarities of Complex Systems", /Cybernetic
Medicine/, v. 9:2, pp. 1-6
Atlan, Henri: (1981) "Hierarchical Self-Organization in Living
Systems", in: /Autopoises/, ed. Milan Zeleny, North Holland, New
Auger, Peter: (1989) "Microcanonical Ensembles with
Non-equiprobable States", /Int. J. Gen. Sys./, v. 20:3, pp.
Aulin, AY: (1975) "Cybernetics as Foundational Science of
Action", /Cybernetic/, v. 3
* (1979) "Law of Requisite Hierarchy", /Kybernetes/, v. 8, pp.
Bahm, AJ: (1981) "Five Types of Systems Philosophies", /Int. J.
Gen. Sys./, v. 6
(1983) "Five Systems Concepts of Society", /Behavoral Science/,
v. 28
(1984) "Holons: Three Conceptions", /Systems Research/, v. 1:2,
pp. 145-150
Comparison of three system philosophies.
(1986) "Nature of Existing Systems", /Systems Research/, v. 3:3,
Pergamon, Oxford
Philosophical analysis of the necessary and sufficient conditions
for systemic processes.
(1988) "Comparing Civilizations as Systems", /Systems Research/,
v. 5:1
Macroscopic structural, semantic analysis of cultural systems.
Bailey, Kenneth D: (1984) "Equilibrium, Entropy and
Homeostasis", /Systems Research/, v. 1:1, pp. 25-43
Excellent survey of these concepts in multiple disciplines.
Balakrishnan, AV: "On the State Space Theory of Linear
Systems", /J. Mathematical Analysis and Appl./, v. 14:3, ed.
1966, pp. 371-391
* Barto, AG: (1978) "Discrete and Continuous Model", /Int. J.
Gen. Sys./, v. 4:3, pp. 163-177
Bennett, Charles H: (1986) "On the Nature and Origin of Complexity
in Discrete, Homogeneous, Locally-Interacting Systems",
/Foundations of Physics/, v. 16, pp. 585-592
On Bennett's measure of algorithmic depth.
Black, M: (1937) "Vagueness: An Exercise in Logical Analysis",
/Philosophy of Science/, v. 4, pp. 427-455
Probably the best discussion of the meaning of vagueness and its
importance in science and philosophy.
* Boulding, Ken: (1956) "General Systems Theory - The Skeleton of
Science", /General Systems Yearbook/, v. 1, pp. 11-17
(1968) "Specialist with a Universal Mind", /Management Science/,
v. 14:12, pp. B647-653
* (1974) "Economics and General Systems", /Int. J. Gen. Sys./, v.
1:1, pp. 67-73
Bovet, DP: (1988) "An Introduction to Theory of Computational
Complexity", in: /Measures of Complexity/, ed. L Peliti, A
Vulpiani, pp. 102-111, Springer-Verlag, New York
Braitenberg, Valentino: "Vehicles: Expirement in Synthetic
Psychology", /IEEE Trans. of Syst., Man, and Cyb./
On the complex, seemingly lifelike behavior of simply designed
cybernetic robots.
* Bremermann, HJ: (1962) "Optimization Through Evolution and
Recombination", in: /Self-Organizing Systems/, ed. MC Yovits et.
al., pp. 93-106, Spartan, Washington DC
(1967) "Quantifiable Aspects of Goal-Seeking Self-Org. Systems",
in: /Progress in Theoretical Biology/, v. M Snell, pp. 59-77,
Academic Press, New York
Brillouin, Leon: (1953) "Negentropy Principle of Information",
/J. of Applied Physics/, v. 24:9, pp. 1152-1163
First Brillouin essay, on the relation between thermodynamic and
informational entropies.
* Bunge, Mario: (1978) "General Sys. Theory Challenge to
Classical Philosophy of Science", /Int. J. Gen. Sys./, v. 4:1
(1981) "Systems all the Way", /Nature and Systems/, v. 3:1, pp.
Carnap, Rudolph, and Bar-Hillel, Y: (1952) "Semantic
Information", /British J. for Philosopy of Science/, v. 4, pp.
Cavallo, RE, and Pichler, F: (1979) "General Systems
Methodology: Design for Intuition Ampl.", in: /Improving the
Human Condition/, Springer-Verlag, New York
Caws, P: (1974) "Coherence, System, and Structure", /Idealistic
Studies/, v. 4, pp. 2-17
Chaitin, Gregory J: (1975) "Randomness and Mathematical Proof",
/Scientific American/, v. 232:5
(1977) "Algorithmic Information Theory", /IBM J. Res. Develop./,
v. 21:4, pp. 350-359
Introduction of Chaitin's version of Kolmogorov complexity.
(1982) "Godel's Theorem and Information", /Int. J. Theoretical
Physics/, v. 22
* Checkland, Peter: (1976) "Science and Systems Paradigm", /Int.
J. Gen. Sys./, v. 3:2, pp. 127-134
Chedzey, Clifford S, and Holmes, Donald S: (1976) "System
Entropies of Markov Chains", /General Systems Yearbook/, v. XXI,
pp. 73-85
(1977) "System Entropy and the Monotonic Approach to Equilibrium",
/General Systems Yearbook/, v. 22, pp. 139-142
(1977) "System Entropy of a Discrete Time Probability Function",
/General Systems Yearbook/, v. 22, pp. 143-146
(1977) "First Discussion of Markov Chain System Entropy Applied
to Physics", /General Systems Yearbook/, v. 22, pp. 147-167
Cherniak, Christophr: (1988) "Undebuggability and Cognitive
Science", /Communications of the ACM/, v. 31:4
Like Bremmerman's limit, some simple mathematics on the limits of
computational methods.
Christensen, Ronald: (1985) "Entropy Minimax Multvariate
Statistical Modeling: I", /Int. J. Gen. Sys./, v. 11
R Conant, Roger C: (1969) "Information Transfer Required in
Regulatory Processes", /IEEE Trans. on Sys. Sci. and Cyb./, v. 5:4,
pp. 334-338
A discussion of the use of the Shannon entropy in the study of
R (1974) "Information Flows in Hierarchical Systems", /Int. J.
Gen. Sys./, v. 1, pp. 9-18
Using classical (Shannon) information theory, it is shown that
hierarchical structures are highly efficient in information
* (1976) "Laws of Information Which Govern Systems", /IEEE Trans.
Sys., Man & Cyb./, v. 6:4, pp. 240-255
* Conant, Roger C, and Ashby, Ross: (1970) "Every Good Regulator
of Sys. Must Be Model of that Sys.", /Int. J. Systems Science/,
v. 1:2, pp. 89-97
Cornacchio, Joseph V: (1977) "Systems Complexity: A
Bibliography", /Int. J. Gen. Sys./, v. 3, pp. 267-271
De Raadt, JDR: (1987) "Ashby's Law of Requisite Variety: An
Empirical Study", /Cybernetics and Systems/, v. 18:6, pp.
R Eigen, M, and Schuster, P: (1977) "Hypercycle: A Principle of
Natural Self-Org.", /Naturwissenschaften/, v. 64,65
Classical work on molecular feedback mechanisms.
Engell, S: (1984) "Variety, Information, and Feedback",
/Kybernetes/, v. 13:2, pp. 73-77
Erlandson, RF: (1980) "Participant-Oberver in Systems
Methodologies", /IEEE Trans. on Sys., Man, and Cyb./, v. SMC-10:1,
pp. 16-19
Ferdinand, AE: (1974) "Theory of Systems Complexity", /Int. J.
Gen. Sys./, v. 1:1, pp. 19-33
A paper that connects defect probability with systems complexity
through the maximum entropy principle. Also investigates the
relationship between modularity and complexity.
Ford, Joseph: (1986) "Chaos: Solving the Unsolvable, Predicting
the Unpredictable", in: /Chaotic Dynamics and Fractals/,
Academic Press
Fascinating account of the relation between chaotic dynamics, the
limits of observability, constructive mathematics, exsitence and
uniqueness, and the "ideology" of the scientific community.
Gaines, Brian R: "An Overview of Knowledge Acquisition and Transfer",
/IEEE Proc. on Man and Machine/, v. 26:4
GSPS type methods as the general form of all science. Relation
of Klir's GSPS methodology to other inductive methodologies.
R (1972) "Axioms for Adaptive Behavior", /Int. J. of Man-Machine
Studies/, v. 4, pp. 169-199
Perhaps the most comprehenseive foundational work on adaptive
* (1976) "On the Complexity of Causal Models", /IEEE Trans. on
Sys., Man, & Cyb./, v. 6, pp. 56-59
R (1977) "System Identification, Approximation and Complexity",
/Int. J. Gen. Sys./, v. 3:145, pp. 145-174
A thorough discussion on the relationship among complexity,
credibiilty, and uncertainty associated with systems models.
* (1978) "Progress in General Systems Research", in: /Applied
General Systems Research/, ed. GJ Klir, pp. 3-28, Plenum, New
* (1979) "General Systems Research: Quo Vadis?", /General Systems
Yearbook/, v. 24, pp. 1-9
* (1983) "Precise Past - Fuzzy Future", /Int. J. Man-Machine
Studies/, v. 19, pp. 117-134
* (1984) "Methodology in the Large: Modeling All There Is",
/Systems Research/, v. 1:2, pp. 91-103
R Gallopin, GC: "Abstract Concept of Environment", /Int. J. Gen.
Sys./, v. 7:2, pp. 139-149
A rare discussion of the concept of environment by a well known
Gardner, MR: (1968) "Critical Degenerotes in Large Linear
Systems", /BCL Report/, v. 5:8, EE Dept., U. Ill, Urbana
A report on an experimental investigation whose purpose is to
determine the relationship between stability and connectance of
linear systems.
* Gardner, MR, and Ashby, Ross: (1970) "Connectance of Large
Dynamic (Cybernetic) Systems", /Nature/, v. 228:5273, pp. 784
Gelfland, AE, and Walker, CC: (1977) "Distribution of Cycle
Lengths in Class of Abstract Sys.", /Int. J. Gen. Sys./, v. 4:1,
pp. 39-45
* Goguen, JA, and Varela, FJ: (1979) "Systems and Distinctions:
Duality and Complementarity", /Int. J. Gen. Sys./, v. 5:1, pp.
Gorelick, George: (1983) "Bogdanov's Tektology: Naure,
Development and Influences", /Studies in Soviet Thought/, v. 26,
pp. 37-57
Greenspan, D: (1980) "Discrete Modeling in Microcosm and
Macrocosm", /Int. J. Gen. Sys./, v. 6:1, pp. 25-45
* Hall, AS, and Fagan, RE: (1956) "Definition of System",
/General Systems Yearbook/, v. 1, pp. 18-28
Harel, David: (1988) "On Visual Formalisms", /Communications of
the ACM/, v. 31:5
Reasonable, critical extensions of "Venn Diagrams", general
consideration of the representation of multidimensional systems.
Henkind, Steven J, and Harrison, Malcolm C: (1988) "Analysis of
Four Uncertainty Calculi", /IEEE Trans. Man Sys. Cyb./, v. 18:5,
pp. 700-714
On Bayesian, Dempster-Shafer, Fuzzy Set, and MYCIN methods of
uncertainty management.
Herbenick, RM: (1970) "Peirce on Systems Theory", /Transaction
of the S. Peirce Soc./, v. 6:2, pp. 84-98
R Huber, GP: (1984) "Nature and Design of Post-Industrial
Organizations", /Management Science/, v. 30:8, pp. 928-951
Excellent paper discussing the changing nature of organizations
in the information society.
* Islam, S: (1974) "Toward Integrating Two Systems Theories By
Mesarovic and Wymore", /Int. J. Gen. Sys./, v. 1:1, pp. 35-40
Jaynes, ET: (1957) "Information Theory and Statistical
Mechanics", /Physical Review/, v. 106,108, pp. 620-630
A classic paper. Information theory as a sufficient and elegant
basis for thermodynamics. But does it follow that thermodynamics
is necessarily dependent on information theory, or that entropy
is "just" incomplete knowledge? Compares principle of maximum
entropy with assumptions of ergodicity, metric transitivity,
and/or uniform a priori distributions. Prediction as microscopic
to macroscopic explanation; interpretation as macro to micro.
Johnson, Horton A.: (1970) "Information Theory in Biology After
18 Years", /Science/, v. 6/26/70
Scathing critique of the role of "classical" information theory
in biological science. Most of these criticisms are still
unanswered, if being addressed in a roundabout way (e.g.
algorithmic complexity theory).
Joslyn, Cliff: (1988) "Review: Works of Valentin Turchin",
/Systems Research/, v. 5:1
Short introduction to Turchin's cybernetic theories of universal
* Kampis, G: (1989) "Two Approaches for Defining 'Systems'",
/Int. J. Gen. Sys./, v. 15, pp. 75-80
Kaufmann, Stuart A: (1969) "Metabolic StabilityandEpigenesis in
Randomly Constructed Genetic Nets", /Journal of Theoretical Biology/,
v. 22, pp. 437-467
(1984) "Emergent Properties in Random Complex Automata",
/Physica/, v. 10D, pp. 145
Kellerman, E: (1968) "Framework for Logical Cont.", /IEEE
Transactions on Computers/, v. E-17:9, pp. 881-884
Klapp, OE: (1975) "Opening and Closing in Open Systems",
/Behav. Sci./, v. 20, pp. 251-257
Philosophy on the dynamics of social processes; entropic
R Klir, George: (1970) "On the Relation Between Cybernetics and
Gen. Sys. Theory", in: /Progress in Cybernetics/, v. 1, ed. J
Rose, pp. 155-165, Gordon and Breach, London
A formal discussion on the relation between the fields of
"cybernetics" and "systems science", concluding that the former
is a subfield of the latter.
(1972) "Study of Organizations of Self-Organizing Systems", in:
/Proc. 6th Int. Congress on Cyb./, pp. 162-186, Wammer, Belgium
(1976) "Ident. of Generative Structures in Empirical Data", /Int.
J. Gen. Sys./, v. 3:2, pp. 89-104
(1978) "General Systems Research Movement", in: /Sys. Models for
Decision Modeling/, ed. N Sharif et. al., pp. 25-70, Asian Inst.
Tech., Bangkok
* (1985) "Complexity: Some General Observations", /Systems
Research/, v. 2:2, pp. 131-140
* (1985) "Emergence of 2-D Science in the Information Society",
/Systems Resarch/, v. 2:1, pp. 33-41
* (1988) "Systems Profile: the Emergence of Systems Science",
/Systems Research/, v. 5:2, pp. 145-156
Klir, George, and Way, Eileen: (1985) "Reconstructability
Analysis: Aims, Results, Problems", /Systems Research/, v. 2:2,
pp. 141-163
Introduction to the methods of reconstruction as well as their
relevance to general philosophical problems.
Kolmogorov, AN: (1965) "Three Approaches to the Quantitative Definition
of Information", /Problems of Information Transmission/, v. 1:1,
pp. 1-7
First introduction of algorithmic meatrics of complexity and
Krippendorf, Klaus: (1984) "Epistemological Foundation for
Communication", /J. of Communication/, v. 84:Su
On the necessary cybernetics of communication.
Krohn, KB, and Rhodes, JL: (1963) "Algebraic Theory of
Machines", in: /Mathematical Theory of Automata/, ed. J. Fox, pp.
341-384, Ploytechnic Press, Brooklyn NY
(1968) "Complexity of Finite Semigroups", /Annals of
Mathematics/, v. 88, pp. 128-160
Layzer, David: (1988) "Growth of Order in the Universe", in:
/Entropy, Information, and Evolution/, ed. Bruce Weber et. al.,
pp. 23-40, MIT Press, Cambridge
On the thermodynamics of cosmological evolution, and the
necessity of "self-organization" in an expanding universe.
* Lendaris, GG: (1964) "On the Definition of Self-Organizing
Systems", /IEEE Proceedings/, v. 52, pp. 324-325
R Lettvin, JY, and Maturana, HR: (1959) "What the Frog's Eye Tell
the Frog's Brain", /Proceedings of the IRE/, v. 47, pp.
Classic early paper in cybernetics, cited as the basis of
"constructive" psychological theory.
Levin, Steve: (1986) "Icosahedron as 3D Finite Element in
Biomechanical Supp.", in: /Proc. 30th SGSR/, v. G, pp. 14-23
R (1989) "Space Truss as Model for Cervical Spine Mechanics",
NOTE: Manuscript
Startling theory of the necessary foundations of biomechanics in
2-d triangular (heaogonal) plane packing and 3-d dodecahedral
space packing.
Lloyd, Seth, and Pagels, Heinz: (1988) "Complexity as
Thermodynamic Depth", /Annals of Physics/, v. 188, pp. 1
Perhaps a classic, on their new measure as the difference betwen
fine and coarse entropy. Comparison with other measures of
depth, complexity, and information.
* Lofgren, Lars: (1977) "Complexity of Descriptions of Sys: A
Foundational Study", /Int. J. Gen. Sys./, v. 3:4, pp. 197-214
Madden, RF, and Ashby, Ross: (1972) "On Identification of
Many-Dimensional Relations", /Int. J. of Systems Science/, v. 3,
pp. 343-356
An early paper contributing to the area that is known now as
reconstructibility analysis.
Makridakis, S, and Faucheux, C: (1973) "Stability Properties of
General Systems", /General Systems Yearbook/, v. 18, pp. 3-12
Makridakis, S, and Weinstraub, ER: (1971) "On the Synthesis of
General Systems", /General Systems Yearbook/, v. 16, pp. 43-54
Margalef, D Ramon: (1958) "Information Theory in Ecology",
/General Systems Yearbook/, v. 3, pp. 36-71
* Marschal, JH: (1975) "Concept of a System", /Philosophy of
Science/, v. 42:4, pp. 448-467
* May, RM: (1972) "Will a Large Complex System be Stable?",
/Nature/, v. 238, pp. 413-414
McCulloch, Warren, and Pitts, WH: "Logical Calculus of Ideas
Immanent in Nervous Activity", /Bull. Math. Biophysics/, v. 5
Classic early work on the neural nets as a logical modeling
McGill, WJ: (1954) "Multivariate Information Transmission",
/Psychometrica/, v. 19, pp. 97-116
Mesarovic, MD: (1968) "Auxiliary Functions and Constructive
Specification of Gen. Sys.", /Mathematical Systems Theory/,
v. 2:3
R Miller, James G: (1986) "Can Systems Theory Generate Testable
Hypotheses?", /Systems Research/, v. 3:2, pp. 73-84
On systems theoretic research programs attempting to unify
scientific theory through hypothesized isomorphies among levels
of analysis.
Negoita, CV: (1989) "Review: Fuzzy Sets, Uncertainty, and
Information", /Kybernetes/, v. 18:1, pp. 73-74
Good analysis of the significance of fuzzy set theory.
Pattee, Howard: "Evolution of Self-Simplifying Systems", in:
/Relevance of GST/, ed. Ervin Laszlo, George Braziller, New York,
"Instabilities and Information in Biological Self-Organization",
in: /Self Organizing Systems/, ed. F. Eugene Yates, Plenum, New York
(1973) "Physical Problems of Origin of NaturalControl", in:
/Biogenesis, Evolution, Homeostasis/, ed. A. Locker,
Springer-Verlag, New York
(1978) "Complementarity Principle in Biological and Social Structures",
/J. of Social and Biological Structures/, v. 1
(1985) "Universal Principle of Measurement and Language Function in
Evolving Systems", in: /Complexity, Language, and Life/, ed. John
Casti, pp. 268-281, Springer-Verlag, Berlin
(1988) "Simulations, Realizations, and Theories of Life", in:
/Artificial Life/, ed. C Langton, pp. 63-77, Addison-Wesley,
Redwood City CA
Patten, BC: (1978) "Systems Approach to the Concept of
Environment", /Ohio J. of Science/, v. 78:4, pp. 206-222
Pearl, J: (1978) "On Connection Between Complexity and Credibility of
Inferred Models", /Int. J. Gen. Sys./, v. 4:4, pp. 255-264
Theoretical study that shows that credibility of determinstic
models inferred from data tends to increase with data size and
decrease with the complexity of the model.
Pedrycz, W: (1981) "On Approach to the Analysis of Fuzzy
Systmes", /Int. J. of Control/, v. 34, pp. 403-421
Peterson, JL: (1977) "Petri Nets", /ACM Computing Surveys/, v.
9:3, pp. 223-252
R Pippenger, N: (1978) "Complexity Theory", /Scientific
American/, v. 238:6, pp. 114-124
Excellent discussion of one facet of complexity.
* Porter, B: (1976) "Requisite Variety in the Systems and Control
Sciences", /Int. J. Gen. Sys./, v. 2:4, pp. 225-229
Prigogine, Ilya, and Nicolis, Gregoire: (1972) "Thermodynamics
of Evolution", /Physics Today/, v. 25, pp. 23-28
Briefer introduction to far-from-equilibrium thermodynamics,
hypercycles, and evolutionary theory. Criticzed as confused.
Rapoport, Anatol: (1962) "Mathemantical Aspects of General
Systems Theory", /General Systems Yearbook/, v. 11, pp. 3-11
Rivier, N: (1986) "Structure of Random Cellular Networks and
Their Evolution", /Physica/, v. 23D, pp. 129-137
Brilliant introduction to the theory of the equilibrium
distribution of macroscopic entities (cells) in multiple kinds of
substances: metals, soap suds, and animal and vegetable tissues;
according to a non-thermodynamic maximum entropy law. Subsumes
other laws from these specific disciplines.
(1988) "Statistical Geometry of Tissues", in: /Thermodynamics and
Pattern Formation in Biology/, pp. 415-445, Walter de Gruyter,
New York
* Rosen, Robert: (1977) "Complexity as a Systems Property", /Int.
J. Gen. Sys./, v. 3:4, pp. 227-232
* (1978) "Biology and Systems Resarch", in: /Applied General
Systems Research/, ed. GJ Klir, pp. 489-510, Plenum, New York
* (1979) "Anticipatory Systems", /General Systems Yearbook/, v.
24, pp. 11-23
* (1979) "Old Trends and New Trends in General Systems Resarch",
/Int. J. Gen. Sys./, v. 5:3, pp. 173-184
* (1981) "Challenge of Systems Theory", /General Systems
* (1985) "Physics of Complexity", /Systems Research/, v. 2:2, pp.
* (1986) "Some Comments on Systems and Systems Theory", /Int. J.
Gen. Sys./, v. 13:1, pp. 1-3
Rosenbluth, Arturo, and Wiener, Norbert: (1943) "Behavior,
Purpose, and Teleology", /Philosophy of Science/, v. 10, pp.
Original introduction of teleonomy, teleology, goal-seeking, and
intentionality in cybernetic terms.
Rothstein, J: (1979) "Generalized Entropy, Boundary Conditions,
and Biology", in: /Maximum Entropy Formalism/, ed. RD Levine, pp.
423-468, Cambridge U., Cambridge
On boundary conditions in biology, organisms as "well-informed
heat engines", definition of mutual information, order as an
entropy measure.
Sadovsky, V: (1979) "Methodology of Science and Systems
Approach", /Social Science/, v. 10, Moscow
Saperstein, Alvin M.: (1984) "Chaos: A Model for the Outrbreak
of War", /Nature/, v. 309
Schedrovitzk, GP: (1962) "Methdological Problems of Systems
Research", /General Systems Yearbook/, v. 11, pp. 27-53
Schneider, Eric D: (1988) "Thermodynamics, Ecological Succession and
Natural Selection: A Common Thread", in: /Entropy, Information, and
Evolution/, pp. 107-138, Bruce Weber et. al., Cambridge
On the thermodynamics of maturing ecosystems, relation to
Principle of Maximum Entropy Production.
Shaw, Robert: (1981) "Strange Attractors, Chaotic Behavior and
Information Flow", /Zeitschrift fur Naturforschung/, v. 36a
(1984) /Dripping Faucet as a Model Chaotic System/, Aeriel Press,
Santa Cruz
Best explanation of the nature of chaotic processes, especially
with respect to information theory.
Simon, Herbert: (1965) "Architecture of Complexity", /General
Systems Yearbook/, v. 10, pp. 63-76
* (1988) "Predication and Prescription in Systems Modeling",
NOTE: IIASA manuscript
Skarda, CA, and Freeman, WJ: (1987) "How Brains Make Chaos Into
Order", /Behavioral and Brain Sciences/, v. 10
Interpretation of neurological experiments revealing the
cybernetic basis of perception, the reliance on chaotic dynamics,
and the non-locality of mental representations. Resting as
chaos, perception as stable attractors, seizures as cyclic
Skilling, John: (1989) "Classic Maximum Entropy", in: /Maximum
Entropy and Bayesian Methods/, ed. J. Skilling, pp. 45-52, Kluwer,
New York
Mathematical introduction to the traditonal MaxEnt method as
applied to data analysis.
Smith, C Ray: (1990) "From Rationality and Consistency to
Bayesian Probability", in: /Maximum Entropy and Bayesian Methods/,
ed. P. Fougere, Kluwer, New York
Mathematical introduction to the relation between inductive and
deductive reasoning, Cox's axioms, Bayes' theorem, and Jaynes'
MaxEnt program.
Smith, RL: (1989) "Systemic, not just Systematic", /Systems
Research/, v. 6:1, pp. 27-37
* Svoboda, A: "Model of the Instinct of Self-Preservation", in:
/MISP: A Simulation of a Model.../, ed. KA Wilson, NOTE: From
French,Inf.Proc.Mach. 7
Swenson, Rod: (1989) "Emergent Attractors and Law of Maximum Entropy
Production", /Systems Research/, v. 6:3, pp. 187-198
Good references for general evolution. Discussion of minimax
entropy production and emergence, biological thermodynamics.
Szilard, L: (1964) "On Decrease of Enteopy in Thermodynamic Systems by
Intervention of Intelligent Beings", /Behavioral Science/, v. 9
Classic first paper on the necessary relation between
informational and thermodynamic entropies.
Takahara, Y, and Nakao, B: (1981) "Characterization of
Interactions", /Int. J. Gen. Sys./, v. 7:2, pp. 109-122
Takahara, Y, and Takai, T: (1985) "Category Theoretical
Framework of General Systems", /Int. J. Gen. Sys./, v. 11:1, pp.
Thom, Rene: (1970) "Topological Models in Biology", in:
/Towards a Theoretical Biology/, v. 3, ed. CH Waddington, Aldine,
On self-simplifying systems.
Tribus, Myron: (1961) "Information Theory as the Basis for
Thermostatistics and Thermodynamics", /J. Applied Mechanics/, v. 28,
pp. 108
Full description of the derivation of basic thermodynamics from
Jayne's maximum entropy formalism.
R Turchin, Valentin: (1982) "Institutionalization of Values",
/Worldview/, v. 11/82
Review of Turchin's social theory and defense of reviews of
_Phenomenon of Science_ and _Inertia of Fear_.
(1987) "Constructive Interpretation of Full Set Theory", /J. of
Symbolic Logic/, v. 52:1
Almost complete reconstruction of ZF set theory from a
constructivist philosophy, including implementation in the REFAL
Turney, P: (1989) "Architecture of Complexity: A New
Blueprint", /Synthese/, v. 79:3, pp. 515-542
Ulanowicz, R, and Hennon, B: (1987) "Life and Production of
Entropy", /Proc. R. Soc. London/, v. B 232, pp. 181-192
Excellent: principle of maximum entropy production; positive
feedback=autocatalysis; lasers as high dissipative, low entropy
producing systems; nuclear autocatalysis as greatest source of
entropy production; measurement techniques for biotic entropy
production; all chemical organization as either extinct or in
organisms; high efficiency as high entropy production; on metrics
of evolution.
v Bertalanfy, Ludwig: (1950) "An Outline of General Systems
Theory", /British J. of Philosophy of Science/, v. 1, pp.
(1962) "General Systems Theory - A Criticial Review", /General
Systems Yearbook/, v. 7, pp. 1-20
* Varela, FG, and Maturana, HR et. al.: (1974) "Autopoiesis: the
Organization of Living Systems, its Characterization, and a Model",
/Biosystems/, v. 5, pp. 187-196
First definition of autopoeisis.
von Foerster, Heinz: (1960) "On Self-Organizing Systems and
their Environments", in: /Self-Organizing Systems/, ed. Yovitz and
Cameron, Pergamon
Well written, many interesting observations. Proof of
meaningless of the term "SOS", first (?) discussion of "growth of
phase space" route to organization, on relative information,
order from noise principle.
von Neumann, John: (1963) "General and Logical Theory of
Automata", in: /Collected Works/, v. 5, ed. AH Taub, pp. 288-328,
Classic. On thermodynamics and fundamental cybernetics,
digital/analog distinctions and relations in complex systems.
(1963) "Probability, Logic, and Synthesis of Reliable Organization
from Unreliable Parts", in: /Collected Works/, v. 5, ed. AH Taub,
pp. 329-378, Pergamon
On logics, automata, and information theory.
* Waelchli, F: (1989) "Eleven Theses of General Systems Theory",
/Systems Research/, NOTE: To appear
R Walker, CC: (1971) "Behavior of a Class of Complex Systems",
/J. Cybernetics/, v. 1:4, pp. 55-67
A good example of the use of the computer in discovering systems
science laws.
Walker, CC, and Ashby, Ross: (1966) "On Temporal Characteristics of
Behavior in Certain Complex Systems", /Kybernetik/, v. 3:2, pp.
Warfield, JN, and Christakis, AN: (1986) "Dimensionality",
/Systems Research/, v. 3:3, Pergamon, Oxford
R Weaire, D, and Rivier, N: (1984) "Soaps, Cells and Statistics:
Random Patterns in 2-D", /Contemporary Physics/, v. 25:1, pp.
Continution of Rivier 1986.
* Weaver, Warren: (1968) "Science and Complexity", /American
Scientist/, v. 36, pp. 536-544
From Klir, on organized simplicity, unorganized complexity, and
organized complexity.
White, I: (1988) "Limits and Capabilities of Machines: A
Review", /IEEE Trans. on Sys., Man, and Cyb./, v. 18:6, pp.
Wicken, Jeffrey: (1987) "Entropy and Information: Suggestions
for a Common Language", /Philosphy of Science/, v. 54:2, pp.
Solid paper on more modern view of the relation between
thermodynamics and information theory.
Wilson, David S: (1989) "Reviving the Superorganism", /J.
Theor. Bio./, v. 136, pp. 337-356
On levels of selection, criteria for being an organism, systems
vs. aggregates. Wilson is a current SUNY faculty.
Wolfram, Stephen: (1988) "Complex Systems Theory", in:
/Emerging Syntheses in Science/, ed. David Pines, pp. 183-190,
Addison-Wesley, New York
Example of a more simplistic appeal to entropy as a metric of
Zadeh, Lofti A: (1958) "On the Identification Problem", /IRE
Trans. on Circuit Theory/, v. CT-3, pp. 277-281
* (1962) "From Circuit Theory to Systems Theory", /IRE
Proceedings/, v. 50, pp. 856-865
* (1963) "On the Definition of Adaptibility", /IEEE Proceedings/,
v. 51, pp. 469-470
(1963) "General Identification Problem", in: /Proceedings of the
Princeton Conference on the Identification Problem in Communications
and Control/, pp. 1-17
(1965) "Fuzzy Sets and Systems", in: /Systems Theory/, ed. J.
Fox, pp. 29-37, Polytechnic Press, Brooklyn NY
R (1973) "Outline of a New Approach to Analysis of Complex Sys.",
/IEEE Trans. on Sys., Man and Cyb./, v. 1:1, pp. 28-44
A motivation for using fuzziness in dealing with very complex
systems is discussed in detail.
(1982) "Fuzzy Systems Theory: Framework for Analysis of Buerocratic
Systems", in: /Sys. Meth. in Social Science Res./, ed. RE Cavallo,
pp. 25-41, Kluwer-Nijhoff, Boston
R Zeigler, BP: (1974) "Conceptual Basis for Modeling and
Simulation", /Int. J. Gen. Sys./, v. 1:4, pp. 213-228
A solid systems science conceptual framework for modeling and
simulation is introduced.
R (1976) "Hierarchy of Systems Specifications and Problems of
Structural Inference", in: /PSA 1976/, v. 1, ed. F.Suppe,
PD Asquith, pp. 227-239, Phil. Sci. Assoc., E. Lansing
Introduces a hierarchy of systems types (a formal treatment).
Zeleny, Milan: (1979) "Special Book Review", /Int. J. Gen.
Sys./, v. 5, pp. 63-71
(1988) "Tectology", /Int. J. Gen. Sys./, v. 14, pp. 331-343
On Bogdonav, an important historical figure in systems science.
Zwick, Martin: (1978) "Fuzziness and Catastrophe", in: /Proc.
of the Int. Conf. of Cyb.andSoc/, pp. 1237-1241, Tokyo/Kyoto
(1978) "Dialectics and Catastrophe", in: /Sociocybernetics/, ed.
F.Geyer et. al., pp. 129-155, Martinus Nijhoff, The Hauge,Neth.
R (1978) "Requisite Variety and the Second Law", in: /Proc. Int.
Conf. of Cyb. and Soc./, pp. 1065-1068, IEEE Sys. Man Cyb.,
Establishes the equivalence of Ashby's Requisite Variety Law and
the second law of thermodynamics.
(1978) "Quantum Measurement and Godel's Proof", /Speculations in
Science and Tech./, v. 1, pp. 135-145
R (1979) "Cusp Catastrophe and Laws of Dialectics", /System and
Nature/, v. 1, pp. 177-187
Expression of dialectical concepts (quantity to quality,
negation, interpenetration of opposites) in terms of catastrophe
(1982) "Dialectic Thermodynamics", /General Systems Yearbook/, v.
27, pp. 197-204
R (1984) "Information, Constraint, and Meaning", in: /Proceedings
SGSR/, ed. AW Smith, pp. 93-99, Intersystems
Wonderful treatment of the relation between syntax and semantics
in information theory.
Copyright© 1996 Principia Cybernetica - Referencing this page
C. Joslyn, & F. Heylighen,
Jul 22, 1996 (modified)
July, 1990 (created)
|
{"url":"http://pespmc1.vub.ac.be/CSPAPER.html","timestamp":"2014-04-18T04:29:48Z","content_type":null,"content_length":"41292","record_id":"<urn:uuid:833ffea3-bc5c-4ea9-a75e-fc664d9871a8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fords Accounting Tutor
...Have a great day! Pauline T.I passed the CPA exam in 1994. I have since been working in finance roles.
36 Subjects: including accounting, chemistry, English, precalculus
I am a highly motivated, passionate math teacher who has taught in high performing schools in four states and two countries. I have previously taught all grades from 5th to 10th and am extremely
comfortable teaching all types of math to all level learners. I am a results driven educator who motivates and educates in a fun, focused atmosphere.
7 Subjects: including accounting, geometry, algebra 1, algebra 2
As a mature mother of two teenage daughters, I have many years of experience with children both in and out of the classroom. I am certified in NJ to teach K-5 all subjects as well as 5-8 Math and
5-8 Science. I am also able to tutor high school students in Algebra I, Algebra II and SAT/ACT prep.
58 Subjects: including accounting, English, physics, writing
...I am detail oriented. I provide practice exercises from start to finish with the student, including topics and questions that are similar to those on a Regents exam. I try to spend as much time
as that student may need, and provide a variety of practice exercises.
47 Subjects: including accounting, chemistry, reading, writing
...Tutoring for many years has afforded me the opportunity to go through most of the available material, and I have pinpointed the techniques and strategies that work most effectively on each
question type. In a conventional classroom setting, many students are unable to keep up with the pace of th...
55 Subjects: including accounting, English, calculus, reading
|
{"url":"http://www.purplemath.com/Fords_Accounting_tutors.php","timestamp":"2014-04-20T02:04:18Z","content_type":null,"content_length":"23637","record_id":"<urn:uuid:4648e055-b081-486a-b6ff-aaed3f03599b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Universal prediction of individual sequences
Results 1 - 10 of 128
- JOURNAL OF THE ASSOCIATION FOR COMPUTING MACHINERY , 1997
"... We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no
assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the ..."
Cited by 317 (66 self)
Add to MetaCart
We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no
assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit
sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum
achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching
leading constants in most cases. We then show howthis leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in
this context. We also compare our analysis to the case in which log loss is used instead of the expected number of mistakes.
, 1995
"... Caching and prefetching are important mechanisms for speeding up access time to data on secondary storage. Recent work in competitive online algorithms has uncovered several promising new
algorithms for caching. In this paper we apply a form of the competitive philosophy for the first time to the pr ..."
Cited by 236 (11 self)
Add to MetaCart
Caching and prefetching are important mechanisms for speeding up access time to data on secondary storage. Recent work in competitive online algorithms has uncovered several promising new algorithms
for caching. In this paper we apply a form of the competitive philosophy for the first time to the problem of prefetching to develop an optimal universal prefetcher in terms of fault ratio, with
particular applications to large-scale databases and hypertext systems. Our prediction algorithms for prefetching are novel in that they are based on data compression techniques that are both
theoretically optimal and good in practice. Intuitively, in order to compress data effectively, you have to be able to predict future data well, and thus good data compressors should be able to
predict well for purposes of prefetching. We show for powerful models such as Markov sources and nth order Markov sources that the page fault rates incurred by our prefetching algorithms are optimal
in the limit for almost all sequences of page requests.
- In Proceedings of the 12th International Conference on Machine Learning , 1995
"... Abstract. We generalize the recent relative loss bounds for on-line algorithms where the additional loss of the algorithm on the whole sequence of examples over the loss of the best expert is
bounded. The generalization allows the sequence to be partitioned into segments, and the goal is to bound th ..."
Cited by 198 (18 self)
Add to MetaCart
Abstract. We generalize the recent relative loss bounds for on-line algorithms where the additional loss of the algorithm on the whole sequence of examples over the loss of the best expert is
bounded. The generalization allows the sequence to be partitioned into segments, and the goal is to bound the additional loss of the algorithm over the sum of the losses of the best experts for each
segment. This is to model situations in which the examples change and different experts are best for certain segments of the sequence of examples. In the single segment case, the additional loss is
proportional to log n, where n is the number of experts and the constant of proportionality depends on the loss function. Our algorithms do not produce the best partition; however the loss bound
shows that our predictions are close to those of the best partition. When the number of segments is k +1and the sequence is of length ℓ, we can bound the additional loss of our algorithm over the
best partition by O(k log n + k log(ℓ/k)). For the case when the loss per trial is bounded by one, we obtain an algorithm whose additional loss over the loss of the best partition is independent of
the length of the sequence. The additional loss becomes O(k log n + k log(L/k)), where L is the loss of the best partition with k +1segments. Our algorithms for tracking the predictions of the best
expert are simple adaptations of Vovk’s original algorithm for the single best expert case. As in the original algorithms, we keep one weight per expert, and spend O(1) time per weight in each trial.
- IEEE Transactions on Information Theory , 1998
"... Abstract — This paper consists of an overview on universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the
self-information loss function, which is directly related to the theory of universal data compression. Both th ..."
Cited by 136 (11 self)
Add to MetaCart
Abstract — This paper consists of an overview on universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the
self-information loss function, which is directly related to the theory of universal data compression. Both the probabilistic setting and the deterministic setting of the universal prediction problem
are described with emphasis on the analogy and the differences between results in the two settings. Index Terms — Bayes envelope, entropy, finite-state machine, linear prediction, loss function,
probability assignment, redundancy-capacity, stochastic complexity, universal coding, universal prediction. I.
- Games and Economic Behavior , 1999
"... We present a simple algorithm for playing a repeated game. We show that a player using this algorithm suffers average loss that is guaranteed to come close to the minimum loss achievable by any
fixed strategy. Our bounds are nonasymptotic and hold for any opponent. The algorithm, which uses the mult ..."
Cited by 134 (14 self)
Add to MetaCart
We present a simple algorithm for playing a repeated game. We show that a player using this algorithm suffers average loss that is guaranteed to come close to the minimum loss achievable by any fixed
strategy. Our bounds are nonasymptotic and hold for any opponent. The algorithm, which uses the multiplicative-weight methods of Littlestone and Warmuth, is analyzed using the Kullback–Liebler
divergence. This analysis yields a new, simple proof of the min–max theorem, as well as a provable method of approximately solving a game. A variant of our game-playing algorithm is proved to be
optimal in a very strong sense. Journal of Economic Literature
- IEEE Transactions on Information Theory , 2001
"... In this paper we show that on-line algorithms for classification and regression can be naturally used to obtain hypotheses with good datadependent tail bounds on their risk. Our results are
proven without requiring complicated concentration-of-measure arguments and they hold for arbitrary on-lin ..."
Cited by 133 (8 self)
Add to MetaCart
In this paper we show that on-line algorithms for classification and regression can be naturally used to obtain hypotheses with good datadependent tail bounds on their risk. Our results are proven
without requiring complicated concentration-of-measure arguments and they hold for arbitrary on-line learning algorithms. Furthermore, when applied to concrete on-line algorithms, our results yield
tail bounds that in many cases are comparable or better than the best known bounds.
, 1999
"... At each point in time a decision maker must choose a decision. The payoff in a period from the decision chosen depends on the decision as well as the state of the world that obtains at that
time. The difficulty is that the decision must be made in advance of any knowledge, even probabilistic, about ..."
Cited by 115 (2 self)
Add to MetaCart
At each point in time a decision maker must choose a decision. The payoff in a period from the decision chosen depends on the decision as well as the state of the world that obtains at that time. The
difficulty is that the decision must be made in advance of any knowledge, even probabilistic, about which state of the world will obtain. A range of problems from a variety of disciplines can be
framed in this way. In this
- Transactions on Information Theory , 2004
"... Abstract—We address the problem of how throughput in a wireless network scales as the number of users grows. Following the model of Gupta and Kumar, we consider identical nodes placed in a fixed
area. Pairs of transmitters and receivers wish to communicate but are subject to interference from other ..."
Cited by 115 (3 self)
Add to MetaCart
Abstract—We address the problem of how throughput in a wireless network scales as the number of users grows. Following the model of Gupta and Kumar, we consider identical nodes placed in a fixed
area. Pairs of transmitters and receivers wish to communicate but are subject to interference from other nodes. Throughput is measured in bit-meters per second. We provide a very elementary
deterministic approach that gives achievability results in terms of three key properties of the node locations. As a special case, we obtain throughput for a general class of network configurations
in a fixed area. Results for random node locations in a fixed area can also be derived as special cases of the general result by verifying the growth rate of three parameters. For example, as a
simple corollary of our result we obtain a stronger (almost sure) version of the log throughput for random node locations in a fixed area obtained by Gupta and Kumar. Results for some other
interesting non-independent and identically distributed (i.i.d.) node distributions are also provided. Index Terms—Ad hoc networks, capacity, deterministic, individual sequence, multihop, random,
scaling, throughput, wireless networks. I.
, 1999
"... The complexity of the mobility tracking problem in a cellular environment has been characterized under an information-theoretic framework. Shannon’s entropy measure is iden-tified as a basis for
comparing user mobility models. By building and maintaining a dictionary of individual user’s path update ..."
Cited by 112 (12 self)
Add to MetaCart
The complexity of the mobility tracking problem in a cellular environment has been characterized under an information-theoretic framework. Shannon’s entropy measure is iden-tified as a basis for
comparing user mobility models. By building and maintaining a dictionary of individual user’s path updates (as opposed to the widely used location up-dates), the proposed adaptive on-line algorithm
can learn subscribers’ profiles. This technique evolves out of the con-cepts of lossless compression. The compressibility of the variable-to-fixed length encoding of the acclaimed Lempel-Ziv family
of algorithms reduces the update cost, whereas their built-in predictive power can be effectively used to re-duce paging cost.
- Annals of Statistics , 1999
"... We study estimation in the class of stationary variable length Markov chains (VLMC) on a finite space. The processes in this class are still Markovian of higher order, but with memory of
variable length yielding a much bigger and structurally richer class of models than ordinary higher order Markov ..."
Cited by 85 (5 self)
Add to MetaCart
We study estimation in the class of stationary variable length Markov chains (VLMC) on a finite space. The processes in this class are still Markovian of higher order, but with memory of variable
length yielding a much bigger and structurally richer class of models than ordinary higher order Markov chains. From a more algorithmic view, the VLMC model class has attracted interest in
information theory and machine learning but statistical properties have not been explored very much. Provided that good estimation is available, an additional structural richness of the model class
enhances predictive power by finding a better trade-off between model bias and variance and allows better structural description which can be of specific interest. The latter is exemplified with some
DNA data. A version of the tree-structured context algorithm, proposed by Rissanen (1983) in an information theoretical set-up, is shown to have new good asymptotic properties for estimation in the
class of VLMC's, even when the underlying model increases in dimensionality: consistent estimation of minimal state spaces and mixing properties of fitted models are given. We also propose a new
bootstrap scheme based on fitted VLMC's. We show its validity for quite general stationary categorical time series and for a broad range of statistical procedures. AMS 1991 subject classifications.
Primary 62M05; secondary 60J10, 62G09, 62M10, 94A15 Key words and phrases. Bootstrap, categorical time series, central limit theorem, context algorithm, data compression, finite-memory sources, FSMX
model, Kullback-Leibler distance, model selection, tree model. Short title: Variable Length Markov Chain 1 Research supported in part by the Swiss National Science Foundation. Part of the work has
been done while visiting th...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=120856","timestamp":"2014-04-19T20:06:43Z","content_type":null,"content_length":"40545","record_id":"<urn:uuid:461fa105-fcdf-47c7-90e3-e1170921a4a5>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
|
algebraic transformations
September 8th 2010, 12:35 PM #1
Jul 2010
algebraic transformations
Can anyone help me with these equations.
make T the subject of the logarithmic relationship
M= V In T
the following formula occurs in thermodynamics
Q= R0 In (V2/V1)
Transform to make V2 the subject.
The ratio of belt tensions in a v-belt transmission is given by
R= γ0 / e^sinα
transform to make α the subject
"l"n, not "I"n!! I have seen that for year, and I have never understood why people write "In(x)" "l" for "l"ogarithm, "n" for "n"atural!
Any way, you solve an equation (make T the subject) by "un doing" what has been done to T. Here, if you were given T you would find M by doing two things: first take the logarithm of T, then
multiply by V. To solve for T, do the opposite, in the opposite order. The opposite of "mutiply by V" is "divide by V" so you divide both sides by V: M/V= V ln(T)/V= ln(T). Now you do the
opposite of "logarithm" which is the exponential: f(x)= ln(x) is defined to be the inverse function to $g(x)= e^x$. Take the exponential of both sides:
$e^{M/V}= e^{ln(T)}= T$. $T= e^{M/V}$.
the following formula occurs in thermodynamics
Q= R0 In (V2/V1)
Transform to make V2 the subject.
Same basic idea: $Q= R_0 ln(V2/V1)$. Q/R= ln(V2/V1). $e^{Q/R}= V2/V1$, $V1e^{Q/R}= V2$.
The ratio of belt tensions in a v-belt transmission is given by
R= γ0 / e^sinα
transform to make α the subject[/QUOTE]
$R= \frac{y_0}{e^{sin(\alpa)}}$
Multiply both sides by $e^{sin(\alpha)}$:
$Re^{sin(\alpha)}= y_0$
Divide both sides by R:
$e^{sin(\alpha)}= \frac{y_0}{R}$
Take the logarithm (inverse to exponential) of both sides:
$ln(e^{sin(\alpha)}= sin(\alpha)= ln(y_0/R)$
Take the inverse sine of both sides:
$sin^{-1}(sin(\alpha))= \alpha= sin^{-1}(ln(y_0/R))$
September 8th 2010, 01:36 PM #2
MHF Contributor
Apr 2005
|
{"url":"http://mathhelpforum.com/algebra/155587-algebraic-transformations.html","timestamp":"2014-04-18T19:21:02Z","content_type":null,"content_length":"37297","record_id":"<urn:uuid:856e53e5-8e86-4698-abf6-79fba451fb7f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Confidence Statements
Associated With Sampling Plans
Dr. Wayne A. Taylor
Suppose the single sampling plan with sample size n=50 and accept number a=1 is being used. If a lot is accepted, one can state with 90% confidence that the lot is less than 7.56% defective.
Likewise, if the lot is rejected, one can state with 95% confidence that the lot is above 0.715% defective.
The values 0.715 and 7.56 correspond to the AQL and LTPD[0.10] of the sampling plan respectively. These values are obtained from the operating characteristic (OC) curve of the sampling plan. The
figure below shows the OC curve of the above sampling plan.
The bottom axis is the percent defective. The left axis gives the corresponding probability of acceptance. For example, a 3% defective lot has a 0.56 probability of acceptance. Each sampling plan has
its own distinctive OC curve.
The AQL and LTPD[0.10] represent two points on the OC curve. The AQL is defined to be that percent defective with a 95% chance of acceptance. The following figure shows that for the single sampling
plan n=50 and a=1, the AQL is 0.715% defective. The AQL represents a level of defects routinely accepted by the sampling plan.
Whenever a sampling plan rejects a lot, one can state with 95% confidence that the lot is above the AQL. This is a result of the fact that lots at or below the AQL are not likely to be rejected. The
level of confidence, 95%, is a direct result of the fact that at the AQL there is a 95% chance of acceptance.
The LTPD[0.10] is defined to be that percent defective with a 10% chance of acceptance. The above figure shows that for the single sampling plan n=50 and a=1, the LTPD[0.10] is 7.56% defective. The
LTPD[0.10] represents a level of defects routinely rejected by the sampling plan.
Whenever a sampling plan accepts a lot, one can state that with 90% confidence that the lot is below the LTPD[0.10]. This is a result of the fact that lots at or above the LTPD[0.10] are not likely
to be accepted. The level of confidence, 90%, is equal to 100% - 10% where 10% is the chance of acceptance at the LTPD[0.10].
The AQL and LTPD[0.10] represent special cases of percentiles of the OC curve. The AQL is the 95^th percentile while the LTPD[0.10] is the 10^th percentile. Other percentiles can be used as well. The
LTPD[0.05] is the 5^th percentile. If a lot is accepted, one can state with 95% confidence that the lot is below the LTPD[0.05].
The percentiles of sampling plans from MIL-STD-105E and ANSI/ASQC Z1.4 can be obtained from Table X of those standards. Most other tables of sampling plans provide similar information. The software
accompanying my book Guide to Acceptance Sampling can also be used to determine the percentiles.
It is important to note that passing a sampling plan does not mean that the lot is "good." If we consider lots above the AQL as bad and lots below the AQL as good, then rejecting a lot proves that
the lot is bad. However, passing the lot only proves that the lot is below the LTPD, not the AQL. Sampling plans will accept some lots above the AQL if such lots are produced.
Appeared in FDC Control, Food Drug & Cosmetic Division ASQ, No. 116, December 1997, p. 2
Copyright © 1997 Taylor Enterprises, Inc.
|
{"url":"http://www.variation.com/techlib/as-6.html","timestamp":"2014-04-18T05:38:24Z","content_type":null,"content_length":"11741","record_id":"<urn:uuid:e6948012-2c45-41aa-af48-8840d452251f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Preprints 2008
The three-letter code attached to the preprint number indicates the scientific programme during which the paper was written. Click on the code to see the programme details.
Preprint No. Author(s) Title and publication details
NI08001-SIS D Forcella, A Hanany, Y-H He, A Zaffaroni Mastering the master space
NI08002-SIS P Benincase, A Buchel, MP Heller, RA Janik On the supergravity description of boost invariant conformal plasma at strong coupling
NI08003-SIS RA Janik, M Trzetrzelewski Supergravitations from one loop perturbative N = 4 SYM
NI08004-SIS T Quella, V Schomerus, T Creutzig Boundary spectra in superspace sigma-models
NI08005-SIS L Del Debbio, MT Frandsen, H Panagopoulos, F Sannino Higher representations on the lattice: perturbative studies
NI08006-SIS N Evans, E Threlfall Mesonic quasinormal modes of the Sakai-Sugimoto model at high temperature
NI08007-AGA MJ Gruber, M Helm, I Veselić Optimal Wegner estimates for random Schrödinger operators on metric graphs
NI08008-SIS K Zarembo Quantum giant magnons
NI08009-AGA MJ Gruber, DH Lenz, I Veselić Uniform existence of the integrated density of states for combinatorial and metric graphs over Z^d
NI08010-HOP SN Chandler-Wilde, P Monk The PML for rough surface scattering
NI08011-SIS G D'Appollonio, T Quella The diagonal cosets of the Heisenberg group
NI08012-HOP S Langdon, M Mokgolele, SN Chandler-Wilde High frequency scattering by convex curvilinear polygons
NI08013-CSM DG Wagner Weighted enumeration of spanning subgraphs with degree constraints
NI08014-HOP SN Chandler-Wilde, IG Graham Boundary integral methods in high frequency scattering
NI08015-CSM J Cibulka, J Hladký, MA Lacroix, DG Wagner A combinatorial proof of Rayleigh monotonicity for graphs
NI08016-SIS M Cirafici, A Sinkovics, RJ Szabo Cohomological gauge theory, quiver matrix models and Donaldson-Thomas theory
NI08017-HOP SN Chandler-Wilde, M Lindner Limit operators, collective compactness and the spectral theory of infitinte matrices
NI08018-SCH J Baek, GJ McLachlan Mixtures of factor analyzers with common factor loadings for the clustering and visualisation of high-dimensional data
NI08020-HOP V Michel, AS Fokas A unified approach to various techniques for the non-uniqueness of the inverse gravimetric problem and wavelet-based methods
NI08021-SCH CP Robert, N Chopin, J Rousseau Harold Jeffreys' Theory of Probability revisited
NI08022-SCH CP Robert, MA Beaumont, J–M Marin, J–M Cornuet Adaptivity for ABC algorithms: the ABC-PMC scheme
NI08024-CSM B Jackson, A Sokal Zero-free regions for multivariate Tutte polynomials (alias Potts-model partition functions) of graphs and matroids
NI08025-CSM ØJ Rødseth, JA Sellers, H Tverberg Enumeration of the degree sequences of non-separable graphs and connected graphs
NI08026-CSM T Gateva-Ivanova, S Majid Quantum spaces associated to multipermutation solutions of level 2
NI08027-CSM B Jackson Counting 2-connected deletion-minors of binary matroids
NI08028-CSM PJ Cameron, D Johannsen, T Prellberg, P Schweitzer Counting defective parking functions
NI08029-CSM PJ Cameron Oligomorphic permutation groups
NI08030-CSM B Bollobás, S Janson, O Riordan Sparse random graphs with clustering
NI08031-LAA JF Lynch A logical characterization of individual-based models
NI08032-HRT CV Tran, L Blackbourn The number of degrees of freedom of two-dimensional turbulence
NI09001-SCH J-H Xue, DM Titterington Comment on "On discriminative vs. generative classifiers: a comparison of logistic regression and naive Bayes"
NI09002-SCH HA Chipman, EI George, RE McCulloch BART: Bayesian Additive Regression Trees
NI09003-SCH RD Cook, L Forzani Covariance reducing models: an alternative to spectral modeling of covariance matrices
NI09004-SCH L Dümbgen, SA van de Geer, JA Wellner Nemirovski's inequalities revisited
NI09005-NPA EF Toro, A Hidalgo, M Dumbser FORCE schemes on unstructured meshes I: conservative hyperbolic systems
NI09006-HOP B Lassen, RVN Melnik, M Willatzen Spurious solutions in the multiband effective mass theory applied to low dimensional nanostructures
NI09007-SIS V Mitev, T Quella, V Schomerus Principal chiral model on superspheres
NI09008-AGA D Grieser Monotone unitary families
NI09009-AGA D Grieser Thin tubes in mathematical physics, global analysis and spectral geometry
NI09010-AGA D Grieser Spectra of graph neighborhoods and scattering
NI09011-MPA A Lytova, L Pastur On asymptotic behaviour of multilinear Eigenvalue statistics of random matrices
NI09012-SCH S Kritchman, B Nadler Determining the number of components in a factor model from limited noisy data
NI09013-PLG ES Allman, C Matias, JA Rhodes Identifiability of latent class models with many observed variables
NI09014-CSM JW Essam, FY Wu The exact evaluation of the corner-to-corner resistance of an M×N resistor network: asymptotic expansion
NI09015-CSM G Farr Transforms and minors for binary functions
NI09016-SCH F Bunea Honest variable selection in linear and logistic regression models via l[1] and l[1] + l[2] penalisation
NI09017-MPA M Dumbser, A Hidalgo, M Castro, C Parés, EF Toro FORCE schemes on unstructured meshes II: nonconservative hyperbolic systems
NI09018-SCH DG Dritschel, RK Scott, C Macaskill, GA Gottwald, CV Tran Late time evolution of unforced inviscid two-dimensional turbulence
NI09019-MPA J Schenker Eigenvector localization for random band width matrices with power law band width
NI09020-MPA Y Kang, J Schenker Diffusion of wave packets in a Markov random potential
NI09021-NPA LF Dinu, MI Dinu Martin's "differential" approach: some classifying remarks
|
{"url":"http://www.newton.ac.uk/preprints2008.html","timestamp":"2014-04-19T17:10:45Z","content_type":null,"content_length":"20167","record_id":"<urn:uuid:f3891da6-88bf-4e5e-abbf-ef1b8881ae2c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Category:Walsh matrix
A Walsh matrix is a special square matrix, that contains only 1 and -1 entries.
This category contains also matrices, that share only the pattern of a Walsh matrix,
especially binary Walsh matrices, where 1 and -1 are replaced by 0 and 1.
This category has the following 3 subcategories, out of 3 total.
Media in category "Walsh matrix"
The following 4 files are in this category, out of 4 total.
Last modified on 30 March 2014, at 17:47
|
{"url":"https://commons.m.wikimedia.org/wiki/Category:Walsh_matrix","timestamp":"2014-04-20T14:59:02Z","content_type":null,"content_length":"20047","record_id":"<urn:uuid:069c2c9a-9f29-4d16-95ca-6c36d7270178>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fort Washington, MD ACT Tutor
Find a Fort Washington, MD ACT Tutor
...Though I am located in Arlington, Virginia, I am happy to travel to meet students, particularly to areas that are easily accessible via Metro.I work as a professional economist, where I
utilize econometric models and concepts regularly using both STATA and Excel. I have also had extensive course...
16 Subjects: including ACT Math, calculus, statistics, geometry
...TJ. I tutor many TJ students in chemistry and biology (honors and AP). I also tutor high school classes at the opposite end of the spectrum, with substantial numbers of at-risk students, from
troubled backgrounds. I won two teaching awards as a graduate student teaching assistant in Genetics l...
25 Subjects: including ACT Math, chemistry, writing, reading
...While in high school, I also tutored classmates in various math courses. I am an active volunteer in the community, working with children to teach them about science concepts. I have
experience working with children ages 10 and up, so I can work with those who are younger or those who are older and need assistance with more advanced coursework.
25 Subjects: including ACT Math, chemistry, physics, calculus
...I have also been a math tutor through college, teaching up to Calculus-level classes. My tutoring style can adapt to individual students and will teach along with class material so that
students can keep their knowledge grounded. I have a Master's degree in Chemistry and I am extremely proficient in mathematics.
11 Subjects: including ACT Math, chemistry, geometry, algebra 2
...I am very patient, flexible, and have mentored and tutored students for 25+ years. If you are interested, I can meet with you at a local library Monday - Sunday, except Wednesdays. I look
forward to working with you!
32 Subjects: including ACT Math, reading, GRE, English
Related Fort Washington, MD Tutors
Fort Washington, MD Accounting Tutors
Fort Washington, MD ACT Tutors
Fort Washington, MD Algebra Tutors
Fort Washington, MD Algebra 2 Tutors
Fort Washington, MD Calculus Tutors
Fort Washington, MD Geometry Tutors
Fort Washington, MD Math Tutors
Fort Washington, MD Prealgebra Tutors
Fort Washington, MD Precalculus Tutors
Fort Washington, MD SAT Tutors
Fort Washington, MD SAT Math Tutors
Fort Washington, MD Science Tutors
Fort Washington, MD Statistics Tutors
Fort Washington, MD Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Fort_Washington_MD_ACT_tutors.php","timestamp":"2014-04-19T15:09:39Z","content_type":null,"content_length":"24102","record_id":"<urn:uuid:b8bb85c5-99ff-4805-8f90-9058714fcb97>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Game Theory of The Price is Right: Part 2
In my
last post
, I discussed some game theory behind the Cliffhangers game on
The Price is Right
. And while watching the stupidity of Cliffhangers players got me angry, what really got me thinking was the bidding itself.
To get up on stage (which comes with a chance to win bigger prizes), four contestants take turns guessing the value of an item (usually several hundred dollars). The one with the closest guess to the
item's actual value, without going over, wins the item and gets to go up on stage.
Frequently, people make really dumb guesses. Someone will bid, say, $420 for an item, and another contestant will subsequently bid $415, giving themselves a $5 window in which to win the item. But
more interesting is whether or not contestants decide to bid $1 above someone else. Frequently, someone will bid, say, $475 after another contestant has bid $420. They could have bid $421 (and indeed
this does happen, as in the video below), but in the unwritten etiquette of
The Price is Right
, it's viewed as a low blow.
Despite the evilness of the $1-top-up, the game theorist in me wondered why contestants — specifically, the fourth contestant — don't do it more often. I thought, perhaps, that it was because they
are playing a repeated game (the three contestants who do not win get to bid again on a new item against a new fourth contestant, except after the sixth and final round of the show). If I'm playing
against the same people again, then perhaps I don't want to be mean to them, since then they might be mean to me later.
But after further pondering, I realized this does not make sense. In the last round, repetition is not an issue, so the fourth contestant would want to bid $1 above someone else (or bid $1, if they
think everyone else has overbid). Given that there is no incentive for contestants to cooperate in the last round, contestants shouldn't coopoerate in the fifth round (since there's no point to
endearing themselves to their opponents for the sixth round). And if contestants shouldn't cooperate in the fifth round, there's no incentive to cooperate in the fourth round either, and so on. For
game theorists, this is an example of a
subgame perfect Nash equilibrium
(solved using
backwards induction
So if fourth player's optimal bid is to do the evil $1-top-up, I found myself asking why this doesn't happen more often. Here are the possible reasons I though of:
1. Some contestants are stupid. Economists don't like this answer, since it's easier to assume everyone is rational, but if you watch The Price is Right regularly, it's hard to dismiss this
2. The show is televised; as much as contestants would love to win a karaoke set, contestants don't want to look like a huge jerk on national TV. This strikes me as a somewhat plausible explanation.
3. Contestants don't like the item being presented (remember that if you lose, in most cases you'll be able to try again. So if I have no interest in winning the karaoke set, I may have an incentive
to intentionally bid poorly and hope that the next item up for grabs is more appealing). This, however, is not a very convincing explanation, since contstants are not guaranteed another shot at
winning, and the main attraction of winning is not getting the karaoke set, but getting the chance to win something bigger once the contestant gets up on stage.
4. Contestants want to guess the exact price of the item. Contestants receive a bonus ($500, I think) for guessing the price of the item exactly. It may be that this incentive strongly influences
bids. In the extreme case, if all I care about is the $500, it does not matter what my opponents bid, since whether I bid $419, $420 or $421, I would have the same odds, in theory, of guessing
the exact price. But I'm not convinced by this explanation either, since the real prospect of winning bigger prizes should outweight the miniscule chance of obtaining the $500 incentive.
Reading the article, one can't help but gain an appreciation for
the real beauty of the game theory
The Price is Right
. The math is somewhat complex, but it finishes in a clean result: in a world with identical contestants who act rationally, the fourth contestant wins one third (i.e. 3/9ths) of the time, and
everyone else wins 2/9ths of the time.
In this perfect world, the first contestant bids highest, the second the next highest, the third the next highest, and the fourth bids $1. Players 1 through 3 evenly space their bids out across the
probability distribution for the prize (think of this as the range of values that they think the price could be, taking into account how likely each value is to arise — for example, it's very likely
that it's $500, somewhat possible it's $250 or $750, and very unlikely that it's $0 or $1,000). The study includes this graph, which may help visually-inclined readers:
My intuition was that, in an equilibrium, all players would evenly space out their bids, so I was confused by the theory at first. But the reason contestants don't evenly space out their bid is that
the last player gets the trump card: he or she could bid $1 more than someone else, without having to worry about someone else doing the same to them (this round, at least). So if the first three
contestants bid at 0.75, 0.5, and 0.25 in the graph above, then the fourth contestant could bid 0, 0.2500001, 0.5000001 or 0.7500001, and have for all intents and purposes an equal chance of winning
that sequence of bidding.
But bidding 0 isn't the best strategy, since if the fourth contestant bids $1 more than one of the previous contestants, he or she has the same chance of winning the round, but there is also the
chance that everyone overbids, in which case the players have to bid again and get a shot at winning. In other words, the fourth bidder is not indifferent between bidding a dollar and bidding a
dollar more than someone else when bids are evenly spaced, since if they bid a dollar more than someone else, they get two chances at winning (once the first time, and again if everyone overbids).
They only get one shot at winning when they bid $1.
Thus, the other players have to concede a little bit to the fourth contestant in order to coax him or her into bidding $1, thereby preventing the fourth contestant from doing an evil top-up bid.
But does the theory play out in real life? Not at all. The study examined dozens of Price is Right episodes and basically concluded that the average contestant is stupid:
"Our results indicate that rational decision theory cannot explain contestant behavior on The Price Is Right. Even when faced with relatively simple problems, we demonstrate that some (indeed
most) contestants do not deduce the optimal strategy."
follow-up study
Paul Healy
Charles Noussair
found similar results, namely that the
Price is Right
is to complicated for people to figure out (although if you cut it down to three contestants, remove the possibility of rebidding if everyone overbids, and let people play a whole bunch of times,
they start to get the hang of it).
So the venerable
Happy Gilmore
might have summed it up best: when it comes to contestants' use of game theory on the
Price is Right
, more often than not, "
The price is wrong, b@#$%
|
{"url":"http://dmkarp.blogspot.com/2011/12/game-theory-of-price-is-right-part-2.html","timestamp":"2014-04-18T08:57:59Z","content_type":null,"content_length":"89953","record_id":"<urn:uuid:8031bc3d-8dd8-4095-964f-a987c1786886>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Using Tan to Find Slope & Equation
May 22nd 2012, 08:38 AM #1
May 2012
Using Tan to Find Slope & Equation
I need to know the answer to this question for an upcoming exam. It goes like this:
"Using tan of an angle to find slope of the line and then the equation of the line."
All help is much appreciated.
Re: Using Tan to Find Slope & Equation
note the similarity between the two diagrams ...
May 22nd 2012, 10:16 AM #2
|
{"url":"http://mathhelpforum.com/trigonometry/199097-using-tan-find-slope-equation.html","timestamp":"2014-04-17T04:15:52Z","content_type":null,"content_length":"33198","record_id":"<urn:uuid:f227739b-5407-46c3-b0b6-750c6d127db3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Coordinate Plane
4.1: The Coordinate Plane
Created by: CK-12
In Lesson 1.6, you graphed ordered pairs. This lesson will expand upon your knowledge of graphing ordered pairs to include vocabulary and naming of specific items related to ordered pairs.
An ordered pair is also called a coordinate. The $y-$$x-$
A two-dimensional (2-D) coordinate has the form $(x,y)$
The 2-D plane that is used to graph coordinates or equations is called a Cartesian plane or a coordinate plane. This 2-D plane is named after its creator, Rene Descartes. The Cartesian plane is
separated into four quadrants by two axes. The horizontal axis is called the $x-$ and the vertical axis is called the $y-$. The quadrants are named using Roman Numerals. The image below illustrates
the quadrant names.
The first value of the ordered pair is the $x-$. This value moves along the $x-$$y-$. This value moves along the $y-$
Multimedia Link: For more information on the Cartesian plane and how to graph ordered pairs, visit Purple Math’s - http://www.purplemath.com/modules/plane.htm - website.
Example 1: Find the coordinates of points $Q$$R$
Solution: In order to get to $Q$$x$negative $y$$x$$Q$$y$$Q$
The coordinates of $R$$x-$$x$$y-$$y$
Words of Wisdom from the Graphing Plane
Not all axes will be labeled for you. There will be many times you are required to label your own axes. Some problems may require you to graph only the first quadrant. Others need two or all four
quadrants. The tic marks do not always count by ones. They can be marked in increments of 2, 5, or even $\frac{1}{2}$
The increments by which you count your axes should MAXIMIZE the clarity of the graph.
In Lesson 1.6, you learned the vocabulary words relation, function, domain, and range.
A relation is a set of ordered pairs.
A function is a relation in which every $x-$exactly one $y-$
The set of all possible $x-$domain.
The set of all possible $y-$range.
Graphing Given Tables and Rules
If you kept track of the amount of money you earned for different hours of babysitting, you created a relation. You can graph the information in this table to visualize the relationship between these
two variables.
$& \text{Hours} && 4 && 5 && 10 && 12 && 16 && 18\\& \text{Total \} && 12 && 15 && 30 && 36 && 48 && 54$
The domain of the situation would be all positive real numbers. You can babysit for a fractional amount of time but not a negative amount of time. The domain would also be all positive real numbers.
You can earn fractional money, but not negative money.
If you read a book and can read twenty pages an hour, there is a relationship between how many hours you read and how many pages you read. You may even know that you could write the formula as
$n & = 20 \cdot h && n = \text{number of pages;} && h = \text{time measured in hours. OR...}\\h & = \frac{n}{20}$
To graph this relation, you could make a chart. By picking values for the number of hours, you can determine the number of pages read. By graphing these coordinates, you can visualize the relation.
Hours Pages
1.5 30
3.5 70
This relation appears to form a straight line. Therefore, the relationship between the total number of read pages and the number of hours can be called linear. The study of linear relationships is
the focus of this chapter.
Practice Set
Sample explanations for some of the practice exercises below are available by viewing the following video. Note that there is not always a match between the number of the practice exercise in the
video and the number of the practice exercise listed in the following exercise set. However, the practice exercise is the same in both. CK-12 Basic Algebra: The Coordinate Plane (6:50)
In questions 1 – 6, identify the coordinate of the given letter.
1. D
2. A
3. F
4. E
5. B
6. C
Graph the following ordered pairs on one Cartesian plane. Identify the quadrant in which each ordered pair is located.
7. (4, 2)
8. (–3, 5.5)
9. (4, –4)
10. (–2, –3)
11. $\left (\frac{1}{2}, –\frac{3}{4}\right )$
12. (–0.75, 1)
13. $\left (-2\frac{1}{2}, -6\right )$
14. (1.60, 4.25)
In 15 – 22, using the directions given in each problem, find and graph the coordinates on a Cartesian plane.
15. Six left, four down
16. One-half right, one-half up
17. Three right, five down
18. Nine left, seven up
19. Four and one-half left, three up
20. Eight right, two up
21. One left, one down
22. One right, three-quarter down
23. Plot the vertices of triangle $ABC:(0, 0),(4, -3),(6, 2)$
24. The following three points are three vertices of square $ABCD$$D$$A(-4,-4) \ B(3,-4) \ C(3,3)$
25. Does the ordered pair (2, 0) lie in a quadrant? Explain your thinking.
26. Why do you think (0, 0) is called the origin?
27. Becky has a large bag of M&Ms that she knows she should share with Jaeyun. Jaeyun has a packet of Starburst candy. Becky tells Jaeyun that for every Starburst he gives her, she will give him
three M&Ms in return. If $x$$y$
1. Write an algebraic rule for $y$$x$
2. Make a table of values for $y$$x$
3. Plot the function linking $x$$y$$0 \le x \le 10,0 \le y \le 10$
28. Consider the rule: $y=\frac{1}{4} x+8$
29. Ian has the following collection of data. Graph the ordered pairs and make a conclusion from the graph.
Year % of Men Employed in the United States
1973 75.5
1980 72.0
1986 71.0
1992 69.8
1997 71.3
2002 69.7
2005 69.6
2007 69.8
2009 64.5
Mixed Review
30. Find the sum: $\frac{3}{8}+\frac{1}{5}-\frac{5}{9}$
31. Solve for $m: 0.05m+0.025(6000-m)=512$
32. Solve the proportion for $u: \frac{16}{u-8}=\frac{36}{u}$
33. What does the Additive Identity Property allow you to do when solving an equation?
34. Shari has 28 apples. Jordan takes $\frac{1}{4}$
35. The perimeter of a triangle is given by the formula $Perimeter=a+b+c$$a, b,$$c$$\triangle ABC$
36. Evaluate $\frac{y^2-16+10y+2x}{2}$for $x=2$and $y=-2.$
Files can only be attached to the latest version of None
|
{"url":"http://www.ck12.org/book/CK-12-Algebra-Basic/r1/section/4.1/","timestamp":"2014-04-19T00:52:26Z","content_type":null,"content_length":"121082","record_id":"<urn:uuid:16afd88c-be29-45bc-ab5a-b1c8214cd117>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gini index and Lorenz curve with R
February 23, 2012
By tuxettechix
You can do anything pretty easily with R, for instance, calculate concentration indexes such as the Gini index or display the Lorenz curve (dedicated to my students).
Although I did not explain it during my lectures, calculating a Gini index or displaying the Lorenz curve can be done very easily with R. All you have to do is to figure out which of the billions
packages available on CRAN (ok, only 3,629 packages to be honest) will give you the answer (and for that, Google can help you: just try to google “r cran gini” and you should be able to find by
yourself a few answers).
One of the packages that can do it is ineq that you can install in R by using the command line (or by whichever alternative method you want):
The package should be loaded in R by
and then, you can start to use it. I’ll show a very simple example of its use for the concepts that I have taught during the first year lectures. The example is based on the data AirPassengers that
you may load by simply typing:
(these data are the monthly totals of international airline passengers, from 1949 to 1960 and are thus relevant enough for a concentration analysis).
Gini index
The Gini index of the distribution can be calculated by:
[1] 0.2407563
(see also help(ineq) for more advanced features)
Lorenz curve
The Lorenz curve is displayed by
or with
(if you want to change color and line width but see also help(Lc) for an advanced use). The resulting picture is given below:
for the author, please follow the link and comment on his blog:
tuxettechix » R
daily e-mail updates
news and
on topics such as: visualization (
), programming (
Web Scraping
) statistics (
time series
) and more...
If you got this far, why not
subscribe for updates
from the site? Choose your flavor:
, or
|
{"url":"http://www.r-bloggers.com/gini-index-and-lorenz-curve-with-r/","timestamp":"2014-04-19T14:42:34Z","content_type":null,"content_length":"41696","record_id":"<urn:uuid:87e376b4-67fc-462e-8456-e2e3ca36c0b4>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
|
"Determine whether the series is convergent or divergent."
April 14th 2010, 04:37 PM
"Determine whether the series is convergent or divergent."
Determine whether the series is convergent or divergent.
The answer is Divergent.
I'd appreciate it if someone could explain to me why this is divergent. Does it have anything do with the same summation of 1/n ?
Thanks in advance!
April 14th 2010, 04:54 PM
Determine whether the series is convergent or divergent.
The answer is Divergent.
I'd appreciate it if someone could explain to me why this is divergent. Does it have anything do with the same summation of 1/n ?
Thanks in advance!
what can you say about the series ...
$-10 \sum{\frac{1}{n^{\frac{3}{5}}}}$ ?
April 14th 2010, 05:16 PM
Does it diverge BECAUSE OF the comparison test with 1/n and since 1/n diverges and is smaller that's what makes what you said divergent and therefore the whole thing divergent?
April 14th 2010, 05:21 PM
|
{"url":"http://mathhelpforum.com/calculus/139215-determine-whether-series-convergent-divergent-print.html","timestamp":"2014-04-20T02:49:48Z","content_type":null,"content_length":"6532","record_id":"<urn:uuid:4df17732-f790-4bfa-867d-27fee2df5587>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: 2-Source Dispersers for no(1)
Entropy, and Ramsey Graphs Beating
the Frankl-Wilson Construction
Boaz Barak
Anup Rao
Ronen Shaltiel
Avi Wigderson§
July 22, 2008
The main result of this paper is an explicit disperser for two independent sources on n bits,
each of min-entropy k = 2log1-0 n
, for some small absolute constant 0 > 0). Put differently,
setting N = 2n
and K = 2k
, we construct an explicit N × N Boolean matrix for which no K × K
sub-matrix is monochromatic. Viewed as the adjacency matrix of a bipartite graph, this gives an
explicit construction of a bipartite K-Ramsey graph of 2N vertices.
This improves the previous the previous bound of k = o(n) of Barak, Kindler, Shaltiel, Sudakov
and Wigderson [BKS+
05]. As a corollary, we get a construction of a 22log1-0 n
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/097/1308671.html","timestamp":"2014-04-21T13:28:16Z","content_type":null,"content_length":"8108","record_id":"<urn:uuid:23cb49c1-fb74-409f-aac5-55d69aaa0f94>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hierarchical Inference of Unicast Network Topologies Based on End-to-End Measurements
Download Links
by Meng-fu Shih , Alfred O. Hero
author = {Meng-fu Shih and Alfred O. Hero},
title = {Hierarchical Inference of Unicast Network Topologies Based on End-to-End Measurements},
year = {}
Abstract—In this paper, we address the problem of topology discovery in unicast logical tree networks using end-to-end measurements. Without any cooperation from the internal routers, topology
estimation can be formulated as hierarchical clustering of the leaf nodes based on pairwise correlations as similarity metrics. Unlike previous work that first assumes the network topology is a
binary tree and then tries to generalize to a nonbinary tree, we provide a framework that directly deals with general logical tree topologies. A hierarchical algorithm to estimate the topology is
developed in a recursive manner by finding the best partitions of the leaf nodes level by level. Our simulations show that the algorithm is more robust than binary-tree based methods. Index
Terms—Graph-based clustering, mixture models, network tomography, topology estimation. I.
1283 Data clustering: a review - Jain, Murty, et al. - 1999
873 Finite mixture models - McLachlan, Peel - 2000
314 Simple fast algorithms for the editing distance between trees and related problems - Zhang, Dennis - 1989
267 Unsupervised learning of finite mixture models - Figueiredo, Jain
123 A simple min-cut algorithm - Stoer, Wagner - 1997
99 A clustering algorithm based on graph connectivity - Hartuv, Shamir - 2000
91 Inference of multicast routing trees and bottleneck bandwidths using end-to-end measurements - Ratnasamy, McCanne - 1999
72 Maximum likelihood network topology identification from edge-based unicast measurements - Coates, Castro, et al. - 2002
60 Multicast topology inference from measured end-to-end loss - Duffield, Horowitz, et al. - 2002
49 Linear time algorithms for finding a sparse k-connected spanning subgraph of a k-connected graph”, Algorithmica 7 - Nagamochi, Ibaraki - 1992
44 Inference and Labeling of Metric-Induced Network Topologies - Bestavros, Byers, et al. - 2005
41 Unsupervised Learning Using MML - Oliver, Baxter, et al. - 1996
39 Multiple source, multiple destination network tomography - Rabbat, Nowak, et al. - 2004
27 Rissanen: Intertwining themes in theories of model order estimation - Lanterman, “Schwarz - 2000
21 Multicast topology inference from end-to-end measurements - Duffield, Horowitz, et al. - 2000
14 Likelihood based hierarchical clustering - Castro, Coates, et al. - 2004
14 Network tomography from measured end-to-end delay covariance - Duffield, Presti - 2004
8 Network topology discovery using finite mixture models - Shih, Hero - 2004
7 Internet Tomography: Recent Development - Castro, Coates, et al. - 2003
6 Communities in graphs - Brinkmeier - 2003
2 Topology discovery on unicast networks: A hierarchical approach based on end-to-end measurements,” CSPL - Shih, Hero
1 Adaptive multicast topology inference,” presented at the - Duffield, Horowitz, et al. - 2001
1 Hero III, “Unicast-based inference of network link delay distributions with finite mixture models - Shih, O - 2003
1 Unicast Internet tomography - Shih - 2005
|
{"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.158.535","timestamp":"2014-04-20T17:33:37Z","content_type":null,"content_length":"25831","record_id":"<urn:uuid:84612c30-5211-443b-819f-caf9e0b59e98>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
|
finding the area of the shaded part.
October 19th 2012, 11:56 AM #1
finding the area of the shaded part.
there some parts of the solution on the book where i found this problem.
Area of shaded region = A sector + [ A triangle - 2 (A sector 2)]
= 5/6 A circle + [ (s^2 * sqrt of 3)/4 - 2(1/6 A of circle)]
... i understand the 5/6 A circle and the 2(1/6 Area of circle, but i dont understand the (s^2*sqrt of 3)/4 for the Area of triangle.... how was that being done? please help me understand this.
Thank you.
Re: finding the area of the shaded part.
I am not looking at what extra info you provide but try this
area 8 radius circle +equilateral triangle side lenght 8 - 3 times the segment of a sector of 8 radius where sector is 1/6 of circle
Re: finding the area of the shaded part.
The area $A$ of an equilateral triangle having sides $s$ may be computed as follows:
$A=\frac{1}{2}\cdot s\cdot s\cdot\sin(60^{\circ})=\frac{1}{2}s^2\cdot\frac{ \sqrt{3}}{2}=\frac{s^2\sqrt{3}}{4}$
Re: finding the area of the shaded part.
The area of a circle of radius 8 is, of course $64\pi$. Every angle of an equilateral triangle has measure 60 degrees which is 60/360= 1/6 of the entire circle. The portion of the circle outside
the triangle has area $(5/6)(64\pi)$. If you draw a horizontal line where the triangle cuts the circle, you have an equilateral triangle with side length 8 and can use the formulas given above.
The area of the trapezoid below that has bases 8 and 16 and height $4\sqrt{3}$ and the the area of such a trapezoid is (h/2)(b1+ b2). Now you have to remove unshaded area in that trapezoid which
is again the area of 1/6 of a circle of radius 8.
Re: finding the area of the shaded part.
thank you guys
October 19th 2012, 12:57 PM #2
Super Member
Nov 2007
Trumbull Ct
October 19th 2012, 12:59 PM #3
October 19th 2012, 01:12 PM #4
MHF Contributor
Apr 2005
October 20th 2012, 06:56 PM #5
|
{"url":"http://mathhelpforum.com/geometry/205678-finding-area-shaded-part.html","timestamp":"2014-04-16T11:28:08Z","content_type":null,"content_length":"44235","record_id":"<urn:uuid:f92bbbc5-6b9a-4937-bc7f-ce4bc4885b05>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to be sure that a serie isn't a fourier series of a derivable function
May 23rd 2011, 07:53 AM #1
May 2010
How to be sure that a serie isn't a fourier series of a derivable function
Hi there. I have this interesting problem which I don't know how to solve. I'll post it here because I think more people will se it, but I'm not sure if this is the proper subforum.
The problem says: How can be sure that $\sum_{n = 1}^\infty \frac{1}{n}\sin (nx)$ isn't the Fourier series of a derivable function?
I thought that it doesn't accomplish the Diritchlet postulates, but it actually doesn't mean that it isn't a fourier series.
Does anyone know how to solve this?
Bye there and thanks.
If $f$ is the function which is represented by this series and derivable, use an integration by parts to compute the Fourier coefficients of $f'$. Use Parseval equality to show a contradiction.
May 23rd 2011, 08:37 AM #2
May 23rd 2011, 08:46 AM #3
|
{"url":"http://mathhelpforum.com/calculus/181397-how-sure-serie-isn-t-fourier-series-derivable-function.html","timestamp":"2014-04-16T05:02:49Z","content_type":null,"content_length":"37576","record_id":"<urn:uuid:987d3d78-81b7-482e-a36f-877a3f036940>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
|
pan xhorn
432 followers|96,970 views
Pascal's triangle and the number ePascal's triangle
is a triangular array of integers, the first few rows of which are shown in the picture. Apart from the flanking strips of 1s, each entry is the sum of the two entries above it. There are many well
known patterns among the entries of Pascal's triangle. Two of these patterns are (a) that the sum of the entries in the n-th row is equal to 2^n, and (b) that the alternating sum of entries in the
n-th row is equal to 0. For example, if n = 4, this means (a) that 1+4+6+4+1 = 2^4 = 16, and (b) that 1-4+6-4+1 = 0.
A less studied sequence associated with Pascal's triangle is the
of the entries in each row. This sequence, which appears as A001142 in
The On-Line Encyclopedia of Integer Sequences
) begins
1, 1, 2, 9, 96, 2500, 162000, 26471025, ...
Let us denote the entries of this sequence by p(1), p(2), p(3), and so on. The remarkable property of the sequence, illustrated in the picture, is that as n becomes large, the value of p(n-1)p(n+1)/
(p(n)^2) converges, albeit slowly, to the number e. The number e, which is approximately 2.718281828459045, is a famous irrational number, and is important because it is the base of the natural
This post is based on a post by
+Richard Elwes
, which you can find here (
). The result itself is due to
+Harlan Brothers
+Alexander Bogomolny
has written up a nice proof of the result here (
Might be handy for some of you: iptables mitigation (using u32 matching) for
; logs then drops all TLS heartbeat handshakes.
# Log rules
iptables -t filter -A INPUT -p tcp --dport 443 -m u32 --u32 "52=0x18030000:0x1803FFFF" -j LOG --log-prefix "BLOCKED: HEARTBEAT"
# Block rules
iptables -t filter -A INPUT -p tcp --dport 443 -m u32 --u32 "52=0x18030000:0x1803FFFF" -j DROP
commented on a video on YouTube.
Shared publicly -
死理性派,linux, emacs user,python and qt programmer,learning functional programming
• 四川大学
2003 - 2013
• 中国石油大学
2009 - 2012
Other names
xhorn, pronghorn, 南风
|
{"url":"https://plus.google.com/+panxhorn","timestamp":"2014-04-17T20:36:13Z","content_type":null,"content_length":"249769","record_id":"<urn:uuid:5014c301-6894-4e89-bf12-c9fdd8d68162>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hawthorne, NJ SAT Math Tutor
Find a Hawthorne, NJ SAT Math Tutor
...Spanish is completely different-everyone who majors in Hispanic Studies is very passionate about all things Hispanic-the culture, the language, the dancing, the food, etc. I find that I share
this passion, and it's refreshing to be around people that do more than just crunch numbers all day! I ...
11 Subjects: including SAT math, Spanish, statistics, geometry
...Hey, what else do you need to know? Well... maybe you'd like to know that I have been freelance tutoring for over ten years, and that I specialize in SAT and ACT math and science sections?
That means I can help with both the content and the strategy of those tests.
17 Subjects: including SAT math, calculus, geometry, biology
...This ability enables me to prepare students for their SAT, ACT Math tests and help them understand Algebra, Geometry, and basic math.I started playing chess in high-school and was president of
the chess club and class champion. I have continued to play for fun, relaxation and mental stimulation. I both enjoy and love the game.
17 Subjects: including SAT math, geometry, GRE, ASVAB
I recently completed a Master's degree in Education at the Concordia University in Curriculum and Instruction. I have over 10 years experience in tutoring, and I am a mentor to a lovely group of
youth aged 4 through 19. I have experience in tutoring Mathematics, Science, and physics.
21 Subjects: including SAT math, reading, English, geometry
...I started tutoring when I was in High School and have been tutoring ever since. I love helping students achieve their goals. With 15 years of experience, I understand that everyone learns
differently and I try to find the best way with each individual student to make that breakthrough.
12 Subjects: including SAT math, geometry, algebra 1, ACT Math
Related Hawthorne, NJ Tutors
Hawthorne, NJ Accounting Tutors
Hawthorne, NJ ACT Tutors
Hawthorne, NJ Algebra Tutors
Hawthorne, NJ Algebra 2 Tutors
Hawthorne, NJ Calculus Tutors
Hawthorne, NJ Geometry Tutors
Hawthorne, NJ Math Tutors
Hawthorne, NJ Prealgebra Tutors
Hawthorne, NJ Precalculus Tutors
Hawthorne, NJ SAT Tutors
Hawthorne, NJ SAT Math Tutors
Hawthorne, NJ Science Tutors
Hawthorne, NJ Statistics Tutors
Hawthorne, NJ Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Hawthorne_NJ_SAT_Math_tutors.php","timestamp":"2014-04-20T20:56:58Z","content_type":null,"content_length":"24102","record_id":"<urn:uuid:010822e2-41d7-4d77-a6f2-112a87cdbd07>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Roman abacus
From Wikipedia, the free encyclopedia
The Romans developed the Roman hand abacus, a portable, but less capable, base-10 version of the previous Babylonian abacus. It was the first portable calculating device for engineers, merchants and
presumably tax collectors. It greatly reduced the time needed to perform the basic operations of arithmetic using Roman numerals.
As Karl Menninger says on page 315 of his book,^1 "For more extensive and complicated calculations, such as those involved in Roman land surveys, there was, in addition to the hand abacus, a true
reckoning board with unattached counters or pebbles. The Etruscan cameo and the Greek predecessors, such as the Salamis Tablet and the Darius Vase, give us a good idea of what it must have been like,
although no actual specimens of the true Roman counting board are known to be extant. But language, the most reliable and conservative guardian of a past culture, has come to our rescue once more.
Above all, it has preserved the fact of the unattached counters so faithfully that we can discern this more clearly than if we possessed an actual counting board. What the Greeks called psephoi, the
Romans called calculi. The Latin word calx means 'pebble' or 'gravel stone'; calculi are thus little stones (used as counters)."
Both the Roman abacus and the Chinese suanpan have been used since ancient times. With one bead above and four below the bar, the systematic configuration of the Roman abacus is coincident to the
modern Japanese soroban, although the soroban is historically derived from the suanpan.
The Late Roman hand abacus shown here as a reconstruction contains seven longer and seven shorter grooves used for whole number counting, the former having up to four beads in each, and the latter
having just one. The rightmost two grooves were for fractional counting. The abacus was made of a metal plate where the beads ran in slots. The size was such that it could fit in a modern shirt
| | | | | | | | | | | | | | | |
| | | | | | | | | | | | | | | |
|O| |O| |O| |O| |O| |O| |O| |O|
MM CM XM M C X I Ө Ɛ
--- --- --- --- --- --- --- --- ---
| | | | | | | | | | | | | | | | | |
| | | | | | | | | | | | | | | | | | Ɔ
|O| |O| |O| |O| |O| |O| |O| |O| | |
|O| |O| |O| |O| |O| |O| |O| |O| | |
|O| |O| |O| |O| |O| |O| |O| |O| | |
|O| |O| |O| |O| |O| |O| |O| |O| |O| 2
|O| |O|
The diagram is based on the Roman hand abacus at the London Science Museum.
The lower groove marked I indicates units, X tens, and so on up to millions. The beads in the upper shorter grooves denote fives—five units, five tens, etc., essentially in a bi-quinary coded decimal
place value system.
Computations are made by means of beads which would probably have been slid up and down the grooves to indicate the value of each column.
The upper slots contained a single bead while the lower slots contained four beads, the only exceptions being the two rightmost columns, column 2 marked Ө and column 3 with three symbols down the
side of a single slot or beside three separate slots with Ɛ, 3 or S or a symbol like the £ sign but without the horizontal bar beside the top slot, a backwards C beside the middle slot and a 2 symbol
beside the bottom slot, depending on the example abacus and the source which could be Friedlein,^2 Menninger^1 or Ifrah.^3 These latter two slots are for mixed-base math, a development unique to the
Roman hand abacus^4 described in following sections.
The longer slot with five beads below the Ө position allowed for the counting of 1/12 of a whole unit called an uncia (from which the English words inch and ounce are derived), making the abacus
useful for Roman measures and Roman currency. The first column was either a single slot with 4 beads or 3 slots with one, one and two beads respectively top to bottom. In either case, three symbols
were included beside the single slot version or one symbol per slot for the three slot version. Many measures were aggregated by twelfths. Thus the Roman pound ('libra'), consisted of 12 ounces (
unciae) (1 uncia = 28 grams). A measure of volume, congius, consisted of 12 heminae (1 hemina = 0.273 litres). The Roman foot (pes), was 12 inches (unciae) (1 uncia = 2.43 cm). The actus, the
standard furrow length when plowing, was 120 pedes. There were however other measures in common use - for example the sextarius was two heminae.
The as, the principal copper coin in Roman currency, was also divided into 12 unciae. Again, the abacus was ideally suited for counting currency.
Symbols and usage
The first column was arranged either as a single slot with three different symbols or as three separate slots with one, one and two beads or counters respectively and a distinct symbol for each slot.
It is most likely that the rightmost slot or slots were used to enumerate fractions of an uncia and these were, from top to bottom, 1/2 s, 1/4 s and 1/12 s of an uncia. The upper character in this
slot (or the top slot where the rightmost column is three separate slots) is the character most closely resembling that used to denote a semuncia or 1/24. The name semuncia denotes 1/2 of an uncia or
1/24 of the base unit, the As. Likewise, the next character is that used to indicate a sicilicus or 1/48 of an As, which is 1/4 of an uncia. These two characters are to be found in the table of Roman
fractions on page 75 of Graham Flegg's^5 book. Finally, the last or lower character is most similar but not identical to the character in Flegg's table to denote 1/144 of an As, the dimidio sextula,
which is the same as 1/12 of an uncia.
This is however even more strongly supported by Gottfried Friedlein^2 in the table at the end of the book which summarizes the use of a very extensive set of alternative formats for different values
including that of fractions. In the entry in this table numbered 14 referring back to (Zu) 48, he lists different symbols for the semuncia (^1/[24]), the sicilicus (^1/[48]), the sextula (^1/[72]),
the dimidia sextula (^1/[144]), and the scriptulum (^1/[288]). Of prime importance, he specifically notes the formats of the semuncia, sicilicus and sextula as used on the Roman bronze abacus, "auf
dem chernan abacus". The semuncia is the symbol resembling a capital "S", but he also includes the symbol that resembles a numeral three with horizontal line at the top, the whole rotated 180
degrees. It is these two symbols that appear on samples of abacus in different museums. The symbol for the sicilicus is that found on the abacus and resembles a large right single quotation mark
spanning the entire line height.
The most important symbol is that for the sextula, which resembles very closely a cursive digit 2. Now, as stated by Friedlein, this symbol indicates the value of ^1/[72] of an As. However, he stated
specifically in the penultimate sentence of section 32 on page 23, the two beads in the bottom slot each have a value of ^1/[72]. This would allow this slot to represent only ^1/[72] (i.e. ^1/[6] × ^
1/[12] with one bead) or ^1/[36] (i.e. ^2/[6] × ^1/[12] = ^1/[3] × ^1/[12] with two beads) of an uncia respectively. This contradicts all existing documents that state this lower slot was used to
count thirds of an uncia (i.e. ^1/[3] and ^2/[3] × ^1/[12] of an As.
This results in two opposing interpretations of this slot, that of Friedlein and that of many other experts such as Ifrah,^3 and Menninger^1 who propose the one and two thirds usage. There is however
a third possibility.
If this symbol refers to the total value of the slot (i.e. ^1/[72] of an as), then each of the two counters can only have a value of half this or ^1/[144] of an as or ^1/[12] of an uncia. This then
suggests that these two counters did in fact count twelfths of an uncia and not thirds of an uncia. Likewise, for the top and upper middle, the symbols for the semuncia and sicilicus could also
indicate the value of the slot itself and since there is only one bead in each, would be the value of the bead also. This would allow the symbols for all three of these slots to represent the slot
value without involving any contradictions.
A further argument which suggests the lower slot represents twelfths rather than thirds of an uncia is best described by the figure below. The diagram below assumes for ease that one is using
fractions of an uncia as a unit value equal to one (1). If the beads in the lower slot of column I represent thirds, then the beads in the three slots for fractions of ^1/[12] of an uncia cannot show
all values from ^1/[12] of an uncia to ^11/[12] of an uncia. In particular, it would not be possible to represent ^1/[12], ^2/[12] and ^5/[12]. Furthermore, this arrangement would allow for seemingly
unnecessary values of ^13/[12], ^14/[12] and ^17/[12]. Even more significantas as stated by this author, a graduate of mathematics (OPEN University), it is logically impossible for there to be a
rational progression of arrangements of the beads in step with unit increasing values of twelfths. Likewise, if each of the beads in the lower slot is assumed to have a value of ^1/[6] of an uncia,
there is again an irregular series of values available to the user, no possible value of ^1/[12] and an extraneous value of ^13/[12]. It is only by employing a value of ^1/[12] for each of the beads
in the lower slot that all values of twelfths from ^1/[12] to ^11/[12] can be represented and in a logical ternary, binary, binary progression for the slots from bottom to top. This can be best
appreciated by reference to the figure below.
It can be argued that the beads in this first column could have been used as originally believed and widely stated, i.e. as ½, ¼ and ⅓ and ⅔, completely independently of each other. However this is
more difficult to support in the case where this first column is a single slot with the three inscribed symbols. To complete the known possibilities, in one example found by this author, the first
and second columns were transposed. It would not be unremarkable if the makers of these instruments produced output with minor differences, since the vast number of variations in modern calculators
provide a compelling example.
What can be deduced from these Roman abacuses, is the undeniable proof that Romans were using a device that exhibited a decimal, place-value system, and the inferred knowledge of a zero value as
represented by a column with no beads in a counted position. Furthermore, the biquinary nature of the integer portion allowed for direct transcription from and to the written Roman numerals. No
matter what the true usage was, what cannot be denied by the very format of the abacus is that if not yet proven, these instruments provide very strong arguments in favour of far greater facility
with practical mathematics known and practised by the Romans in this authors view.
The reconstruction of a Roman hand abacus in the Cabinet des Médailles, Bibliothèque nationale, supports this. The replica Roman hand abacus at Abacus-Online-Museum of Jörn Lütjens, shown alone here
Replica Roman Hand Abacus, provides even more evidence.
Inference of zero and negative numbers
When using a counting board or abacus the rows or columns often represent nothing, or zero. Since the Romans used Roman numerals to record results, and since Roman numerals were all positive, there
was no need for a zero notation. But the Romans clearly knew the concept of zero occurring in any place value, row or column.
It may be also possible to infer that they were familiar with the concept of a negative number as Roman merchants needed to understand and manipulate liabilities against assets and loans versus
Further reading
• Stephenson, Stephen K. (July 7, 2010), Ancient Computers, IEEE Global History Network, retrieved 2011-07-02
• Stephenson, Stephen K. (2011), Ancient Computers, Part I - Rediscovery, Amazon.com, ASIN B004RH3J7S
External links
|
{"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Roman_abacus","timestamp":"2014-04-20T01:55:16Z","content_type":null,"content_length":"83714","record_id":"<urn:uuid:007299dd-25fd-4d1a-89b5-7d7a42f42b81>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Download the GUT-CQM
Please note: the book is now in DjVu format. To view it, download the
DjVu browser plug-in.
Entire Book (37.6 MB) - In this July 2010 edition, the Introduction and Chapters 1, 2, 5, 6, 11, 13, 34 and Appendix I have been updated.
Volume 1: Atomic Physics (7.23 MB) - Chapter 1 Updated 08/02/10, Chapter 2 Updated 09/20/10
Classical Physics (CP) model of the structure of the electron and the photon used to solve atoms and their states and the subsequent closed-form solutions of the fundamental experiments of atomic
Volume 2: Molecular Physics (23.3 MB) - Chapter 13 Updated 08/27/10
The solution of the 26 parameters of hydrogen molecular ions and molecules from two basic equations, one to calculate geometric parameters and the other to calculate energies, and the extension of
these results to solve the majority of the important functional groups of chemistry that serve as building blocks to give the exact solutions of the majority of possible molecules and compositions of
Volume 3: Collective Phenomena, High-Energy Physics, & Cosmology (7.01 MB) (Includes Appendices - Appendix I updated 10/13/10, Chapter 34 updated 08/19/10- Absolute space is confirmed experimentally
by the absence of time dilation in redshifted quasars.)
Collective phenomena such as the basis of the statistical thermodynamic relationships and superconductivity, the basic forces and structure of matter on the nuclear scale and the cosmological
ramifications of CP such as the identity of absolute space that unifies all frames of reference, solves the nature of the gravitational and inertial masses and their equivalence, gives the derivation
of Newton's second law, and solves the origin of gravity, the families and masses of fundamental particles, and large-scale features and dynamics of the universe including the prediction of the
current acceleration of the cosmic expansion. The central enigmas of quantum mechanics mainly regarding the wave-particle duality are also resolved classically.
|
{"url":"http://www.millsian.com/bookdownload.shtml","timestamp":"2014-04-16T10:12:57Z","content_type":null,"content_length":"14441","record_id":"<urn:uuid:23eb1ca6-5dc2-4d42-bcc1-d451ecf127ec>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Representation of Spinors by Two-Dimensional Complex Rotations
This Demonstration shows rotation in two complex dimensions and how this can represent a spinor. The matrix on the left shows the action of a spinor transform of the given angle on a unit vector.
Remember that this is a unit vector in 2D complex space, so it has two complex components that can be represented by a two-row vector. The two circles on the left are graphical representations of
these complex components. The circle at the far right shows the length of each complex component as one of the projections of a unit vector in the 2D real plane. This circle represents the
constraints on the values of the complex components. Each component may have a different value, but the sum of the squares of their two lengths must equal 1.
You can change the overall angle of the system, which changes the lengths of both components but preserves the constraint. You can also change the length of each component independently, but because
of the constraint, the other will also change. Finally, you can change the phase of each component, which does not change the length of either.
To see the spinorial qualities of the system, set the angle and phases to zero. Note the blue complex number and constraint are pointing to the right. Now slowly increase the angle and watch the
interplay between the complex numbers. When you get to 360°, note that the blue complex number and constraint are now pointing left. The system is in the opposite state. You must move the angle to
720° to return it to its original state.
The directions of the complex numbers and constraint have no simple relation to real directions in space. Just as every point in space can have a temperature, a single quantity, there are some
qualities that a point in space can have that require a more complex description than a single number. A quantum wave function (a Pauli 2-spinor) is like this, where every point in space can be
represented by two complex numbers under the above constraint. These directions are represented in an abstract four-dimensional space.
For a 2D real rotation on a unit vector, there is a similar constraint. The squares of the lengths of the projections of the vector onto the and axes must sum to 1. The 2D complex rotation has the
same constraint except the projections are now the lengths of the two underlying complex components. If either of these lengths change, the other must also to keep the constraint valid. These two
complex numbers can also change independently via a change of phase, which does not change their length. This phase change is what gives 2D complex rotation more degrees of freedom than a 2D real
rotation. It can exhibit behavior similar to a 2D real rotation, but it can also change its underlying components in more ways than a real rotation.
B. A. Schumm,
Deep Down Things: The Breathtaking Beauty of Particle Physics
, Baltimore, MD: Johns Hopkins University Press, 2004.
G. Arfken,
Mathematical Methods for Physicists
, 3rd ed., Orlando, FL: Academic Press, 1985.
|
{"url":"http://demonstrations.wolfram.com/RepresentationOfSpinorsByTwoDimensionalComplexRotations/","timestamp":"2014-04-17T06:46:13Z","content_type":null,"content_length":"44604","record_id":"<urn:uuid:13a9f86f-fb80-4abe-a72c-e18a0becbd48>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Radicals and Absolute Values - Problem 2
Simplifying a higher degree square root with absolute values. The first problem we have is a 4th root and we are trying to simplify this down. So I’m going to do this all in one big step and walk you
through what I’m thinking as I go through it.
So let’s go start with the 4th root of 32 so I know that 32 is a multiple of 2 so I just want to figure out how many 2’s go into it. So we have 2, 4, 8, 16 and then 32 makes 5. So we have 5 twos in
32, we need 4 in order to just take it out to the 4th root. So we can take out 1, 2 and we are left out with 1,2 left behind okay I’m going to make sure I throw my little 4 up there as well.
So moving down the road, 4th root of x to the 5th. X to the 5th is very similar to this which is with 2 the 5th, so we can take out a single x and we are left with a single x left behind. This one
now we don’t know if x is positive or negative. So what we have to do is put around our absolute value making sure that this x is positive. We have an even route everything that comes out of it has
to be positive.
4th root of y to the 10th. Y² to the 4th is y to the 8th which is close to y of 10ths so what we actually put out here is a y² and we are left with a y² on the inside. We don’t need an absolute value
in this case because y² is always going to be positive, so no absolute value needed
Lastly is z to the 15th. Z³ to the 4th is z to the 12th with 3 left over, so we know that this is z³ leaving us with another z³. Z³ has to be positive, it's coming out of an even route in order to
make sure it’s positive we have to throw around our absolute values.
So in this sense since we need absolute values around the x and the z because they are both odd powers coming out of an even route the y² is fine because it’s always going to be positive.
So whenever you are dealing with even routes no matter what degree it is 2, 4, 6, 8 so on and so forth, you're always going to have to think about what needs an absolute value when you take it up.
absolute value square roots even root odd power
|
{"url":"https://www.brightstorm.com/math/algebra-2/roots-and-radicals/radicals-and-absolute-values-problem-2/","timestamp":"2014-04-21T10:01:00Z","content_type":null,"content_length":"67513","record_id":"<urn:uuid:674e3a9f-98af-4ba8-8271-d9896e838633>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculating steady-state values in series RC circuit
1. The problem statement, all variables and given/known data
Calculate the steady state values of the current, voltages V1 and V2 (defined by a period of time greater than five time constants). (Circuit in ASCII below).
+ V1 -
_________Res (1.2 kΩ)___________
| I => |
| | +
| + |
E 10V Cap (100 μF) V2
| - |
| | -
2. Relevant equations
τ=RC, E=IR, V= Vsource(1-e^-t/RC)
3. The attempt at a solution
I'm a novice at electrical/electronics engineering, but I'm eager to learn all I can. Correct me if I'm wrong with the following:
I determined the time constant of this circuit is .12 seconds. As I understand correctly, it takes approximately five time constants for a capacitor to charge up to its steady-state voltage, which in
this case equates to .6 seconds.
The current after it crosses the 1.2 kΩ resistor is calculated as 8.3 mA.
I'm very much stuck at this point... any advice?
|
{"url":"http://www.physicsforums.com/showthread.php?t=551334","timestamp":"2014-04-17T12:31:50Z","content_type":null,"content_length":"25661","record_id":"<urn:uuid:31724380-ef13-40cd-839d-b4c0eec9f07c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
find the lim as theta approaches zero from the right of s/d where s is the arc of a circle, theta is the angle, and d is the chord line connecting where the angle subtends s...I believe I noted that
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
I did it by using L'hopitals rule... anyone can do it without?
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4fd6bfb0e4b04bec7f17dc9f","timestamp":"2014-04-20T08:16:39Z","content_type":null,"content_length":"31087","record_id":"<urn:uuid:009ef9f8-f4a6-4b10-872f-f5cce92e5175>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This entry is about coproducts coinciding with products. For the notion of biproduct in the sense of bicategory theory see at 2-limit. See at bilimit for general disambiguation.
Additive and abelian categories
Category theory
Universal constructions
Limits and colimits
A biproduct in a category $\mathcal{C}$ is an operation that is both a product and a coproduct, in a compatible way. Morphisms between finite biproducts are encoded in a matrix calculus.
Finite biproducts are best known from additive categories. A category which has biproducts but is not necessarily enriched in Ab, hence not necessatily additive, is called a semiadditive category.
Let $\mathcal{C}$ be a category with zero morphisms; that is, $C$ is enriched over pointed sets (for example, $C$ might have a zero object). For $c_1, c_2$ two objects in $C$, suppose a product $c_1
\times c_2$ and a coproduct $c_1 \sqcup c_2$ both exist.
$r_{c_1,c_2} : c_1 \sqcup c_2 \to c_1 \times c_2$
for the morphism which is uniquely defined (via the universal property of coproduct and product) by the condition that
$\left( c_i \to c_1 \sqcup c_2 \stackrel{r}{\to} c_1 \times c_2 \to c_j \right) = \left\{ \array{ Id_{c_i} & if \; i = j \\ 0_{i,j} & if \; i eq j } \right. \,$
where the last and first morphisms are the projections and co-projections, respectively, and where $0_{i,j}$ is the zero morphism from $c_i$ to $c_j$.
If the morphism $r_{c_1,c_2}$ in def. 1, is an isomorphism, then the isomorphic objects $c_1 \times c_2$ and $c_1 \sqcup c_2$ are called the biproduct of $c_1$ and $c_2$. This object is often denoted
$c_1 \oplus c_2$, alluding to the direct sum (which is often an example).
If $r_{c_1,c_2}$ is an isomorphism for all objects $c_1, c_2 \in \mathcal{C}$ and hence a natural isomorphism
$r \;\colon\; (-)\coprod (-) \stackrel{\simeq}{\longrightarrow} (-) \times (-)$
then $\mathcal{C}$ is called a semiadditive category.
Semiadditive categories
A category $C$ with all finite biproducts is called a semiadditive category. More precisely, this means that $C$ has all finite products and coproducts, that the unique map $0\to 1$ is an isomorphism
(hence $C$ has a zero object), and that the canonical maps $c_1 \sqcup c_2 \to c_1 \times c_2$ defined above are isomorphisms.
Amusingly, for $C$ to be semiadditive, it actually suffices to assume that $C$ has finite products and coproducts and that there exists any natural family of isomorphisms $c_1 \sqcup c_2 \cong c_1 \
times c_2$ — not necessarily the canonical maps constructed above. A proof can be found in (Lack 09).
An additive category, although normally defined through the theory of enriched categories, may also be understood as a semiadditive category with an extra property, as explained below at Properties –
Biproducts imply enrichment.
Semiadditivity as structure/property
Given a category $\mathcal{C}$ with zero morphism, one may imagine equipping it with the structure of a chosen natural isomorphism
$(-)\coprod (-) \stackrel{\simeq}{\longrightarrow} (-)\times(-) \,.$
Biproducts imply enrichment – Relation to additive categories
A semiadditive category is automatically enriched over the monoidal category of commutative monoids with the usual tensor product, as follows.
Given two morphisms $f, g: a \to b$ in $C$, let their sum $f + g: a \to b$ be
$a \to a \times a \cong a \oplus a \overset{f \oplus g}{\to} b \oplus b \cong b \sqcup b \to b .$
One proves that $+$ is associative and commutative. Of course, the zero morphism $0: a \to b$ is the usual zero morphism given by the zero object:
$a \to 1 \cong 0 \to b .$
One proves that $0$ is the neutral element for $+$ and that this matches the $0$ morphism that we began with in the definition. Note that in addition to a zero object, this construction actually only
requires biproducts of an object with itself, i.e. biproducts of the form $a\oplus a$ rather than the more general $a\oplus b$.
If additionally every morphism $f: a \to b$ has an inverse $-f: a \to b$, then $C$ is enriched over the category $Ab$ of abelian groups and is therefore (precisely) an additive category.
If, on the other hand, the addition of morphisms is idempotent ($f+f=f$), then $C$ is enriched over the category $SLat$ of semilattices (and is therefore a kind of 2-poset).
Biproducts as enriched Cauchy colimits
Conversely, if $C$ is already known to be enriched over abelian monoids, then a binary biproduct may be defined purely diagrammatically as an object $c_1\oplus c_2$ together with injections $n_i:c_i\
to c_1\oplus c_2$ and projections $p_i:c_1\oplus c_2 \to c_i$ such that $p_j n_i = \delta_{i j}$ (the Kronecker delta) and $n_1 p_1 + n_2 p_2 = 1_{c_1\oplus c_2}$. It is easy to check that makes $c_1
\oplus c_2$ a biproduct, and that any binary biproduct must be of this form. Similarly, an object $z$ of such a category is a zero object precisely when $1_z= 0_z$, its identity is equal to the zero
morphism. It follows that functors enriched over abelian monoids must automatically preserve finite biproducts, so that finite biproducts are a type of Cauchy colimit. Moreover, any product or
coproduct in a category enriched over abelian monoids is actually a biproduct.
For categories enriched over suplattices, this extends to all small biproducts, with the condition $n_1 p_1 + n_2 p_2 = 1_{c_1\oplus c_2}$ replaced by $\bigvee_{i} n_i p_i = 1_{\bigoplus_i c_i}$. In
particular, the category of suplattices has all small biproducts.
Biproducts from duals
The existence of dual objects tends to imply (semi)additivity; see ({Houston 06}, MO discussion).
Categories with biproducts include:
• Stephen Lack, Non-canonical isomorphisms, (arXiv:0912.2126).
• Robin Houston, Finite Products are Biproducts in a Compact Closed Category, Journal of Pure and Applied Algebra, Volume 212, Issue 2, February 2008, Pages 394-400 (arXiv:math/0604542)
A related discussion is archived at $n$Forum.
|
{"url":"http://nlab.mathforge.org/nlab/show/biproduct","timestamp":"2014-04-17T12:29:40Z","content_type":null,"content_length":"56143","record_id":"<urn:uuid:5c5cfda1-ff1c-49b5-98c4-fcd617b2f9fe>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hialeah Gardens, FL Math Tutor
Find a Hialeah Gardens, FL Math Tutor
...I have been tutoring students since high school in various subjects and continued in college. I have traveled around the world and met many people so I am able to adapt to all types of
situations and people. I would like to conduct tutoring sessions with open communications.
9 Subjects: including prealgebra, algebra 1, algebra 2, Spanish
Hello everyone, my name is Becky. I have worked as a private tutor for 5 years in a variety of subjects. I am very patient and believe in teaching by example.
30 Subjects: including algebra 1, English, ESL/ESOL, ACT Math
...In addition, I have a Master's of Science in Biomedical Science. I received my undergraduate degree in Biochemistry and the about 75% the curriculum was Chemistry based. I have tutored high
school students in all math subjects.
16 Subjects: including algebra 1, algebra 2, reading, prealgebra
...It is a view of the many discrete applications in our reality. As a professor of applied math at Concordia University, I taught math for the decision sciences. Among the courses taught were
subjects focused heavily on deterministic methods.
24 Subjects: including linear algebra, differential equations, algebra 1, algebra 2
...For those students that need some help with reading, I can help them too, in English and in Spanish. My experience as a customer service person permits me to treat the persons with the golden
rule: "Treat other people the way you you would like to be treated." Didn't you pass your class because...
14 Subjects: including calculus, chemistry, physics, trigonometry
Related Hialeah Gardens, FL Tutors
Hialeah Gardens, FL Accounting Tutors
Hialeah Gardens, FL ACT Tutors
Hialeah Gardens, FL Algebra Tutors
Hialeah Gardens, FL Algebra 2 Tutors
Hialeah Gardens, FL Calculus Tutors
Hialeah Gardens, FL Geometry Tutors
Hialeah Gardens, FL Math Tutors
Hialeah Gardens, FL Prealgebra Tutors
Hialeah Gardens, FL Precalculus Tutors
Hialeah Gardens, FL SAT Tutors
Hialeah Gardens, FL SAT Math Tutors
Hialeah Gardens, FL Science Tutors
Hialeah Gardens, FL Statistics Tutors
Hialeah Gardens, FL Trigonometry Tutors
Nearby Cities With Math Tutor
Coral Gables, FL Math Tutors
Doral, FL Math Tutors
Hialeah Math Tutors
Hialeah Lakes, FL Math Tutors
Medley, FL Math Tutors
Miami Gardens, FL Math Tutors
Miami Lakes, FL Math Tutors
Miami Shores, FL Math Tutors
Miramar, FL Math Tutors
N Miami Beach, FL Math Tutors
North Miami Beach Math Tutors
North Miami, FL Math Tutors
Opa Locka Math Tutors
Pembroke Park, FL Math Tutors
West Miami, FL Math Tutors
|
{"url":"http://www.purplemath.com/hialeah_gardens_fl_math_tutors.php","timestamp":"2014-04-19T05:20:17Z","content_type":null,"content_length":"24007","record_id":"<urn:uuid:4b3ce791-9d39-490b-b68b-6fb9c9a1b402>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
|
July 8th 2006, 07:01 PM
2 math questions:p
Which equation describes a circle?
A: -2y2+y+2x2-5x-3=0
B: -3x2+x-6y2+y=4
C: -3y2+x+y=3
Identify the conic section with the given equation 5x2-6y2-9x+2y+3=0
p.s. the twos are supposed to be raised to that power
July 8th 2006, 10:33 PM
None of those is a circle. In a general quadratic form, if the coefficients of x^2 and y^2 are equal, it's a circle; unequal but the same sign, an ellipse; one zero and one non-zero, a parabola;
of different sign, a hyperbola (rectagular if the coefficients are equal in absolute value). So you have a rectangular hyperbola, an ellipse and a parabola respectively.
July 8th 2006, 10:46 PM
Originally Posted by Lane
Identify the conic section with the given equation 5x2-6y2-9x+2y+3=0
Hello, Lane,
as rgep explained this equation describes a hyperbola.
July 9th 2006, 05:27 AM
The quadradic in form of,
$<br /> ax^2+2hxy+by^2+2gx+2fy+c=0$
can de determined by looking a the relationship between,
$h^2$ and $ab$.
In your case we have #2 which is a hyperbola.
[Note: Some cases where not considered such as the coditions for a general quadradic to represent a line or to represent a point]
|
{"url":"http://mathhelpforum.com/pre-calculus/4047-conics-print.html","timestamp":"2014-04-19T00:52:19Z","content_type":null,"content_length":"6161","record_id":"<urn:uuid:e77a96ab-2b5f-4aec-98f6-4d27fb96a6a5>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Graphical Approach
Economists graphically represent the relationship between product price and quantity demanded with a demand curve. Typically, demand curves are downwards sloping, because as price increases, buyers
are less likely to be willing or able to purchase whatever is being sold. Each individual buyer can have their own demand curve, showing how many products they are willing to purchase at any given
price, as shown below. This graph shows what Jim's demand curve for graham crackers might be:
Jim's Demand Curve for Graham Crackers
To find out how many boxes of graham crackers Jim will buy for a given price, extend a perpendicular line from the price on the y-axis to his demand curve. At the point of intersection, extend a line
from the demand curve to the x-axis (perpendicular to the x-axis). Where it intersects the x-axis (quantity) is how many boxes of graham crackers Jim will buy. For instance, in the graph above, Jim
will buy 3 boxes when the price is $2 a box.
Aggregate Demand and Horizontal Addition
Typically, economists don't look at individual demand curves, which can vary from person to person. Instead, they look at aggregate demand, the combined quantities demanded of all potential buyers.
To do this, add the quantities which buyers are willing to buy at different prices. For instance, if Jim and Marvin are the only two buyers in the market for graham crackers, we would add how many
they are willing to buy at price p=1 and record that as aggregate demand for p=1. Then we would add how many they are willing to buy at price p=2 and record that as aggregate demand for p=2, and so
on. This results in the following graph of aggregate demand for graham crackers:
Jim and Marvin's Demand Curves for Graham Crackers
Aggregate Demand Curve for Graham Crackers
This method is called horizontal addition because you look at a price level, and add the separate quantities demanded across that price level, giving you total quantity demanded for that price.
There are many factors that can affect demand quantity, including income, prices, and preferences. Let's look at one good to see how this works. How much are you willing to pay for a cold soda? If
you recently got a raise at your job, you might not mind buying a pricier soda, even if you don't need it. Your friend who has less money, however, might pick a generic brand, or they might stick
with tap water. Below are possible demand curves for you (with your big raise) and your friend (without your big raise). Note that you are willing to buy more soda than your friend is:
What if soda cost a dollar yesterday and costs two dollars today? That might make you think twice about getting the same soda you drank yesterday. Likewise, if it cost two dollars yesterday and a
dollar today, you might be more willing to buy the soda than usual. We can see this on the graph on a single demand curve. When the price is a dollar, the quantity demanded is higher than when the
price is two dollars. What this means in the real world is that if two companies charge different prices for the same good, the company that charges a lower price will get more customers. (Exceptions
to this general rule may occur when there is a real or perceived difference in quality of the goods being sold).
Changes in Demand with Changes in Price
We have been looking at how changes in price can affect buyers' decisions: when price increases, demand decreases, and vice versa. However we have been assuming that when the price changes, all else
is staying the same; this restriction allows us to use the same demand curve, with changes in demand being represented by movements up and down the same curve. This model of a buyer moving up and
down one demand curve is correct if the only thing that is changing is the price of the good. If preferences or income change, however, the demand curve can actually shift.
For example, let's say that Conan's initial demand curve for concert tickets looks like curve 1. If Conan gets a new job, with a permanently higher income, however, his demand curve will shift
outwards, to curve 2. Why is this? Conan realizes that he has more money, and that, as long as he doesn't lose his new job, he will always have more money. That means that he can buy more of what he
likes, and he will have a higher demand curve for all normal goods.
Note that for any price level, Conan's demand is now higher than it was before the demand shift. This can also occur with a change in buyer preferences. If Conan suddenly decides that he wants to
collect jazz CDs, and he now likes jazz CDs much more than he did before, his demand curve will shift outwards, reflecting his new appreciation of jazz, and his willingness to pay more for the same
CDs, since they have become more valuable in his eyes. Shifts in demand curves are caused by changes in income (which make the goods seem more or less expensive) or changes in preferences (which make
the goods seem more or less valuable).
The Algebraic Approach
It is also possible to model demand using equations, known as demand equations or demand functions. While these equations can be very complex, for now we will use simple algebraic equations. We have
been showing demand as straight, downward-sloping lines, which can easily be translated into mathematical equations, and vice versa. Just as the graphs provide a visual guide to consumer behavior,
demand functions provide a numerical guide to consumer behavior. For example, if Sean's demand curve for T-shirts looks like this:
Figure %: Sean's Demand Curve for T-Shirts
The corresponding equation that describes Sean's demand for T-shirts is simply the equation for the line on the graph, or:
Q = 25 - 2P
If we want to see how much Sean will buy if the price is 10, we plug 10 in for P and solve for Q. In this case, [25 - 2(10)] = 5 T-shirts. When we want to find aggregate demand using the algebraic
approach instead of the graphical approach, we just add the demand equations together. So, if we're adding Sean's demand for T-shirts to Noah's demand for T-shirts, it looks like this:
Figure %: Aggregate Demand
If price for T-shirts is still equal to 10, we find out that together, Sean and Noah will buy
[65 - 5(10)] = 15 T-shirts.
One caveat in this method is that you can only add the equations together when both will result in positive demand. For example, if the price of a T-shirt is $13, Sean would supposedly want to buy
[25 - 2(13)] = -1 T-shirts. Obviously that is impossible, and Sean will buy 0 T-shirts. But because Sean's demand equation would yield the answer 1, adding the demand equations together would result
in a wrong answer. When using this method, always check to make sure that there will be no negative demand for the given price before adding equations together. To find how many T-shirts Sean and
Noah would buy in this case, you would only look at Noah's demand,
[40 - 3(13)] = 1 T-shirt.
|
{"url":"http://www.sparknotes.com/economics/micro/supplydemand/demand/section1.rhtml","timestamp":"2014-04-16T19:31:43Z","content_type":null,"content_length":"59325","record_id":"<urn:uuid:b011fb27-e51d-467e-859f-698e6fda9da1>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Does a power series converging everywhere on its circle of convergence define a continuous function?
up vote 45 down vote favorite
Consider a complex power series $\sum a_n z^n \in \mathbb C[[z]]$ with radius of convergence $0\lt r\lt\infty$ and suppose that for every $w$ with $\mid w\mid =r$ the series $\sum a_n w^n $ converges
We thus obtain a complex-valued function $f$ defined on the closed disk $\mid z\mid \leq r$ by the formula $f(z)=\sum a_n z^n$.
My question: is $f$ continuous ?
This is a naïve question which looks like it should be answered in any book on complex analysis.
But I checked quite a few books, among which the great treatises : Behnke-Sommer, Berenstein-Gay, Knopp, Krantz, Lang, Remmert, Rudin, Stein-Shakarchi, Titchmarsh, ... .
I couldn't find the answer, and yet I feel confident that it was known in the beginning of the twentieth century.
Many thanks to Julien who has answered my question: Sierpinski proved (in 1916) that there exists such a power series $\sum a_n z^n $ with radius of convergence $r=1$ and associated function $f(z)=\
sum a_n z^n $ not bounded on the closed unit disk and thus certainly not continuous.
It is strange that not a single book on complex functions seems to ever have mentioned this example.
On the negative side, I must confess that I don't understand Sierpinski's article at all!
He airdrops a very complicated, weird-looking power series and proves that it has the required properties in a sequence of elementary but completely obscure computations.
I would be very grateful to anybody who would write a new answer with a few lines of explanation as to what Sierpinski is actually doing.
You are right, Abel's theorem only give the convergence $\lim\limits_{t \rightarrow 1-}f( \alpha t) = f(\alpha)$ for $|\alpha|=1$, not continuity on the circle. Excuse my quick fire:) –
plusepsilon.de Oct 22 '12 at 16:51
Dear @Mrc, there is nothing for me to excuse: on the contrary, I want to thank you for your so quickly trying to help. In the books I browsed there are indeed many results in the style of Abel's
theorem, but they seem not to be sufficient to answer my question. – Georges Elencwajg Oct 22 '12 at 17:02
2 Did you try a trigonometric series $\sum c_n e^{inx}$ converging to a discontinuous function ? (The problem is that a priori the sum is over $n \in \mathbf{Z}$.) – François Brunault Oct 22 '12 at
This is a very interesting question! I've also wondered about this myself while thinking about this question on M.SE : math.stackexchange.com/questions/209171/… I, too, did not find anything about
this in any complex analysis textbook I know. I find it quite impressive that Sierpinski seemed to know the answers to a very large quantity of these simple-sounding, natural, yet quite difficult
questions. – Malik Younsi Oct 23 '12 at 13:54
1 Here is a complex analysis book that mentions (yet just a note it seems) the result: An introduction to classical complex analysis, vol 1, by R. B. Burckel (Birkhäuser, 1979), see page 81. (Note:
I did not know this either, I did not even know the book; it merely turned up in a search for the title of the mentioned paper.) – quid Oct 23 '12 at 15:50
show 3 more comments
3 Answers
active oldest votes
I searched all over for an answer to this question back in my student days. I found the answer in a paper by Sierpinski, "Sur une série potentielle qui, étant convergente en tout point
de son cercle de convergence,représente sur ce cercle une fonction discontinue ", which is featured in his collected works, see here, p282) and apparently was published in 1916.
It does confirm your expectation that this was known in the beginning of the twentieth century (I don't know whether it's the first proof or not, but from the paper it's clear that
up vote 48 Sierpinski thought the result to be new).
down vote
accepted EDIT: I just realized that not everybody speaks French ;-) so, to be clear: Sierpinski produces an example where the function converges everywhere on the unit circle but is discontinuous
on the circle.
@quid: thanks for fixing the link! I thought my url was OK but it was adding %20 instead of spaces - is that just a copy/paste issue? – Julien Melleray Oct 22 '12 at 17:44
2 here is a Zentralblatt/JFM review (of course, in German :)) of the paper in question: zentralblatt-math.org/zmath/en/search/?q=an:46.1466.03 "Beispiel einer Potenzreihe, die den
Einheitskreis als Konvergenzkreis hat, überall auf demselben konvergiert und dortselbst unstetig ist in der Art, dass sie in der Umgebung des Punktes 1 unbeschränkt ist. (Prof.
Rademacher)" – Dima Pasechnik Oct 23 '12 at 6:31
Thanks a lot for your perfect answer, Julien! By the way, what made you think of this problem in your student days? Are you still interested in these questions? – Georges Elencwajg
Oct 23 '12 at 11:53
You're welcome! It was just idle curiosity that got me and a friend interested in this question (we were preparing for the "agregation" - meaning that for a few months we were going
2 over undergraduate material and trying to think about what kind of questions one could ask about it). At this time I discovered Sierpinski's work and was fascinated -I still am, in
some ways; he was an incredible mathematician) . As it turns out, my research interests are close to some of Sierpinski's (related to descriptive set theory) but nowhere near complex
analysis (though I still find it a beautiful topic). – Julien Melleray Oct 23 '12 at 12:42
1 Cher Julien: je suis heureux que les jurys n'aient jamais posé cette question à mes agrégatifs. Et que ces derniers ne me l'aient jamis posée non plus... :-) – Georges Elencwajg Oct
23 '12 at 12:56
show 7 more comments
This answer is in response to final sentence, "I would be very grateful to anybody who would write a new answer with a few lines of explanation as to what Sierpinski is actually doing". In
fact, it is easy to construct power series converging on the circle of convergence, but are unbounded. For example, $$ f(z)=\sum_{n=1}^\infty\frac1{n^5(1+in^{-3}-z)} $$ defines a function
whose power series expansion has radius of convergence 1 and converges everywhere on the unit circle, but is unbounded in a neighbourhood of 1.
A method of constructing such functions is as an infinite sum $$ f(z)=\sum_{n=1}^\infty f_n(z). $$ Here, $f_n(z)$ are chosen to have a power series expansion converging everywhere on the
closed unit ball. Let $f^{(r)}_n(z)$ denote the sum of the first $r$ terms in the power series expansion of $f_n$. We need to arrange it so that $f^{(r)}(z)\equiv\sum_nf_n^{(r)}(z)$ converges
on the closed unit ball, and that $f(z)=\lim_{r\to\infty}f^{(r)}(z)$ holds. That is, we need to be able to commute the limit $r\to\infty$ with the summation over $n$. A sufficient condition
to be able to do this is that $\sum_n\sup_r\lvert f^{(r)}_n(z)\rvert < \infty$, for all $\lvert z\rvert\le1$. That this allows us to commute the summation with the limit is just a special
case of dominated convergence.
Next, to ensure that $f(z)$ is unbounded on the unit ball, we want to choose $f_n$ such that there exists $q_n$ in the closed unit ball with $f_n(q_n)$ large, and such that it does not get
cancelled out in the summation, so that $f(q_n)$ is large and diverges as $n\to\infty$.
For example, choose positive reals $\delta_n,\epsilon_n$ tending to zero, and setting $a_n=1+i\epsilon_n$, and $$ f_n(z)=\frac{\delta_n}{a_n-z}=\sum_{m=0}^\infty \delta_na_n^{-m-1}z^m. $$
These are all well-defined as power series with radius of convergence greater than 1. Furthermore, the partial sums are $$ f^{(r)}_n(z)=\delta_n\frac{1-(z/a_n)^r}{a_n-z}, $$ which are bounded
up vote by $2\delta_n/\lvert a_n-z\rvert$. As $a_n\to1$, this is bounded by a multiple of $\delta_n$ for each fixed $z\not=1$, so the dominated convergence condition is satisfied when $\sum_n\
25 down delta_n$ is finite. On the other hand, if $z=1$, then $\lvert a_n-z\rvert=\epsilon_n$, so the dominated convergence condition is satisfied everywhere whenever $\sum_n\delta_n/\epsilon_n$ is
vote finite. Next, $f_n(z)$ achieves its largest value on the unit ball at $q_n=a_n/\lvert a_n\rvert$, and its real part there is given by $$ \Re f_n(q_n)=\frac{\delta_n}{\sqrt{1+\epsilon_n^2}(\
sqrt{1+\epsilon_n^2}-1)}\ge\frac{2\delta_n}{\epsilon_n^2\sqrt{1+\epsilon_n^2}}. $$ As $f_m(z)$ has positive real part for all $m$, this bound also holds for $f(q_n)$, and we get that $f$ is
unbounded whenever $\delta_n/\epsilon_n^2\to\infty$. These conditions are satisfied by taking $\epsilon_n=n^{-3}$ and $\delta_n=n^{-5}$.
Alternatively, for an example closer to Sierpinski's, consider choosing a sequence $a_n\to1$ on the unit circle and positive reals $K_n$, and set $$ f_n(z)=K_n2^{-n}\sum_{k=0}^{2^n-1}a_n^{2^
n-1-k}z^k=2^{-n}K_n\frac{a_n^{2^n}-z^{2^n}}{a_n-z}. $$ The partial sums of the power series expansion of $f_n(z)$ are bounded by $2^{1-n}K_n/\lvert a_n-z\rvert$, so the dominated convergence
condition is satisfied for $z\not=1$ so long as $\sum_n2^{1-n}K_n$ is finite. Sierpinski chooses $a_n=(n^2-1+2ni)/(n^2+1)$ so that $a_n-1$ goes to zero at rate $1/n$. The dominated
convergence condition is therefore satisfied whenever $\sum_n2^{-n}K_nn$ is finite.
Now, $f_n(z)$ is maximized at $z=a_n$ where $\lvert f_n(a_n)\rvert=K_n$. So, $$ \lvert f(a_n)\rvert\ge K_n-\sum_{m\not=n}\frac{2^{1-m}K_m}{\lvert a_m-a_n\rvert}. $$ As $a_m-a_n$ is bounded
below by a multiple of $1/m^2$, the summation on the right is bounded whenever $\sum_m2^{-m}K_mm^2$ is finite, and $f(a_n)$ is unbounded if we also take $K_n$ going to infinity. Sierpinski
takes $K_n=n^2$ here. Finally, in Sierpinski's example, he multiplies $f_n$ by $z^{2^n}$. This changes nothing, except to separate out the non-zero terms of the power series of $f_n(z)$, so
that the power series of $f(z)$ can be written easily term by term.
Thanks, George! – Georges Elencwajg Dec 22 '12 at 9:49
add comment
Just to complete the previous answer, Sierpiński's example is mentioned (without details, though) in at least one book, namely An introduction to classical complex analysis by R. B. Burckel
up vote 6 (Vol. 1, Chap. 3, p. 81).
down vote
2 A power series that converges everywhere on its circle of convergence must necessarily define a discontinuous function. For if D is the closed disk of radius r whose boundary is that
circle, then D is compact and any continuous function defined on D must be bounded everywhere on D. But if the power series is bounded throughout the interior of D then r cannot be its
radius of convergence. So the question should really be "can such a power series exist?" and Sierpinski has given an example of one. – Garabed Gulbenkian Oct 23 '12 at 19:26
5 @Garabed Gulbenkian: Perhaps I am confused, but wouldn't $\sum x^n/n^2$ we a powerseries converging everywhere on its circle of convergence? – quid Oct 23 '12 at 20:24
3 @Garabed Gulbenkian : I'm not sure I understand. It is easy to give an example of a power series that converges everywhere on its circle of convergence, just take $\sum_n z^n/n^2$. Why
does the fact that the series is bounded in $D$ implies that its radius of convergence cannot be $r$? – Malik Younsi Oct 23 '12 at 20:30
2 @quid : Well if you are confused, then you are not the only one ;) – Malik Younsi Oct 23 '12 at 20:32
3 Probably, Gulbenkian thinks that the only reason a series has radius of convergence $r$ is that there must be a pole at this distance. Alas, other kinds of singularity (such as branch
points) exists... – Feldmann Denis Oct 23 '12 at 22:47
show 2 more comments
Not the answer you're looking for? Browse other questions tagged cv.complex-variables or ask your own question.
|
{"url":"https://mathoverflow.net/questions/110345/does-a-power-series-converging-everywhere-on-its-circle-of-convergence-define-a","timestamp":"2014-04-18T10:53:07Z","content_type":null,"content_length":"85825","record_id":"<urn:uuid:3cbe9fbc-a5de-46a4-835e-ce8a400fce57>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lie algebras, volume 7 of London Mathematical Society Monographs. New Series
, 2012
"... Abstract. Following Radford’s proof of Lagrange’s theorem for pointed Hopf algebras, ..."
"... Because they play a role in our understanding of the symmetric group algebra, Lie idempotents have received considerable attention. The Klyachko idempotent has attracted interest from
combinatorialists, partly because its definition involves the major index of permutations. For the symmetric group S ..."
Cited by 1 (0 self)
Add to MetaCart
Because they play a role in our understanding of the symmetric group algebra, Lie idempotents have received considerable attention. The Klyachko idempotent has attracted interest from
combinatorialists, partly because its definition involves the major index of permutations. For the symmetric group Sn, we look at the symmetric group algebra with coefficients from the field of
rational functions in n variables q1,...,qn. In this setting, we can define an n-parameter generalization of the Klyachko idempotent, and we show it is a Lie idempotent in the appropriate sense.
Somewhat surprisingly, our proof that it is a Lie element emerges from Stanley’s theory of P-partitions. The motivation for our work is centered around the search for Lie idempotents in the symmetric
group algebra. In fact, our goal is to give a generalization of the well-known Klyachko idempotent, and to show that important and interesting properties of the Klyachko idempotent carry over to the
extended setting. It turns out that the proof that our
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=9859638","timestamp":"2014-04-21T14:05:03Z","content_type":null,"content_length":"14257","record_id":"<urn:uuid:a21e97b3-6b40-4768-ad65-911c4927e26d>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
- User Profile for: boldua_@_racnet.com
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
User Profile: boldua_@_racnet.com
User Profile for: boldua_@_racnet.com
UserID: 32157
Name: Mike Bolduan
Registered: 12/6/04
Total Posts: 8
Show all user messages
|
{"url":"http://mathforum.org/kb/profile.jspa?userID=32157","timestamp":"2014-04-21T10:54:59Z","content_type":null,"content_length":"12444","record_id":"<urn:uuid:cde64f9d-15cc-4b8b-9301-60bf084af45c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Relative humidity and humidity ratio question.
Member Login
Come Join Us!
Are you an
Engineering professional?
Join Eng-Tips now!
• Talk With Other Members
• Be Notified Of Responses
To Your Posts
• Keyword Search
• One-Click Access To Your
Favorite Forums
• Automated Signatures
On Your Posts
• Best Of All, It's Free!
*Eng-Tips's functionality depends on members receiving e-mail. By joining you are opting in to receive e-mail.
Donate Today!
Posting Guidelines
Promoting, selling, recruiting, coursework and thesis posting is forbidden.
Link To This Forum!
Forum Search FAQs Links Jobs Whitepapers MVPs
jasno999 (Aerospace) 25 Jul 06 15:53
I am trying to set up a program on excel to look at air conditioning loads and what not. In doing this I would liek to use equations and data fro mcharts as much as possiable so that I do nto
need to reference the psychrometric chart - I want to keep the model I am building dynamic and easy to use.
So my quetion is this. I have a portion of mixed air that is entering my evaporator.
My split is 80% return air and 20% outdoor air.
My outdoor air is 103F and 100%RH
My return air is 80F and 48%RH
I know how to find the mixed air temperature:
Tma = (103 X .2)+(80 X .8)= 84.6F
I also know that I can use the humidity as measured in grains/lb to find humidity:
H(gr/lb)= (75gr/lb X .8) + (325gr/lb X .2) = 125gr/lb
This all makes sense cause if I draw a line on the psyh chart between the points and then use the 80% mark away from the outdoor point I get the point that corresponds with what I jsut found.
Is there any way to look at the humidity measured in grains/lb and use an equation and maybe a chart of values to convert it to a Relative Humidity Ratio measured in % ?
DUMechEng (Mechanical) 25 Jul 06 16:59
Try this....
T=Dry Bulb Temp (deg R)
H=Humidity Ratio (gr/lb)
P=Atmospheric Pressure (psi)
if you wanted to i guess you could substitute k and Psat into the top equation to get a %RH equation based only on T, H, and atmospheric pressure.
Yorkman (Mechanical) 25 Jul 06 19:18
The humidity ratio at saturation for a given Db temperature is proportional to the %Rh for that given Db temperature. For example at 70 degrees Db at saturation is .0158 Lbs/Lbs air when the
ratio is dropped to .008 for that same Db temperature
%Rh = .008 ÷.0158 = 50.6% Or are you looking for some thing a bit more involved.
I'm not a real engineer, but I play one on T.V.
A.J. Gest, York Int./JCI
MintJulep (Mechanical) 25 Jul 06 20:16
Formulas for moist air properties are well documented. ASHRAE fundamentals for starters, although Google will probably turn up both the forumulas and several user functions for excel.
jasno999 (Aerospace) 26 Jul 06 7:26
Here is what I did and the problem I needed to solve.
I was lookign at mixed air. I knew that I could use the equation:
Mixed Humidity (Grains/lb) = [Room Humidity(grains/lb) X % Recirculation air] + [Outdoor Humidity(grains/lb) X (1- % Recirculation air)]
That gave me my mixed air humidity in grains per pound.
I also could use the same equation to find my mixed air temperature.
If I knew those then what I needed to determine was my % Relative Humidity using the charts.
So what I figured out was that the humidity ration (lb/lb) was proportional to the grains per pound.
What I determined (and this may not be exact but it gets the job done) was that if you multiply the humidity ratio time 7000 then you get the grains/lb.
So for example at 80F and at saturation:
Humidity ratio = 0.0223400 (lb/lb)
0.0223400 X 7000 = 156.38 grains/lb
therfore if I have a mixed air condition at 80F and I calculate my humidity to be 112 grains/lb then I can say:
112/156.38 = 0.716 or 72%RH
IS IT OK TO DO IT THIS WAY?
Yorkman (Mechanical) 26 Jul 06 8:35
Yes that will work. Is the conversion to grains nessecary for your problem? The same result will be achieved if you stayed with the hummidity ratios throughout and applied the math the same
way. Just wondering
I'm not a real engineer, but I play one on T.V.
A.J. Gest, York Int./JCI
jasno999 (Aerospace) 26 Jul 06 10:31
I thought it woudl too but when I did some math it did not work out for me. It may be due to the fac that I ahve been pumping out the calculations allw week and my mind went numb.
But let me run thru a problem to prove thsi to myself.
Say I have:
80% recirculation system
Outdoor air Temp = 102F
Outdoor Air Humidity = 95%RH
Return air temp = 80F
Return air Humidity = 45%RH
Mixed temp = (102*.20)+(80*.80)= 84.4F
Mixed Humidity %RH = (95*.20)+(45*.80)= 55%RH
If I had gone to grains per pound
Gr.lb = gr/lb @ saturation X humidity ratio
Outdoor air Grains per pound = 303.597 X .95 = 288.417gr/lb
Return air Grains per pound = 156.38 X .45 = 70.371rg/lb
Using same formula- (288.417*.20)+ (70.371*.80)= 113.98gr/lb
Looking at the chart or using table data this tells me that at 84.4F and 113.98gr/lb the humidity ration will be 62.8%RH
So after doign this math I have come to determine that:
1) I am doign the calculations wrong somehow
2) YOu can't use %RH in that calculation to determine the mixed air humidity. One ways says 55% and the other way says 62% - Coem to think of it you can't use %s because they are just that-
they do nto tell you the full story about the total amount of humidity actually in the air - they only give the %.
Let me knwo if I am incorrect here.
DUMechEng (Mechanical) 26 Jul 06 12:53
yes, that method will work. you just need to setup a lookup tables for saturation humidity ratio instead of using the formulas i gave you above.
you are correct, you cannot use %RH in the mixed air equation, however what yorkman is saying is that you dont need the gr/lb to lb/lb conversion. it just ads an extra step.
the way you did your example in your last post responding to yorkman is exactly correct. you didnt convert back and forth between gr/lb and lb/lb. you were doing the conversion correct
however, 7000 grains = 1 pound.
also, i think you have the theory correct but you mixed up the terms humidity ratio and relative humidity in some instances. humidity ratio is the gr/lb or lb/lb number that you are talking
about (both the same number just different units). its a measure of mass of moisture per mass of dry air. relative humidity is technically the ratio of partial pressure of water vapor to the
partial pressure of water vapor at saturation. however, assuming an ideal gas you can use the ratio of humidity ratio to humidity ratio at saturation.
jasno999 (Aerospace) 27 Jul 06 10:07
Jabba007 (Mechanical) 27 Jul 06 10:12
Wow... where is it 103F and 100% RH?
Not thta I'm doubting that, but I just want to know where NOT to ever go!
jasno999 (Aerospace) 10 Aug 06 14:09
YOrkman you said:
Yes, there is a set of tables in the ASHRAE Fundamentals Handbook, in the Psychrometrics chapter 6 they are called the Thermodynamic Properties of Moist Air Tables. Then with the use of
equation number 32 from that chapter and perhaps a little reading, this may make a little more sense. h = .240t + W(1061 + .444t)
Where: h = specific enthalpy BTU/Lbs of the moist air
t = the dry bulb temperature of the indegrees F
W = the humdity ratio of the moist air Lbs
Lbs water/lbs dry air
Using the table locate the W value and multiply it by the decimal %Rh to get the humidity ratio of the air at that condtion then enter that value into the equation. Solving for h will give you
a very close representation of what the chart is providing.
Now after lookign at that in more detail I have realized that the equation only works if you use the charto find the W value. YOu had told me previously to use %RH as the W value and it woudl
get me close. It does get your close but it is not exact and can really scew the numbers on you. I was hoping to use the table data to build a program that would calculate my enthalpys and my
gr/lb for different temperature and humidity conditions but it appears that this can nto be done. Back
YOu have to use the chart or deal with numbers that are close but not exact. Forum
KiwiMace (Mechanical) 10 Aug 06 17:16
The ideal gas law doesn't save you from the difference between the ratios of partial pressure (pw/pws) and the ratios of absolute humidity (W/Ws). To paraphrase ASHRAE fundamentals Chap 6:
Both start at zero and finish at 1 for completely dry and completely saturated, but the values in the middle differ and significantly at warmer temperatures.
I had to deal with this issue in a computer modelling exercise some years ago. At 20degC or 68degF, the error is about 0.6% at 50%. I was dealing with temps around freezing so i neglected it
in the end.
If you know temp and RH, the process is as follows:
1. pws = f(T) using hyland and wexler empirical eqn#6.
2. pw = pws * RH
3. W can be determined from pw (I only know the metric eqn)
4. Same eqn for Ws from pws (eqns#22,23 in chap 6)
5. degree of saturation = W/Ws
This can be reversed, iterating the pw from W.
ASHRAE have a new book specifically on psychrometrics which is a bit easier to follow.
jasno999 (Aerospace) 10 Aug 06 18:58
I have no idea what you are saying.
Yorkman (Mechanical) 10 Aug 06 19:34
Jasno, re-read the post I said;
Using the table locate the W value and multiply it by the decimal %Rh to get the humidity ratio of the air at that condtion then enter that value into the equation. Solving for h will give you
a very close representation of what the chart is providing. I did not say this YOu had told me previously to use %RH as the W value and it woudl get me close. It does get your close but it is
not exact and can really scew the numbers on you. I was hoping to use the table data to build a program that would calculate my enthalpys and my gr/lb for different temperature and humidity
conditions but it appears that this can nto be done.
Further more I said it was a close representaion of what the (psychrometric) chart is providing. I have know idea what degree of accuracy you where shooting for but the method and formula does
reflect what the sea level psychrometric chart shows and that was all I promised. I've tried several numbers in the formula and compaired them and see very little if any variation them.
So maybe you need to recheck your method?
I'm not a real engineer, but I play one on T.V.
A.J. Gest, York Int./JCI
jasno999 (Aerospace) 10 Aug 06 20:02
YOrkman. I am not trying to be a jerk I am just tryign to understand this. I appreciate all of your help but that is exactly what you said. I copied and pasted that from a post you had on the
other topic were were discussing 3 weeks ago. Here I will copy your post again this is exactly what you said:
Yes, there is a set of tables in the ASHRAE Fundamentals Handbook, in the Psychrometrics chapter 6 they are called the Thermodynamic Properties of Moist Air Tables. Then with the use of
equation number 32 from that chapter and perhaps a little reading, this may make a little more sense. h = .240t + W(1061 + .444t)
Where: h = specific enthalpy BTU/Lbs of the moist air
t = the dry bulb temperature of the indegrees F
W = the humdity ratio of the moist air Lbs
Lbs water/lbs dry air
Using the table locate the W value and multiply it by the decimal %Rh to get the humidity ratio of the air at that condtion then enter that value into the equation. Solving for h will give you
a very close representation of what the chart is providing.
I'm not a real engineer, but I play one on T.V.
A.J. Gest, York Int./JCI
Now can you hlep me in understanding this again. When I look at say a temperature of 112F and 40%RH my enthalpy values and my grains per lb are close but not exact when using those equations.
If I look at the PSY chart it shows the numbers different form what the equations say.
insult2injury (Mechanical) 10 Aug 06 20:59
h = h_air + w*h_water.vapor
so approximately: h = Cp_air*T_air + w*h_sat.water
h_sat.water can be found through an equation of state or interpolation table for best accuracy. Across a small range of temperature, the vapor curve is fairly linear, so:
h_sat.water = 1061.5 + 0.435*T for T in degrees F and h_sat,water in BTU/lbm
This produces minimal error (+/- 0.2 abs) from about 15 to 120 degrees F. I hope that you are not exceeding 120 degrees.
Yorkman (Mechanical) 10 Aug 06 21:55
It is my turn to apologize for getting short with you, and harder still to admit that through my ignorance I have mislead you unintentionally. The few temperatures and humidity values that I
tried using in the formula that I provided did work reasonably well. Unfortunatly the ranges that you are asking for increased the degree of error.
What you need to do is first look at the
post, he/she is correct and it sounds like she/he has done a similiar calculation with success. It's probably a lot more involved than you'd like.
It looks to me like you will need to work with the Hyland and Wexler empirical equations, specifically equation 6 for temperatures between 32°F and 392°F. And then use equations 22 and 23 to
find the actual humidity ratio for the desired temperature based on the partial pressure of dry air and partial pressure of water vapor, (a little bit of Daltons Law of partial pressures).
Then armed with the correct W (humidity ratio) I think you can use the original formula that I gave you. All of this comes from The ASHRAE Fundamentals Handbook Chapter 6
I would like to guide you further but I have exceeded my comfort zone by about 20°. I truely am sorry for any difficulty I've caused you.
I'm not a real engineer, but I play one on T.V.
A.J. Gest, York Int./JCI
Yorkman (Mechanical) 10 Aug 06 22:22
You posted while I was typing my retraction (man I hate that)! I had offered that same equation not realizing, the error that occured as temperatures and %Rh became higher. Thanks to CinciMace
for pointing me in the right direction. I'am going to have to dust of the text book and explore this topic a bit further.
I'm not a real engineer, but I play one on T.V.
A.J. Gest, York Int./JCI
insult2injury (Mechanical) 10 Aug 06 22:39
If that's the equation he was using, he should only be seeing errors of a fractions of a percent, which is more accurate than most psychrometric evaluations.
Yorkman (Mechanical) 10 Aug 06 23:09
This is the formula from The ASHRAE Fundamentals Handbook, like I said there is some errors when I used higher temps in the equation. As you approach 100°F + the enthalpy values go a bit
astray,like you said. At temperatures in the "comfort zone" the accuracy is pretty close.
h = .240t + W(1061 + .444t)
h = specific enthalpy BTU/Lbs of the moist air
t = dry bulb temperature of the indegrees F
W = humdity ratio of the moist air Lbs water/lbs dry air
I'm not a real engineer, but I play one on T.V.
A.J. Gest, York Int./JCI
insult2injury (Mechanical) 10 Aug 06 23:52
Okay, so I just curve-fit the saturation curve up to 200F
[h in BTU/lbm, T in degrees F]
A linear fit with a maximum deviation of 0.073%:
h = 1062.9855 + 0.42175562*T
Better equation fit with a maximum deviation of 0.04%:
h = (1127984.6 + 937.80641*T)^(0.5)
Best equation fit with maximum deviation of 0.02%:
h = 1059.8474 + 0.63214839*T^(0.92927487)
Note that the coefficients of the linear equation with increased range vary only slightly from my previous post. In my opinion, you shouldn't really see a noticeable difference by using any of
the 4 correlations I posted to calculate the saturation enthalpy.
Yorkman (Mechanical) 10 Aug 06 23:57
When you say I just curve-fit the saturation curve up to 200F. How did you do tha? Is this done on a spread sheet? Or did you manually crunch the numbers? Just curious and impressed.
I'm not a real engineer, but I play one on T.V.
A.J. Gest, York Int./JCI
insult2injury (Mechanical) 11 Aug 06 0:06
Generally done by a least-squares method which can be done by hand but is most efficiently done by software. I've had to program the method into several software applications such as fitting
fan curves, etc... All you really need to know is the form of the equation you want to map the data to. A spreadsheet could probably be made to do it. I used a software package called
TableCurve2D to find those. It tries throusands of equation forms simultaneously and then ranks them based on R^2 values. There is a 3D version as well.
Yorkman (Mechanical) 11 Aug 06 1:20
Well thank you very much for the reveal, and the equation(s). I'll keep those handy when I'm teaching that psychrometrics class to my 5th year a/c apprentices this fall. It will give me some
wiggle room when the topic of enthalpy and what to do when the charts range runs out;
I'm not a real engineer, but I play one on T.V.
A.J. Gest, York Int./JCI
KiwiMace (Mechanical) 14 Aug 06 13:12
Following through after a few days away from the machine, I see that perhaps my response was a bit too precise for your needs, but I guess still nice to know where the numbers come from.
I have tried curve-fitting the saturation curve before and found that it only works for short segments with polynomial fits - I wouldn't rely too heavily on these for extrapolation. The
appropriate equations are fairly well documented anyway.
Excel will fit curves to data with the Trendline function.
|
{"url":"http://www.eng-tips.com/viewthread.cfm?qid=160848","timestamp":"2014-04-20T08:21:58Z","content_type":null,"content_length":"57802","record_id":"<urn:uuid:1ce87fb1-bdbb-4fd5-b0fe-0c85767f83dc>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Partial linearization near a hyperbolic fixed point--Classical scattering
up vote 1 down vote favorite
I am currently reading the famous article "Universal Properties of Maps on an Interval" by Collet, Eckmann and Lanford related to the Feigenbaum-Coullet-Tresser universality. I am in particular
interested in Theorem 6.3 page 236 in that article. See the article for the precise statement, but very roughly the theorem says:
Consider a transformation $T$ on an infinite dimensional space which has a hyperbolic fixed point with one unstable direction with eigenvalue $\delta$ and a codimension one stable manifold. Then
there is a change of coordinates to a new system (x,y) where:
• the stable manifold is given by $y=0$
• the unstable manifold is given by $x=0$
• the transformation T takes the form $$ (x,y)\longmapsto (M(x,y), \delta \ y) $$ in this new coordinate system.
In other words, this realizes a linearization in the unstable direction only.
I would like to know about similar/related theorems, follow-ups, improvements, etc. that exist in the literature.
Using keyword searches etc. has been quite disappointing and I can definitely use the help of people with expertise in the area. For instance, I did not know the above paper contained such a theorem
until a chance discussion with one of the authors.
Edit with some context:
The method used in the CEL article is as follows. They first do some prep work in order to have a coordinate system $(x,y)$ satisfying the first two properties, i.e., such that the stable and
unstable manifolds are straight. Then they construct the partial conjugation as $$ z(x,y)=\lim_{n\rightarrow \infty} \delta^{-n} y_n(x,y) $$ where $y_n$ denotes the $y$ coordinate of the $n$-th
iterate of the point $(x,y)$ by a suitable cut-off modifcation of $T$.
This is very similar to the construction of wave operators in scattering theory.
The reason I am interested in this is because in recent joint work (see this paper) we proved the following:
Assume $T$ is analytic and has a hyperbolic fixed point $v_{*}$ with only one expanding direction with eigenvalue $\delta$. Then $$ \Psi(v,w)=\lim_{n\rightarrow \infty} T^n(v+\delta^{-n}w) $$ exists
and is analytic (jointly in $w$ and the component of $v$ along the stable tangent space used in the analytic parametrization of the stable manifold). Here $v$ belongs to the stable manifold and $w$
is arbitrary but not too big. This function $\Psi$ is not a true linearization, not even a partial one such as the $z$ function of CEL but it shares some of that flavor. Namely, it satisfies the
1. $T\circ\Psi(v,w)=\Psi(v,\delta\ w)$.
2. $\Psi$ takes its values in the unstable manifold.
3. $\Psi(v,w)=\Psi(v_{*},L_{v}(w))$ where $L_v$ is a $v$-dependent linear map onto the unstable tangent space.
The $\Psi$ function can be seen as the $w$ directional derivative of the $z$ function on the stable manifold. It is a "true linearization" on the unstable manifold only. I would like to know if
similar results exist in the literature.
ds.dynamical-systems reference-request renormalization mp.mathematical-physics
This looks strikingly familiar to the Poincare Linearization theorem and Normal Forms in the theory of ODEs. – Peter Cudmore Mar 20 '13 at 0:14
@Peter: I would not say "strikingly". My question is quite unsurprisingly related to this kind of questions simply because it explores extensions of normal form theory in the direction of weaker
statements than e.g. full linearizations. – Abdelmalek Abdesselam Mar 20 '13 at 0:52
add comment
1 Answer
active oldest votes
From a complex analytic standpoint, an updated discussion is embedded in Lyubich's paper "Feigenbaum-Coullet-Tresser universality and Milnor's Hairiness Conjecture":
up vote 2 http://arxiv.org/abs/math/9903201
down vote
@Adam: I was looking at this paper yesterday but it is quite long and I have yet to see where the kind of theorem I am interested in is embedded. – Abdelmalek Abdesselam Mar 20 '13 at
1 I'd look at Section 6. For example, Theorem 6.3 gives a hyperbolic splitting of the tangent space. Subsequent results discuss stable and unstable manifolds. If you want an actual
reduction to normal form, this does not seem to be considered, and on reflection I am not aware that such a result has been formally claimed in a complex analytic setting. – Adam
Epstein Mar 20 '13 at 14:30
@Adam: Thanks I will look at Sec 6. – Abdelmalek Abdesselam Mar 20 '13 at 16:27
Even in a finite diimensional setting there would be the issue of possible resonances among the eigenvalues, and little is known about those numbers – Adam Epstein Mar 20 '13 at 18:57
add comment
Not the answer you're looking for? Browse other questions tagged ds.dynamical-systems reference-request renormalization mp.mathematical-physics or ask your own question.
|
{"url":"http://mathoverflow.net/questions/125020/partial-linearization-near-a-hyperbolic-fixed-point-classical-scattering","timestamp":"2014-04-19T02:50:23Z","content_type":null,"content_length":"61848","record_id":"<urn:uuid:d87c0fed-bc57-4d53-a784-c0246963c880>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Teachers from UK shocked by Chinese multiplication table | ChinaHush
Teachers from UK shocked by Chinese multiplication table
From Sina:
The teachers from UK asking questions
January 17th, An English delegation of more than 50 teachers and deans from top 25 middle schools and primary school in UK came to Ningbo city in Zhejiang Province to attend math classes for learning
and sharing.
They went to two schools: Ningbo Wanli International School and Ningbo Gaoxin Foreign Language School.
One UK headmasters said that Chinese kids’ are well known for their math abilities in the world. Teachers in UK always puzzled about why always the Chinese kids got the first prize in international
math competitions? They came all the way here to China, aiming to bring some insights back about the math teaching.
Until the afternoon yesterday, they had listened four math classes. The result is that, they were totally shocked by the multiplication table and Chinese kids’ math skills.
Standing all the time through listening
During the math classes yesterday, the chairs for the UK teachers were all left unseated. They walked into the students, checking their textbooks, notebooks, and took photos with their cellphones.
The Chinese kids did not let them down.
On student went to the stage and quickly wrote the correct answer of 24. This student said the answer can be quickly concluded through the use of multiplication table. The 12 teachers at the scene
were surprised by the method.
One English teacher said they don’t have such multiplication table in UK. If they want to solve the problem above, the process will be like this:
10×3=30,10×3=30,4×3=12,then add them up and get 24.
For this kind of problem, students in UK will have to learn through several lessons to solve them successfully.
But for kids, is it too hard to apply the Chinese way of education? One English teacher did not think so. He thought the standard of English education is too low.
Fail to learn the multiplication table, because of the pronunciation
After the math classes, UK teachers showed their interests in learning the multiplication table. But the dean of primary school, Zheng said that the multiplication table was a traditional method in
China which cannot be easily learned by English teachers because of the pronunciation.
The multiplication table has five Chinese characters maximum, and is very clear at a glance. But when translated into English, the sentences will be too long. For example, “九九八十一” is translated
as “nine nine eighty one”.
Although they failed to learn that, the experience was valuable. The teachers said that they were going to document this investigation into files and report them to the Department of Education of UK,
which they hoped would be helpful to the primary math education in UK.
Chinese kids are good at higher order multiplication and division only because they are forced to learn it by heart. And I do not agree that it is necessarily a good idea to learn it that way, I
believe forcing kids to learn stuff this way destroys creativity and limits actual conceptual intuition.
And the idea that it can not be learned by English speakers because of pronunciation is absurd. I agree that “九九八十一” is easier than “nine thousand nine hundred and eighty one” and that is why it
is never written that way, it is obviously always written as 9981 in multiplication tables…
• Agree.
To try to infer that English speakers can’t use their super secret table because our words are too long is ridiculous; we have number symbols too! 1, 2, 3, 4, 5, 6, 7, 8, 9 and 0.
• “the idea that it can not be learned by English speakers” or whiteys.
Imaging how hard this would be for blacks and French speaking people?
But Indians are good at miming english so they can learn “九九八十一” faster.
• 九九八十一 doesn’t mean 9981, it means 9×9=81. Because kids know they are learning multiplication tables they don’t need bother to list/read/state out the “times” and “equals”. Equivalent in
English would be something like “nine nines eighty one.” I don’t see why this is so mysterious. It is just memorizing basic multiplication. The question is how high does it go? When I was in
elementary school in the US, we also had to memorize the multiplcation tables, but only up to 12×12. My daughter is in second grade in an international school in Beijing that also teaches the
local math curriuculum, and as far as I can tell they have only gone up to 10×10. And yes, she can recite the entire multiplication table in Chinese but again, it isn’t anything terrible advanced
or mysterious.
I am an american of a certain age. Old enough to remember that we learned multiplication tables in elementary schoolback in the 1960s. I did very well in arithmetic and can estimate discounted cash
flows in my head to a fair degree of accuracy.
I have a PhD in Engineering and that skill amazes the youngsters.
English students are taught to question, challenge and think for themselves — not learn things by repetition.
• That’s why in the west most students get a passing mark for doing nothing but dreaming in class.
And many times they don’t need to show up for classes to get a passing grade.
□ this comment is based on your deep knowledge of every other country’s education system gained from extensive practical experience?
• That is a myth. I am teaching college classes in the US. Today’s American students are just horrible in everything. We cannot find the so called creativity there.
So they have to go to China to find out that they don’t learn the multiplication tables ? They could simply go to france instead, as we learn them in primary school too. And it doesnt make us better
in math.
Lets face it; Westerners are good in English and Chinese are good in math.
In the UK, these days, students are taught something called ‘chunking’, that apparently raises numeracy rates as it is makes a variety of mathematical problems easier to tackle. It is of limited use
at higher levels of mathematics, but it suits primary needs well enough. So the UK has moved away from the rote learning of multiplication tables that I learnt as a kid. I remember being something
like six or seven years old, endlessly reciting my tables, week after week. The format was very much like the Chinese; we’d say “once two is two, two twos are four, three twos are six…” and so on. We
didn’t bother with ‘ones’ for obvious reasons, and we went up to twelves (so, “twelve twelves are a hundred and forty four”, being the highest we’d go). It still serves me well, today.
I have seen British kids, today, plough through long multiplication and long division in seconds using the ‘chunking’ method (and I can’t find any good argument against that, as it seems to be fit
for purpose), but when I see them fail at what ought to be simple mental arithmatic I can’t help but worry about their grasp of the fundamentals.
Seperately, it seems fairly common in China to dismiss foreigners’ interest in Chinese ways by simply saying that it is too difficult for foreigners. Well, I’m sorry, but if it’s easy enough for a
primary school student in China, then guess what? it’s not too difficult for me. Whenever I came up against this in the years that I spent living there, I found, ultimately, that what the teachers
really meant was that they found it difficult to explain to foreigners. They teach children in a very direct style – it is written on the board and students must recite it, copy it and remember it –
they seldom have to explain it. Get over that barrier and you can actually have some very productive sharing and learning going on.
I learned the Chinese method for multiplication tables and I have to say, it is darned useful. For some reason, it is easy to memorize and it has stuck with me all this time.
I also want to learn the Japanese method for multiplying numbers where you cross numbers on a line to get the answer.
Honestly, don’t get why UK teachers are shocked. It’s been taught for so long. But I don’t think this style would translate well to english
Hmm. Arithmetic is not the same as mathematics.
so the English education system sucks? I have never heard of anyone intentionally teaching a student
“10×3=30,10×3=30,4×3=12,then add them up and get 24.”
• They are teaching that to my kids right now (elementary), when I saw that I was like what the heck is this mess. What happened to knowing something? I’m all for breaking something down for
smaller kids but what happens when they just need to know it? There will be times they won’t be able to over think it. It’s a bit scary. I’m teaching them myself the basic memorization, because I
think it’s needed. I just don’t get the way America seems to think our kids are too dumb to do any better than they are. No body pushes for the best anymore. And if we don’t encourage them to
give their best, they’ll never try. Just my opinion.
Foreigners just don’t get math. Accept it and grow.
The ability to memorize the multiplication table does not make someone good at math. This is only part of being good at mathematics. Another component of mathematics is the ability to reason and to
solve properly using logic and analysis.
However, memorizing the table well can allow someone to solve problems quicker as all the calculations can be done swiftly.
Concluding that someone’s mathematics skills is advanced based mainly on the fact that they can memorize tables is like concluding that someone is intelligent based on their ability to speak.
• may the force be with you.
□ thats not right
Bring some languages or humanities teachers from UK to a Chinese school and see them shocked for totally different reasons…
• You mean the history of slavery, colonization, and imperialism?
US is weak in these areas, their students are coddled and hate challenges.
• 70% of students in the US aren’t testing up to standards. It’s not that they’re coddled – it’s also because the US invests in their childrens’ education. It’s too easy to be a teacher in the
States and many people had teaching as a backup career – they care jack squat about teaching. And a lot of the kids are from low income areas.
I’m studying to be a teacher and I HATE when it is the kids, the victims, who are blamed by the adults
□ The reason education standards are abysmally low in the U.S. is due to cultural Marxism, as originally designed by the Frankfurt school. We don’t want the non-white kids to feel stupid, since
that would “raysuss,” so the solution is to make idiots out of everyone.
Add *what* up and get 24? Does this example even make sense?
• 3(10 + 10 + 4) = 3(24) = 72
I still remember learning this when I was in elementary school in China, it has stuck with me for all these years. It is a form of rote learning, in which the Chinese kids are forced to do lots of.
Recently, I went back to China and was astonished to see how much my little cousin is required to memorize for his studies. A lot of it is useless stuff especially in the internet age where one could
simply look it up in a search engine. Regardless, I find it encouraging to see westerners come to China for exchanges in learning opportunities whether it is in something the Chinese are
stereotypically good at like math is besides the point. I know the university I graduated from also first went to China for math students, it helps open up the doors for the Chinese students in other
when i was young my parents made me recite it but now as i am older i realise it is more of a rhyme when reciting the multiplication table using the chinese technique.
i sometimes forget but when i go through the multiplication table slowly i get it right because the rhyme sounds right.
Wow! Chinese kids are smart.
Americans are good at the Olympics where they strive to be the best even if that includes steroids and taking drugs. But when it comes to academics, they easily give up.
Chinese people are good at math, yes, and they are equally good at walking by dying people on the street and doing nothing to help. Math is math, humanity is something altogether different.
• fuck you
• Well, of course they don’t help. If they do, chances are the dying person will magically come to life and blame them for having tried to kill them, then sue them for all they’ve got.
• You have hundreds of years of dirty history and you are saying ‘humanity”?
Everyone that protects Americans: tell me how your country is awesome. Your soldiers kill for oil, your government is corrupt to the core (to the point you’re almost living in communism, without
realizing), your education system is awful. You’re mere puppets. You think U.S.A. ‘s education system allows students to be creative? It’s just another thing that makes them uneducated. In school,
they don’t learn anything. At home, they watch T.V. , spend time on the Internet and eat. Please, tell me more how that makes them free and creative.
You’re nothing more than sheep.
• That’s funny, a Chinese robot calling us “sheep.” Meanwhile, in your country, you sell your own children, kill them, treat them like trash, don’t give a flying fuck about anyone but yourselves,
you are the most selfish, self-centered, ignorant, unfeeling bunch of numb automatons one has ever seen. And for all the talk of “math,” you’re all stupid as shit, too. Go to any average
university, choose a student at random and ask him to find Russia on a map.
I have no idea where this bullshit stereotype comes from about Chinese being “good at math.” It’s probably because the few that do make it here are the truly soulless automatons who spent their
entire lives in the library and had their first girlfriend at twenty-five. The average Chinese university student is about as smart and capable of critical thought as a log of wood.
□ You have Obamacare. hahaha…
Can you even make healthcare.gov work after spending billions of dollars?
Give the DEM another 8 years after 2016, then no one can stop the rise of China. Liberals will take care of everything, including the 17T debts (maybe 25T in 2020).
What’s up, just wanted to tell you, I loved this post. It was practical. Keep on posting!
Thanks a lot for giving everyone an extraordinarily nice possiblity to read in detail from this site.
It’s usually so pleasing plus jam-packed with a great time for me and my office co-workers to visit your website on the least thrice every week to read the new stuff you have got. And indeed, I am
also certainly motivated concerning the superb concepts you give. Some 4 facts on this page are essentially the very best we have had.
there is a much easier and more analytical or playful way of solving such a question. children who really understand maths could also solve it by saying 75 : 3 is 25, so 72 : 3 must be one less which
is 24. This comes from experience. Rote learning is fine but there is no way to check if the answer is right if you just rely on memory rather than logic.
This reminds me…I totally think in English now, and have for years – but I still count in Chinese!
Both the West and the East have good and bad points when it comes to education. I think children needs play time and room to be creative, but some things, ‘building blocks’, needs to be memorized.
Alphabetic letters needs to be memorized, multiplication tables needs to be memorized – only when you have these in your head can do something more complicated with them.
I never feel that learning multiplications need skill to remember it. I often saw various methods from different part of the world. Methods methods methods, why the hell, we need it? It is
photographic mind that help it during my childhood years. Having various methods make our kids look stupid.
In case your lawn is tiny it is possible to go for an electric lawn mower. Together with the front …» more
Excuse the typo, I corrected it (see below). » more
If (as expected) England fail to deliver the goods in Brazil we can't threaten to send them down a coal …» more
If (as expected) England fail to deliver the goods in Brazil we can threaten to send them down a coal …» more
Awesome post. » more
|
{"url":"http://www.chinahush.com/2013/01/23/teachers-from-uk-shocked-by-chinese-multiplication-table/","timestamp":"2014-04-19T22:55:15Z","content_type":null,"content_length":"104848","record_id":"<urn:uuid:606589ba-f4ac-4143-b2f8-64bbfc78c904>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Discrete Mathematics/Finite state automata
From Wikibooks, open books for an open world
Formally, a Deterministc Finite Automaton is a 5-tuple $D = (Q, \Sigma, \delta, s, F)$ where:
Q is the set of all states.
$\Sigma$ is the alphabet being considered.
$\delta$ is the set of transitions, including exactly one transition per element of the alphabet per state.
$s$ is the single starting state.
F is the set of all accepting states.
Similarly, the formal definition of a Nondeterministic Finite Automaton is a 5-tuple $N = (Q, \Sigma, \delta, s, F)$ where:
Q is the set of all states.
$\Sigma$ is the alphabet being considered.
$\delta$ is the set of transitions, with epsilon transitions and any number of transitions for any particular input on every state.
$s$ is the single starting state.
F is the set of all accepting states.
Note that for both a NFA and a DFA, $s$ is not a set of states. Rather, it is a single state, as neither can begin at more than one state. However, a NFA can achieve similar function by adding a new
starting state and epsilon-transitioning to all desired starting states.
The difference between a DFA and an NFA being the delta-transitions are allowed to contain epsilon-jumps(transitions on no input), unions of transitions on the same input, and no transition for any
elements in the alphabet.
For any NFA $N$, there exists a DFA $D$ such that $L(N) = L(D)$
|
{"url":"http://en.wikibooks.org/wiki/Discrete_Mathematics/Finite_state_automata","timestamp":"2014-04-18T00:27:21Z","content_type":null,"content_length":"26535","record_id":"<urn:uuid:eca55217-e58d-44d7-b41a-2bad884a483e>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Archives of the Caml mailing list > Message from cousinea@d...
Date: -- (:)
From: cousinea@d...
Subject: Re: new library modules
As I said, I have also implemented balanced trees and used them
to manipulate what Xavier calls sets and maps .
It is certainly less efficient that Xavier's implementation
since it is a purely functional one
but I have nonetheless used them quite extensively, building
balanced trees of size over 2^20.
I am not sure my choices have been the right ones
but here is a short description.
My main functions over balanced trees are:
empty: 'a t
add: ('a*'a -> comparison) -> option -> 'a -> 'a t -> 'a t
exists: ('a -> comparison) -> 'a t -> bool
get: ('a -> comparison) -> 'a t -> 'a
get_all: ('a -> comparison) -> 'a t -> 'a list
remove: ('a -> comparison) -> 'a t -> 'a t
remove_all: ('a -> comparison) -> 'a t -> 'a t
Values of type ('a*'a -> comparison) are supposed to be
total preorders and it is user's responsibality to use
them consistently.
type option = Take_Old | Take_New | Take_Both
is a type that specifies what should be done
when adding a value to a tree that contains an equivalent one.
Note that functions "get" and "exists" use an argument
of type ('a -> comparison)
which is similar to giving both an argument "ord" of type
('a*'a -> comparison) and an argument "x" of type 'a as Xavier
does for function "mem". This choice takes into account
the fact that, when using a preorder and not an order,
it is sometimes useful to search for objects that have
a more specific property that just identity or equivalence
to a given object.
This goes against Damien's suggestion to encapsulate the preorder
in the type which I approve for sets and maps (by the way,
the proposal by Laufer and Odersky should give a solution
for that).
Xavier's maps are easily obtained from my balanced trees:
type ('a,'b) map == ('a,'b) t;;
let add_map ord m x y= add ord Take_New (x,y) m;;
let find_map ord m x = snd (get (fun y -> ord x (fst y)) m);;
Note that here, the order on type 'a induces a preorder on type
('a * 'b) which is used for the search. This is another argument
for the importance of preorders.
|
{"url":"http://caml.inria.fr/pub/ml-archives/caml-list/1993/05/b1bd2edf7d2987545f778fd406dac03a.en.html","timestamp":"2014-04-16T19:01:38Z","content_type":null,"content_length":"6998","record_id":"<urn:uuid:cc98a37a-e209-4386-a1a9-23ad73899448>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: April 2003 [00627]
[Date Index] [Thread Index] [Author Index]
Re: Re: Re: Finding derivatives of a list?
• To: mathgroup at smc.vnet.net
• Subject: [mg40919] Re: [mg40888] Re: [mg40854] Re: [mg40816] Finding derivatives of a list?
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Thu, 24 Apr 2003 05:25:51 -0400 (EDT)
• References: <200304230517.BAA05828@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Selwyn Hollis wrote:
> I feel compelled to point out that using interpolation for this purpose
> is, in general, a Bad Idea.
> Please have a look at the following, which uses an example attributed
> to Runge. The Do loop plots the difference between f''[x] and the
> second derivative of the InterpolatingFunction based on 21 nodes -1,
> -.9, ..., .9, 1, for InterpolationOrders 3 through 20. You'll notice
> that the difference is quite large in all cases.
> runge[x_] := 1/(1 + 20 x^2);
> Plot[runge[x], {x, -1, 1}, Frame->True, PlotLabel->"Runge's example"];
> pts = Table[{x, runge[x]}, {x, -1., 1, 0.1}];
> Do[ interp = Interpolation[pts, InterpolationOrder -> n];
> Plot[Evaluate[{runge''[x] - interp''[x]}], {x, -1, 1},
> PlotRange -> {-10, 10},
> PlotLabel -> "InterpolationOrder --> " <> ToString[n]],
> {n, 3, 20}]
> -----
> Selwyn Hollis
> http://www.math.armstrong.edu/faculty/hollis
> On Tuesday, April 22, 2003, at 06:44 AM, Daniel Lichtblau wrote:
> > AES/newspost wrote:
> >>
> >> Specific problem is how to generate a list of values of the second
> >> derivative of a relatively smooth function at a set of equally spaced
> >> points, when the function itself is known only as a list of numerical
> >> values at those same points?
> >>
> >> --
> >> "Power tends to corrupt. Absolute power corrupts absolutely."
> >> Lord Acton (1834-1902)
> >> "Dependence on advertising tends to corrupt. Total dependence on
> >> advertising corrupts totally." (today's equivalent)
> >
> > Here are some possibilities.
> >
> > (i) Form an interpolation of relatively high order (say 6 or so). Take
> > second derivatives.
> >
> > (ii) Use finite differences to approximate the second derivatives.
> >
> > (iii) Use Fourier to get the approximated derivatives. See for example
> >
> > Wang, Jing. B (2002). Numerical differentiation using Fourier. The
> > Mathematica Journal 8:3. 383-388.
> >
> > I believe there was a small error in the code provided; you might want
> > to contact the author at wang at physics.uwa.edu.au
> >
> >
> > Daniel Lichtblau
> > Wolfram Research
It depends on how it is used, on the underlying assumptions of
smoothness of the data set, and on relative merits of alternative
approaches. Your example will serve to illustrate.
First let me say why I want to use "relatively high order" for the
interpolation (I'm sure you know this, but others may not). For low
order one simply does not get smooth second derivatives in the
interpolation. An order of five or six should suffice to get this much.
But now we run into another issue, as seen in your example. One must
have alot more points in order for the interpolation to be a useful
approximation at most points. For example, with interpolation order of
15 the function and its approximation do not agree at all well, and, not
surprisingly, the second derivatives disagree all the more. I think a
general rule of thumb is to use an order no higher than Sqrt[n] for
interpolating n points. Actually this is for polynomial interpolations
(to keep "wiggles" down), but I think the rule is also often reasonable
for piecewise interpolating functions such as those provided by
Hence to get an order of 6, I'd generally want twice as many points as
you have. But that's not the real issue for your example (or so I
believe). What matters more is the amount of variation of that second
derivative relative to the point spacing. It's quite large. Hence in
some sense that point spacing violates the "smoothness" assumption. To
see what I mean, look at
Plot[runge'[x], {x, -.9, .9}]
near the origin. A spacing of 1/10 simply cannot capture this variation.
I'll illustrate by finding those derivatives using an alternative
approach of finite differencing. Before I take this further I should
point out that the comparison is mildly unfair insofar as I used a
reasonably high order interpolation, but only the most basic discrete
difference approximation. So you may want to try more careful discrete
approximations to see if they yield significantly better results than
those below.
dx = .1;
derivapprox = ListConvolve[{1,-2,1},pts[[All,2]]]/dx^2
innerpts = Take[pts[[All,1]],{2,-2}];
innerptderivs = Map[runge''[#]&, innerpts]
We now have the second derivatives and their approximations evaluated at
the interior points -.9,-.8,...,.9. If we plot them together we see they
do not agree well near the middle.
Perhaps a better way is to make a plot akin to yours, where we first do
an interpolation (of default order) of the discretely approximated
second derivatives.
derivinterp = Interpolation[Transpose[{innerpts,derivapprox}]]
Plot[Evaluate[runge''[x] - derivinterp[x]], {x, -.9, .9}]
You will see that this also give significant variation, just as did the
second derivative of the sixth order interpolation.
Another point to make is that one really should check relative rather
than absolute error of these second derivatives. For that you might
{x, -1, 1}, PlotRange -> {-10, 10}]
{x, -.9, .9}, PlotRange -> {-10, 10}]
These seem to give fairly similar pictures.
If you cut the spacing in half then both methods will give substantially
better results. Below is the code I used, in its entirety (I hope). My
guess, which I have not tried to verify, is that we really only need the
closer spacing near the center where the second derivative of the
underlying function is varying most rapidly.
dx = .05;
pts = Table[{x,runge[x]}, {x,-1.,1,dx}];
interp[6] = Interpolation[pts, InterpolationOrder->6];
Plot[Evaluate[runge[x] - interp[6][x]], {x, -1, 1}]
Plot[runge''[x]-interp[6]''[x], {x,-1,1}]
derivapprox = ListConvolve[{1,-2,1}, pts[[All,2]]] / dx^2;
innerpts = Take[pts[[All,1]], {2,-2}];
innerptderivs = Map[runge''[#]&, innerpts];
MultipleListPlot[innerptderivs, derivapprox]
derivinterp = Interpolation[Transpose[{innerpts,derivapprox}]];
Plot[Evaluate[runge''[x] - derivinterp[x]], {x,-1+dx,1-dx}];
{x,-1,1}, PlotRange -> {-10,10}];
{x,-1+dx,1-dx}, PlotRange -> {-10,10}];
I think in this case the prior interpolation followed by differentiation
gave the better results. A last way to assess the error is by
For the differentiated interpolation function I get around .011.
For the finite difference approximation function I get around .054. So
in this example the differentiated interpolation seems to perform better
(again, subject to the caveat that I did not try the anything beyond the
simplest finite differencing).
In conclusion, as to what method to use for approximating second
derivatives from a table of data, I'd have to say that it depends
heavily on underlying assumptions about smoothness, variation relative
to spacing, etc. And this does not even touch upon the assumptions (or
coding) necessary in order to make good use of a Fourier-based approach.
Daniel Lichtblau
Wolfram Research
• References:
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2003/Apr/msg00627.html","timestamp":"2014-04-18T13:31:12Z","content_type":null,"content_length":"42494","record_id":"<urn:uuid:57ac53ea-b677-4044-88c0-ca860c83f55a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
- User Profile for: Buel_@_rookline.mec.edu
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
User Profile: Buel_@_rookline.mec.edu
User Profile for: Buel_@_rookline.mec.edu
UserID: 5
Name: Nancy B.
Registered: 12/3/04
Total Posts: 9
Show all user messages
|
{"url":"http://mathforum.org/kb/profile.jspa?userID=5","timestamp":"2014-04-20T08:54:12Z","content_type":null,"content_length":"14471","record_id":"<urn:uuid:8f040a0f-3c27-40fc-b6b9-52e6b14778c8>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculus: Anti-derivatives and Area Under a Curve
The textbook we use (Briggs), and I think most of the textbooks I've used in the past (including Stewart and Thomas), introduce anti-derivatives before area under a curve. So they show students the
notation for indefinite integrals before showing them the notation for definite integrals. I think this is a BIG mistake.
Here's what happens...
∫ f(x) dx (aka indefinite integral) means find all functions F(x) so that F'(x)=f(x).
(Why does it use that funny symbol? Why does it have that dx part at the end? Hard to explain without referencing a connection that hasn't been made yet, isn't it?)
And then we start thinking about areas under curves and use a notation that's almost the same.
∫[1]^2 f(x) dx (aka definite integral) means find the area which is between f(x) and the x-axis (area below the axis counts as negative), and between x=1 and x=2.
Seeing almost identical notation and names, we're going to assume that these two act the same in some ways. Students are going to
anti-derivatives, even though it's area we're talking about here. So it's not much surprise when the Fundamental Theorem of Calculus tells us that to find area we can use anti-derivatives.
Wait! That
be a surprise. It's kind of amazing, isn't it? Derivatives give us slope. Why would going backwards in that process give us area?! Seems to me that's a big one we need to meditate on for a while.
This semester I knew I wanted to connect the new ideas with the position, velocity, and acceleration problems, so I introduced anti-derivatives first. And, I showed the indefinite integral symbol.
Oops! I shouldn't have. If I had held off, I believe the meaning of the definite integral would have taken hold better in my student's minds.
Until this semester, I've followed the textbook pretty closely, so my way around this problem has been to introduce the 'Area Function' without using this notation. I found this idea/project in
a book put out by the MAA
. I've revised it a lot over the years, but the original author, Charles Jones (of Grinnell College) still deserves credit for getting me started in this direction. (I wish I could figure out how to
thank him personally, but he doesn't seem to be at Grinnell College these days, and google gives me lots of people with that name.)
I've put a pdf of the project
. If you'd like my Word file, just email me (mathanthologyeditor on gmail).
We've started that project, and it's going well enough, but I realized that if I hadn't introduced the indefinite integral, we'd be better off. Next semester I'll get that right.
Tomorrow we wrap up the project, and I clarify the implications of the Fundamental Theorem. Cool stuff!
3 comments:
1. This makes so much sense. The area function is very concrete and a much better point of departure for introducing integration.
I also agree that the unfamiliar notation and terminology is disconcerting. It can help to "demystify" it if you give students some helpful ways to associate meaning with those terms so they
"own" them.
Your "area function" first approach lends itself very naturally to that, since you can talk about the integration symbol being just a fancy form of "S" for summation (and approximating areas
obviously involves summing up many smaller areas with widths of dx).
And the word "integrate" also has helpful etymological associations, since the word literally means "to render something whole" and you are literally putting many different puzzle pieces together
to compute the area of the whole when you integrate.
2. I like to start out by using F(x) as the symbol for the antiderivative of f(x), and the integral sign for the area. Then the fundamental theorem is really exciting and nonobvious! I certainly
remember in my first calculus class, seeing that theorem and thinking "So the integral of the function is the integral? What is this pointless thing?" and then the problems we were given for it
all involved d/dt of the integral from a to t of f(x) dx, and with all those letters flying around the whole thing didn't make any sense.
Your project looks fantastic -- I love the way you start, introducing the notation and the concept and estimation without any heavy machinery involved. Then moving on to linear functions -- maybe
a bit too big a leap? I might use y = 3 first, and then y = 2x, and then their sum, with the bonus of introducing another fact about integrals.
Then they get to the Fundamental Theorem in such a way that it seems almost natural, like they're discovering it for themselves!
I'm not sure how much they know about antiderivatives before this project -- I'd be curious to know a little more about where in your course this lands.
3. We can avoid using any symbol for anti=derivative at first, by offering students functions labeled f'(x), ansking asking them to find f(x). I think that's my preference. And doing this project
has made me notice that the definite integral is an odd new function, in that it needs two inputs. Our area function avoids this by keeping the left side constant, so I like the F(x) notation for
this, which clarifies that it's just another function.
I think you're right about starting off more gradually. Maybe I should make those modifications now, before I forget.
I had done a section like the textbooks do, on anti-derivatives, so they saw how polynomials work, and probably sine, cosine, and e^x. I think we had done simple chain rule problems too, like
find y when y'=sin(2x).
I'm reading a very interesting open source textbook right now by Crowell, and he makes the connection between rate of change and area clear in the first chapter of the book. He's starting out
with discrete change. It's an intriguingly different approach.
Comments with links unrelated to the topic at hand will not be accepted. (I'm moderating comments because some spammers made it past the word verification.)
|
{"url":"http://mathmamawrites.blogspot.com/2012/11/calculus-anti-derivatives-and-area.html","timestamp":"2014-04-16T04:11:59Z","content_type":null,"content_length":"111226","record_id":"<urn:uuid:eb01dd6f-d194-454d-b00b-f7d2576e2a4e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Adding fractions word problems
Adding fractions word problems arise in many situations. I will not show you how to add fractions here Adding fractions will show you how to add fractions and it has a fraction calculator to help you
Example #1:
John walked 1/2 of a mile yesterday and 3/4 of a mile today. How many miles has John walked?
This word problem requires addition of fractions
Choosing a common denominator of 4, we get
1/2 + 3/4 = 2/4 + 3/4 = 5/4
So, John walked a total of 5/4 miles
Example #2:
Mary is preparing a final exam. She study 3/2 hours on Friday, 6/4 hours on Saturday, and 2/3 hours on Sunday. How many hours she studied over the weekend
This word problem requires addition of fractions
Choosing a common denominator of 12, we get:
3/2 + 5/4 + 2/3 = 18/12 + 15/12 + 8/12 = 41/12 = 3.42 hours
So, Mary studied a total of 3.42 hours
Example #3:
A recipe requires 1/2 teaspoon cayenne pepper, 3/4 teaspoon black pepper, and 1/4 teaspoon red pepper. How much pepper does this recipe need?
Choosing 4 as a common denominator, we get:
1/4 + 3/4 + 1/2 = 4/4 + 3/4 + 2/4 = 9/4 = 2.25
So, the recipe needs 2.25 or 2 teaspoon and one-fourth of a teaspoon pepper.
Have A Great Basic Math Word Problem?
Share it here with a very detailed solution!
Enter Your Title
Enter Your Basic Math Word Problem!
[ ? ]
Upload 1-4 Pictures or Graphics (optional)
[ ? ]
Add a Picture/Graphic Caption (optional)
Click here to upload more images (optional)
Author Information (optional)
To receive credit as the author, enter your information below.
Your Name (first or full name)
Your Location (e.g., City, State, Country)
Submit Your Contribution
Check box to agree to these submission guidelines.
(You can preview and edit on the next page)
What Other Visitors Have Said
Click below to see contributions from other visitors to this page...
recipe and fractions
you are at a grocery store to buy ingredients for your cupcakes. the recipe requires 4 2/8 a cup of rising flour to make 28 cupcakes. your want to make …
Inheritance math problem Not rated yet
An inheritance of $1,400,000 is to be divided among Scott, Alice, and Tricia in the following manner Alice is to recieve 6/7 of what Scott recieves, while …
Judy's weight Not rated yet
Judy was trying to lose weight. she weighed 180 pounds. So she ran for 4/2 hours on sunday and lost 4/2 pounds. She was so happy with the results …
Click here to write your own.
|
{"url":"http://www.basic-mathematics.com/adding-fractions-word-problems.html","timestamp":"2014-04-18T23:15:03Z","content_type":null,"content_length":"43548","record_id":"<urn:uuid:6f19feb7-6141-40a4-bfde-f538b48831d6>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Golfing with a Single Photon
Phys. Rev. Lett. 87 , 050402 (2001)
Where quantum mysteries are concerned, Schrödinger’s cat has nothing on a single photon–at least you’d have some chance of finding the feline, whether dead or alive. In contrast, if you looked for a
photon in a small space, within a limited range of momentum, you’d seem to have a negative chance of finding it. This strange result shows up in measurements appearing in the 30 July print issue of
PRL and is rooted in Heisenberg’s uncertainty principle, which limits how precisely you can simultaneously measure an object’s position and momentum.
Light is both a particle and a wave. It’s not hard to define the terms “position” and “momentum” for light particles (photons), but these quantities must be defined in a more abstract sense for light
waves. Either definition can be represented in quantum phase space, which looks something like a hilly golf course. North-south coordinates mark the photon’s position, while east-west coordinates
give momentum. The height of the land at each point is much like the probability of finding a photon there. But in the quantum world, where a photon’s position and momentum cannot be determined
simultaneously, this “elevation” can only be understood as an approximation to probability. The new experiments by Alex Lvovsky and his colleagues at the University of Konstanz in Germany show that
the photon’s phase space contains a circular ridge, where the photon is likely to be found, and a deep crater in the center, where your chances of finding the photon seem to be negative.
Probability can’t be negative, but the photon’s quantum phase space can contain negative valleys because Heisenberg’s uncertainty principle won’t allow you to squeeze the photon into such a narrow
range. If you draw a stripe across the golf course–representing a narrow range of positions–and try to putt the photon onto it, the photon immediately smears out along the length of the line. So
instead of measuring the negative valleys directly, Lvovsky’s team did the equivalent of walking around the edges of the course and measuring average elevations along lines drawn in many different
To accomplish that feat, Lvovsky and his colleagues created pairs of photons sharing the same quantum state. They measured the wave-like behavior of one light beam and the particle-like behavior of
the other–essentially accessing the same photon state with two different beams. The researchers used the first beam as a compass: By measuring the (random) phase of the wave in the beam, they could
draw stripes across the golf course in different directions. To get the average height along each stripe orientation, they counted the number of times a photon was detected from the other beam
coincident with a given phase in the first beam. Lvovsky says this was the first experiment to measure both the wave and particle nature of a single photon simultaneously.
This mapping technique was developed by Michael Raymer at the University of Oregon. For Raymer, the experiment was impressive because it produced single photons in well defined wave packets. While
the experiment is “a real step forward” for his game of physics, says Raymer, it also suggests that if miniature golf ever goes quantum, we’re going to have a hard time finding the ball.
Rebecca Slayton is a freelance science writer in Cambridge, MA.
|
{"url":"http://physics.aps.org/story/v8/st7","timestamp":"2014-04-18T21:11:53Z","content_type":null,"content_length":"14317","record_id":"<urn:uuid:5d296680-0d93-42c2-ab25-79343bad1156>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is there a slick proof of the classification of finitely generated abelian groups?
up vote 36 down vote favorite
One the proofs that I've never felt very happy with is the classification of finitely generated abelian groups (which says an abelian group is basically uniquely the sum of cyclic groups of orders
$a_i$ where $a_i|a_{i+1}$ and a free abelian group).
The proof that I know, and am not entirely happy with goes as follows: your group is finitely presented, so take a surjective map from a free abelian group. The kernel is itself finitely generated
(this takes a little argument in and of itself; note that adding a new generator to a subgroup of free abelian group either increases dimension after tensoring with $\mathbb{Q}$ or descreases the
size of the torsion of the quotient), so our group is the cokernel of a map between finite rank free groups. Now, (and here's the part I dislike) look at the matrix for this map, and remember that it
has a Smith normal form. Thus, our group is the quotient of a free group by a diagonal matrix where the non-zero entries are $a_i$ as above.
I really do not think I should have to algorithmically reduce to Smith normal form or anything like that, but know of no proof that doesn't do that.
By the way, if you're tempted to say "classification of finitely generated modules for PIDs!" make sure you know a proof of that that doesn't use Smith normal form first.
ac.commutative-algebra gr.group-theory
3 While we're on the subject, why not wish for a nice proof of Jordan form? Of course, it is essentially the same question, but somehow the Jordan form annoys me more :) – t3suji Jan 16 '10 at 20:36
1 @t4suji: The Smith normal form implies the generalized jordan form, and this in turns implies over algebraically closed fields the jordan form. Alternatively, check out the
Jordan-Chevally-decomposition. Without coordinates, everything becomes clearer! – Martin Brandenburg Jan 16 '10 at 20:42
I suppose it is in this Smith normal form that you prove the crucial part that $a_i | a_{i +1}$. – Anweshi Jan 16 '10 at 20:56
Alternatively, there isn't a canonical classification of finitely-generated abelian groups, as there's also the prime power decomposition to consider. In a sense your choice of classification is
choosing the technique of proof. – Ryan Budney Jan 16 '10 at 21:58
1 My favorite proof first proves the primary decomposition for modules, then realizes the case of PIDs as emergent because all primes are principal and of height 0, therefore, they're all isolated,
and the primary decomposition is unique up to generating element. – Harry Gindi Jan 17 '10 at 19:40
show 1 more comment
7 Answers
active oldest votes
I reject the premise of the question. :-)
It is true, as Terry suggests, that there is a nice dynamical proof of the classification of finite abelian groups. If $A$ is finite, then for every prime $p$ has a stable kernel $A_p$
and a stable image $A_p^\perp$ in $A$, by definition the limits of the kernel and image of $p^n$ as $n \to \infty$. You can show that this yields a direct sum decomposition of $A$, and
you can use linear algebra to classify the dynamics of the action of $p$ on $A_p$. A similar argument appears in Matthew Emerton's proof. As Terry says, this proof is nice because it
works for finitely generated torsion modules over any PID. In particular, it establishes Jordan canonical form for finite-dimensional modules over $k[x]$, where $k$ is an algebraically
closed field. My objection is that finite abelian groups look easier than finitely generated abelian groups in this question.
The slickest proof of the classification that I know is one that assimilates the ideas of Smith normal form. Ben's question is not entirely fair to Smith normal form, because you do not
need finitely many relations. That is, Smith normal form exists for matrices with finitely many columns, not just for finite matrices. This is one of the tricks in the proof that I give
Theorem. If $A$ is an abelian group with $n$ generators, then it is a direct sum of at most $n$ cyclic groups.
Proof. By induction on $n$. If $A$ has a presentation with $n$ generators and no relations, then $A$ is free and we are done. Otherwise, define the height of any $n$-generator
presentation of $A$ to be the least norm $|x|$ of any non-zero coefficient $x$ that appears in some relation. Choose a presentation with least height, and let $a \in A$ be the generator
such that $R = xa + \ldots = 0$ is the pivotal relation. (Pun intended. :-) )
up vote 32
down vote The coefficient $y$ of $a$ in any other relation must be a multiple of $x$, because otherwise if we set $y = qx+r$, we can make a relation with coefficient $r$. By the same argument, we
accepted can assume that $a$ does not appear in any other relation.
The coefficient $z$ of another generator $b$ in the relation $R$ must also be a multiple of $x$, because otherwise if we set $z = qx+r$ and replace $a$ with $a' = a+qb$, the coefficient
$r$ would appear in $R$. By the same argument, we can assume that the relation $R$ consists only of the equation $xa = 0$, and without ruining the previous property that $a$ does not
appear in other relations. Thus $A \cong \mathbb{Z}/x \oplus A'$, and $A'$ has $n-1$ generators. □
Compare the complexity of this argument to the other arguments supplied so far.
Minimizing the norm $|x|$ is a powerful step. With just a little more work, you can show that $x$ divides every coefficient in the presentation, and not just every coefficient in the same
row and column. Thus, each modulus $x_k$ that you produce divides the next modulus $x_{k+1}$.
Another way to describe the argument is that Smith normal form is a matrix version of the Euclidean algorithm. If you're happy with the usual Euclidean algorithm, then you should be happy
with its matrix form; it's only a bit more complicated.
The proof immediately works for any Euclidean domain; in particular, it also implies the Jordan canonical form theorem. And it only needs minor changes to apply to general PIDs.
I happened to be looking at M.A. Armstrong's very nice book "Groups and Symmetry" today and noticed that he has a proof along similar lines, except that he first minimizes the number
of generators and then the height. Minimizing the number of generators seems to be necessary in order to obtain a splitting where each modulus divides the next. (Perhaps minimizing the
number of generators is implicit in the "By induction on $n$" at the beginning of Greg's proof.) – Allen Hatcher Jan 25 '10 at 22:10
1 I think that it's okay as it is. If you have more than the minimum number of generators, then I think that the arguments yields summands that are just $\mathbb{Z}/1$. – Greg Kuperberg
Jan 26 '10 at 4:01
You’re right, the “just a little more work” takes care of it. – Allen Hatcher Jan 26 '10 at 8:34
add comment
I believe that the Smith normal form is equivalent to the classification of finitely generated abelian groups. Besides, the algorithm can be used e.g. in linear algebra to compute the
generalized jordan form of a matrix. So what is so bad about it? It's a very intuitive algorithm which simplifies the relations step by step.
There are also other proofs of the classification result. Check out Lang's Algebra, Ch. I, § 8 (available at google books). The key result is the following lemma: Let $A$ be a finite
up vote 6 abelian $p$-group and $a \in A$ of maximal order. Then every element of $A/a$ can be lifted to an element of $A$ of the same order.
down vote
edit: This seems to be an elementary formulation of Emerton's proof above.
add comment
I'm not sure what ingredients you are allowing, but here is one proof sketch:
Let $A$ be our f.g. abelian group. Since $\mathbb Z$ is Noetherian, the torsion subgroup $A_{tors}$ is also f.g., and the quotient $A/A_{tors}$ is torsion free, and f.g. (being a quotient of
something f.g.). [As pointed out in a comment, we will later show that $A_{tors}$ is a direct summand of $A$, and so the Noetherianess argument is not actually needed.]
(1) If $A$ is f.g. and torsion free over $Z$, it is free.
Proof: Induction on the dimension of $V := {\mathbb Q}\otimes\_{\mathbb Z} A$ (which is fin. dimensional, since $A$ is f.g.).
If this equals $1$, then $A$ is a f.g. subgroup of $\mathbb Q$, and finding a common denominator shows that it is cyclic. (This is the Euclidean algorithm.)
In general, choose a line $L$ in $V$. If $A \cap L = 0$, then $A$ embeds into $V/L$, the dimension drops, and we are done by induction. (Of course, this actually can't happen, but never
mind; we don't need to prove that here.)
Otherwise, we have $0 \rightarrow A\cap L \rightarrow A \rightarrow B \rightarrow 0,$ and $B$ embeds into $V/L$, so is free by induction, $A/A\cap L$ is f.g. (by Noetherianess of $\mathbb
Z$) and embeds into $L$, so is free by the dim. 1 case. Freeness of $B$ makes this s.e.s split, so $A = A\cap L \oplus B$ is free.
(2) In general, $A = A\_{tors} \oplus \text{something free} .$
Proof: We have the s.e.s $0 \rightarrow A_{tors} \rightarrow A \rightarrow A/A_{tors} \rightarrow 0.$ Part (1) shows that $A/A_{tors}$ is free, and then this freeness lets us split the
(3) Now suppose $A$ is torsion. Its Sylow subgroups are unique (by abelianess, although there are many other ways to prove this too), and all have mutually trivial intersections, to $A$ is
isomorphic to their direct sum.
(4) We have now reduced to the case $A$ is a $p$-power order abelian group. Let $p^e$ be the exponent of $A$, so $A$ is a ${\mathbb Z}/p^e {\mathbb Z}$-module. Choose an element $a \in A$ of
order $p^e$. Then we have ${\mathbb Z}/p^e {\mathbb Z} \hookrightarrow A,$ an embedding of ${\mathbb Z}/p^e {\mathbb Z}$-modules. Sincer ${\mathbb Z}/p^e$ is injective over itself, this
splits. (There are many elementary ways to prove this, or to alter the argument: e.g. apply Pontrjagin duality, which for a group of exponent $p^e$ is just Homs to ${\mathbb Z}/p^e {\mathbb
up vote Z},$ to get a surjection from a ${\mathbb Z}/p^e {\mathbb Z}$-module to ${\mathbb Z}/p^e {\mathbb Z}$, which must then split, the latter being free of rank one; now apply Pontrjagin duality
29 down again to get a splitting of the original sequence.)
Continuing by induction on the order, we write $A$ as a sum of cyclic groups of $p$-power order.
(5) We have now shown that any f.g. $A$ is a direct sum of a free group and of cyclic groups of prime power order. It is easy to rearrange this information to get the classification in terms
of elementary divisors.
Comment: while this may not seem so slick, I think it has the merit that the techniques it uses are elementary versions of standard commutative algebra arguments for analyzing modules over
any commutative Noetherian ring, namely various localization and devissage techniques.
E.g. the preceding argument extends immediately to the PID case. In step (1), one uses the PID property to find a common denominator, rather than the Euclidean algorithm.
In step (3), one observes that $A_{tors}$, being finitely generated and torsion, is annihilated by some non-zero ideal $I$ in the PID $R$, hence is a module over the Artinian ring $R/I$, and
so is the sum of its localizations $A\_{\mathfrak p},$ where $\mathfrak p$ ranges over the finitely many (non-zero, hence maximal) prime ideals containing $I$.
EDIT: If one wants to work more in the spirit of the classification by elementary divisors, and avoid working one prime at a time, one can combine steps (3), (4), and (5) as follows:
(3') Suppose $A$ is f.g. torsion. Let $e$ be its exponent. Then it is a ${\mathbb Z}/e{\mathbb Z}$-module, and contains an element of order $e$. Thus one has an embedding ${\mathbb Z}/e{\
mathbb Z} \hookrightarrow A,$ which must split (either by the injectivity argument of (3), applied now to ${\mathbb Z}/e{\mathbb Z}$, or the Pontrjagin duality argument). Proceeding by
induction, one writes $A = \oplus {\mathbb Z}/e\_i{\mathbb Z},$ where $e_i | e_{i-1},$ as required.
EDIT: Suppose that one wants to prove directly that ${\mathbb Z}/e{\mathbb Z}$ is injective as a module over itself (as Martin asks below): using a standard criterion for injectivity of
modules over a commutative ring, one need just show that for any ideal $I$ of ${\mathbb Z}/e{\mathbb Z}$, any map $I \hookrightarrow {\mathbb Z}/e{\mathbb Z}$ of extends to a map ${\mathbb
Z}/e{\mathbb Z} \rightarrow {\mathbb Z}/e{\mathbb Z}$.
This is easily done: $I$ is of the form $f {\mathbb Z}/e{\mathbb Z}$, for some $f | e$. Equivalently, $I = ({\mathbb Z}/e{\mathbb Z})[e/f]$ (the $e/f$-torsion submodule). The given map $I \
rightarrow {\mathbb Z}/e{\mathbb Z}$ then necessarily lands in $({\mathbb Z}/e{\mathbb Z})[e/f] = I,$ and a map $I \rightarrow I$ can certainly be extended to a map ${\mathbb Z}/e{\mathbb Z}
\rightarrow {\mathbb Z}/e{\mathbb Z}$, as required.
2 Although as with things that only use commutative algebra, it will probably not pass the "slickness" test. – Hailong Dao Jan 16 '10 at 21:00
I think there is a genuine tension between proofs that a professional will like (where professional here may mean <I> professional algebraist</I>!) and ones that are elementary. For
9 professionals, reductions and devissages are easy, natural, and we don't even think of them as real landmarks in the proof; they are just serve as passages between the key points and
ideas. But in writing things out, they can take a lot of words, and seem (as you wrote) mysterious and difficult. I don't know the best way to deal with this tension. – Emerton Jan 16 '10
at 23:06
1 Matt, I wonder whether it is easier to reduce to indecomposable groups to begin with (i.e., use Krull-Schmidt). For one thing, that will take care of uniqueness part of the statement; for
another, you won't need inductive part of the argument. – t3suji 1 min ago – t3suji Jan 17 '10 at 1:31
2 Once it is proved to be a direct summand, it is finitely generated. – t3suji Jan 17 '10 at 2:19
5 I read this proof as "we classify finitely generated sheaves over spec Z; we use the facts that Z is one dimensional to reduce the problem to line bundles classification, and the fact
that the Picard group is trivial to classify locally" – David Lehavi Jan 18 '10 at 10:28
show 6 more comments
I wanted undergraduates in my number theory course, most of whom have had only one semester of abstract algebra [which at UGA means, believe it or not, that groups are not covered at
all!], to have available a proof of the structure theorem for finite abelian groups, so I wrote this up in Section 5 of
In comparison to M. Emerton's argument above (which I upvoted), what I say:
$\bullet$ is considerably more elementary (indeed the point of the entire document is to develop everything you might need to know about finite abelian groups, from scratch)
up vote 12
down vote $\bullet$ does not address the fact that a torsion free f.g. abelian group is free (for this I like Emerton's argument, although I might rather recast it in more elementary language for
my intended audience)
$\bullet$ includes a proof of the uniqueness of the invariant factor decomposition.
Comments welcome, as always.
(As I was enjoying your notes, I noticed a minor typo: on page 12 in the paragraph before "We now begin the proof..." I believe you want H_i \cap H_j = {1}.) – user4977 Oct 24 '10 at
@TS: Thank you. I have fixed it. – Pete L. Clark Oct 24 '10 at 23:40
add comment
Show first the
Lemma. Let $A$ be a PID, $L$ a free $A$-module and $M\subseteq L$ a non-zero submodule. There exist $z\in L$, a submodule $S\subset L$ and $c\in A$ such that (i) $L=\langle z\rangle\
oplus S$, (ii) $M=\langle cz\rangle\oplus (S\cap M)$, and (iii) if $f:L\to A$ is $A$-linear and $f(z)=1$, then $f(M)=\langle c\rangle$.
by picking a morphism $h:M\to A$ with the property that the ideal $h(M)\subseteq A$ is maximal non-zero, $S=\ker h$, $c$ a generator for $h(M)$, and $z=u/c$ for $u\in h^{-1}(c)$ (this
makes sense for $u$ is divisible by $c$), and checking that the claims of the lemma hold.
up vote 2 Next, deduce by induction
down vote
Corollary. Let $A$ be a PID, $L$ a free $A$-module and $M\subseteq L$ a finitely generated submodule. Then there is a basis $\mathcal B=\{e_i:i\in I\}$ of $L$, a finite subset $\{e_
{i_j}:1\leq j\leq n\}$ of $\mathcal B$, and elements $a_1,\dots,a_n\in A$ such that $a_i\mid a_{i+1}$ for all $i$ and $M=\bigoplus_j\langle a_ie_{i_j}\rangle$.
Finally, pick a finitely generated module $M$ over a PID, consider a surjection $\phi:L\to M$ from a finitely generated free module, and apply the corollary to the submodule $\ker\phi$ of
$L$ to describe the quotient $L/\ker\phi$.
add comment
Perhaps the issue is that the classification is not canonical or functorial in any reasonable way, and so any proof of this form must at some point create an arbitrary choice. (In
particular, even though the classification tells us that every finite abelian group is isomorphic to its Pontryagin dual, there is no way to make this isomorphism canonical.) Presumably
there is some category-theoretic way to formalise this issue, though I don't know how to do this. (A related fact, though, is that the above classification breaks down horribly for infinite
abelian groups, much as the Jordan canonical form breaks down for infinite dimensional spaces.)
On the other hand, the special case of the classification for vector spaces over a finite field has the same issue (no canonical choice of basis), and yet doesn't seem to cause the same
up vote amount of dissatisfaction. I guess because here the full complexity of the Jordan canonical form does not emerge.
26 down
vote Greg Kuperberg pointed out to me in this blog post of mine that the Jordan canonical form for nilpotent transformations and the classification of abelian p-groups had essentially the same
proof - in both cases the key is to understand the dynamics of a nilpotent homomorphism, which in the latter case is the operation of multiplication by p. This is perhaps the only "ugly"
part of the whole story (and requires one to manually split a number of short exact sequences, etc.); reducing the Jordan normal form to the nilpotent case, or the classification of general
abelian groups to p-groups, is all very clean and canonical.
add comment
The slickest (nonconstructive!) proof I know of is the one I put in my Group Theory notes, p22. You choose a generating set $x_1,\ldots,x_n$ for the group such that $x_1$ has the minimum
possible order, and then prove that the group is the direct sum of the subgroups generated by $x_1$ and by $x_2,\ldots,x_n$. Now apply induction on $n$ to see that the group is a direct
up vote 36 sum of cyclic groups.
down vote
Awesome. There's nothing like a nonconstructive proof to wake you up in the morning. – Harry Gindi Jan 17 '10 at 19:36
I agree: awesome. Also, for finite abelian groups this is similar to the first proof of the theorem, given by Kronecker in 1870. – John Stillwell Jan 17 '10 at 20:48
This nice argument is in a sense dual to the one that I give above. I choose a generating set such that the quotient $A/\langle x_2,\ldots, x_n\rangle$ is as small as possible, and show
that the subgroup $\langle x_2,\ldots, x_n\rangle$ is complemented. – Greg Kuperberg Jan 18 '10 at 20:26
I like this one, and it seems to generalize cleanly to finitely generated modules over a PID: Consider the partially ordered set $S$ of ideals of the form $\mathrm{Ann}(x_1)$, where
$x_1,\dots,x_n\in M$ ranges over generating sets of size $n$. Choose a generating set so that $\mathrm{Ann}(x_1)$ is a maximal element of $S$, and proceed in the same way. – Charles
Rezk Oct 14 '12 at 17:00
I think this argument is also the same as the one given by R. Rado: see MR0042406 Rado, R. A proof of the basis theorem for finitely generated Abelian groups. J. London Math. Soc. 26,
(1951). 74–75; erratum, 160. – Dan Ramras Oct 27 '12 at 20:12
show 1 more comment
Not the answer you're looking for? Browse other questions tagged ac.commutative-algebra gr.group-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/12009/is-there-a-slick-proof-of-the-classification-of-finitely-generated-abelian-group?answertab=oldest","timestamp":"2014-04-18T00:41:06Z","content_type":null,"content_length":"111300","record_id":"<urn:uuid:a539907d-0eb3-4cc1-b021-7f52ae263971>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I Want to Teach Forever
My students had a lot of trouble with
applying proportions and ratios to geometric figures and other problem situations
, and I believed that the problem was in their inability to visualize the idea of scaling something up or down. I gave innumerable examples, of course, but being able to see in your mind what is
described in a word problem is probably the most difficult skill needed to master Algebra I and pass the
9th grade math TAKS test
(our state standardized test). They also had trouble doing measurement problems.
So I came up with a simple idea last year:
why not have the students measure themselves
and figure out the dimensions of a giant statue of themselves? They would have to make measurements, use measurements, and understand and solve proportions and ratios. Students would have to get up
out of their chairs to make the measurements, do their own calculations with their own numbers, and would be forced to visualize a real life proportion problem.
After a successful trial during tutoring last year, I brought back the idea for this year's class. The only supply needed is
(ask the science teachers). Here's the plan:
1. I introduced the activity by asking students to imagine that it's the far future, and they're all doctors, lawyers, engineers, scientists, and Presidents of the United States. They're so famous
that their hometown wants to honor them with a giant statue in the middle of town. In order to make the statue lifelike, they need to use proportions to make sure that you don't look like a
telephone pole or worse, Eric Cartman.
2. Students make three measurements (in inches): height, shoulder width, and shoe length. They have to work together to get the measurements done quickly and easily.
3. Optionally, they can measure a partner to give them something to compare their measurements to.
4. Students return to their desks to make calculations; the teacher gives examples and then monitors independent work.
The questions are rather straightforward but require multiple calculations. I also included at this point in the lesson the idea of shrinking my students down to action figure size. Here are the
1. If we wanted to build a 50 ft tall statue, how wide and long would it be?
2. If we decided the statue would be a ratio of 5:1 bigger than you, what would the dimensions be?
3. If we wanted to make a 4 in tall action figure, how wide and long would it be?
4. If we were making an action figure that was a ratio of 1:10 smaller than you, what would the dimensions be?
As an assessment you could give them standardized test-style questions with similar triangles (and other polygons) and word problems requiring application of proportions and ratios. I included a
similar question on our
weekly quiz
which is linked in the resources at the end of this post.
Pitfalls and suggestions:
• When I originally tried this activity, I displayed the questions via document camera (fancy replacement for an overhead projector) with the idea that I would write in examples and how to set up
the proportions and ratios so we could do it together. I lost their attention after so effectively gaining it with the initial part of the activity. I corrected this in later classes by including
a clear graphic organizer.
• While getting them out of their desks once and a while is a good idea, you need to manage the activity effectively. Make sure you have enough yardsticks for pairs or groups of three. Give them a
time limit and hold them to it. Most importantly, make sure their are no yardstick fights!
• Many of my students had trouble setting up proportions themselves because they wanted to do something with all three dimensions at the same time. Thus, it might make sense to start with a
two-dimensional idea (giant painting, miniature cut-outs) and then add the third after you are sure they have mastered it.
• This activity is meant to be done after you have taught solving proportions, what ratios are and converting ratios to fractions.
• This can be a very difficult concept to grasp, and thus this activity might be better suited for later in the year when you are targeting students to help them pass standardized and end-of-year
Statue of Me documents available at The TeachForever Notebook:
1. Statue of Me initial activity - introduction and questions for display on overhead
2. Statue of Me Follow-Up Activity - Complete activity with graphic organizer that helps spell out what to do with their measurements
3. Weekly Quiz - Covers proportions, ratios, rates and percents. Included is an example problem connected to the Statue of Me activity.
I think this idea could use a lot of work, so I appreciate any suggestions and ideas you have. Please post a comment or
email me
Today I was lucky enough to bring a group of 5 students to HESTEC Robotics Day at the University of Texas-Pan American in Edinburg, TX. HESTEC stands for the Hispanic Engineering, Science and
Techonology Conference, which is now a week long event. This year's event was big enough to draw Speaker of the House Nancy Pelosi (D-CA), who spoke glowingly of the conference's accomplishments over
the years and the beneficial work of local Congressman Ruben Hinojosa (D-TX). We saw some of her speech via video, but we were whisked away to our day-long project shortly after she began.
The students were given a huge 600+ piece K'NEX EDUCATION Solar Energy System
I saw immediately how excited and engaged these students were. As soon as we began they were talking about aerodynamics, how engines work, and how different weights and designs would effect the final
product. After 2.5 hours of work inside a campus fine arts auditorium, however, our car was only partly finished. We headed out to the campus track to test and fix our car.
As per usual in south Texas, it was blistering in the midday sun (my face is an enchanting shade of red) and our car was not ready. The kids never gave up--there were plenty of teams that threw in
the towel early, but these kids worked hard to figure out how to get it working.
The first of 3 trial runs was a disaster. The car didn't move an inch. When the trials started, the time in between to make adjustments was short, so much so that the kids were willing to skip their
second trial to completely redo the car to make it work. They tore it all apart, moved the engine around and changed entirely the way it made the wheels spin.
But this time, it worked.
There was almost no time for testing--one dry run was it before it was time for our third and last trial. The rules stated that the top ten times of all trial runs would make it to the finals. I had
been running back and forth between the other trials and their diligent work, and it became clear that very few of the cars were even reaching the finish line. This meant that with one good run we
had as good a chance as anybody to make the top ten.
And so there I was, sweating and trying to calm my restless leg, feeling useless and helpless because I couldn't do anything to help them besides tell them that they still had an excellent chance.
Once the car started, it just kept going, reaching the finish line successfully in about 17.6 seconds. The kids were ecstatic for simply making a successful run, as they too had seen very few cars
make it at all. When the results came in, we didn't just make it by a thread. We rocked the house: the 3rd fastest trial of the 40+ teams out there. I could barely contain myself that not only had
they persevered and made the finals, but had a legitimate shot at placing and going home big winners.
Did I mention the prizes at stake?
1. 1st Place: Laptops for every student
2. 2nd Place: iPods
3. 3rd Place: iPod Nanos
Although these had piqued their interest at the beginning of the day, it was clear this was the farthest thing from their minds. They wanted to win, but not as much as they wanted it to work as it
had in that last miracle run. As the race began, our hearts collectively stopped, and almost as soon as they did they resumed again, for it was all over too soon.
I could rehash the griping about the inconsistent enforcement of the rules and the underhanded tactics of other teams that I made to the people in charge after our miracle run ended, but the truth
remains: in what could have been their finest hour as would-be engineers, the car once again wouldn't work. Even if the team that went on to win first place had been dealt with appropriately, it
wouldn't have changed our outcome, and that was the hardest thing to accept.
Except that it was only me who had trouble accepting it. My students were disappointed, but still riding the high of success in the face of failure, of triumph on what could have been dismissed as
just another day off from school. They saw and heard my outrage, and they understood it, but their spirits were never crushed. I realized that they had accomplished everything I could have hoped for
and more. In their words:
• "At least we had fun!"
• "I learned never to give up."
• "We can come back next year, right?"
• "When's the next competition?"
• "Sir, are you going to have the kit? Because I want to build another one."
With our great success on Robotics Day, now my administration is suggesting I start some sort of engineering club. I hope I'm lucky enough to be able to.
I have always used games to review for tests and quizzes--they make the often painful work of reviewing fun, easy and memorable, they help break up a sometimes boring routine, and they can make your
students excited about coming to class. Last year, I developed a version of the common basketball review game. The setup and rules are simple (see the picture below as well):
1. The desks are arranged into two groups (3 desks deep) facing a wide center aisle. The hoop is at the end of the aisle.
2. The class is split into two teams (one side vs. the other).
3. One student from each team comes to the board to complete a question, whoever answers it correctly first gets a shot for their team.
The twist I added to the versions I've read about is that I thought it would be fun to use one of those giant inflatable basketball hoops, such as the Sportscraft Monster Basketball Set you see in
this picture from last year. I can't stand the idea of using a garbage can as a "hoop" and wads of paper as a "ball" as this basketball review game (via About.com) calls for--my students would be
insulted if I tried to pull that trick. I wanted it to be as authentic as possible without leaving the room.
I looked around at the local big-box retailer and saw many versions of giant outdoor hoops. The one I actually preferred, which looked like those round ones that you usually see floating in pools,
was $60. I have been trying to cut back on the whole spend-thousands-out-of-pocket thing and this was simply too much for something I could only use sparingly. When I found the Monster Basketball Set
for $20, I picked it up immediately. It's a little over 6' tall and the ball is 16" in diameter, big enough to make an impression but small enough to fit in the room.
Students took shots from the front of the room (near the first row of chairs), which made the shot difficult but not impossible (due to the ceiling, you had to throw it straight or underhand in order
to make it). When we played last year, the games were always low scoring (2 or 3 points total) even when we plowed through a lot of questions and the students took a lot of shots.
I just ran this game again this year, and this is my advice for running it smoothly in your classroom:
• Make sure that while the two (or more) students are competing for a shot up at the board that everyone else is doing the same work. The easiest way to do this is to inform your kids you'll be
collecting all of the work on all of the problems we did a the end of class, and since you would review each answer there would be little excuse for students not having complete work and answers
for each problem.
• The game play allows consistent opportunities for the teacher to explain common mistakes and reteach difficult items by design. I usually confirm a winner, let them take their shot, and then
discuss what the winner did right and what (if any) mistakes the other player had made.
• This game is ideal for easier content that only requires memorization (the lower levels of Bloom's Taxonomy), although it can be used for concepts that require multiple steps and require higher
order thinking (it just may take longer and you won't be able to complete as many questions).
• Depending on your students' level of confidence on the topic being reviewed, you can choose to give them the problem first with a chance for them to work on it before coming to the board (which I
did today for the challenging topic of solving two-step equations and inequalities) or keeping the questions a surprise until they are already waiting at the board (which I did last year when I
was trying to get them to visualize and sketch linear equations without a calculator). The latter is better when you are focused on the type of easy material I described above.
• As with any game, you need very good classroom management in order to keep everything under control. If you have problems with vandalism, or don't believe your students can handle this without
hitting things or each other with the ball, don't even think about using this game.
The game keeps the kids engaged and while they can easily get overexcited, in a well-managed classroom you should be able to tell them the alternative will be the most boring thing you can think of,
and the mere idea of that will keep them focused.
Unfortunately, my hoop did spring a leak after repeated uses last year and being stuffed into a box over the summer this year. I couldn't patch it (I didn't keep any of the patching material included
with the game) even with a ton of duct tape and spent too much time inflating it repeatedly throughout the day. Alas, this game will have to go on hiatus until I can get another (or better) hoop.
I am extremely happy to report that the grades on today's weekly quiz, covering all the material reviewed yesterday in the game, are excellent. My students made a huge jump in comprehension and
retention this week compared to how they did on similar quizzes the last 2 weeks.
[Update 4/21/10: This idea and many others are part of my recently updated book Ten Cheap Lessons: Second Edition ($9.95 paperback, $2.50 digital).]
Recently I wrote here that on the first day of school, I had read quotes from end-of-year surveys from last year to my new students. My students had answered the question: "If you knew someone in the
8th grade who was going to be in this class next year, what would you tell them?" I also left a space for them to write whatever they wanted, which also garnered interesting answers.
I think it's important for all teachers to ask these sorts of questions and to look back at student responses periodically to both inspire us to keep going and tell us what we need to do to do well
this year. I wanted to share some of the responses that a question like this can get:
To not sit next to his friend and put attention when the teacher is talking and ask questions if you have problems.
To follow instructions and try the best they can do cause when I heard Algebra I said I think I am not going to pass that class, but I did.
To be very responsible for all the worksheets she does because they're all for a grade even when there is a substitute! ...It has been great as you know you helped me succeed to the next level 10
th I passed my Math TAKS Thank you for teaching me for reaching my goal!
No, don't, you should consider flunking and staying in 8th grade another year LOL! JK!
Yes thank you for all the help you have given me. I really appreciate all the time you gave up for us. Seriously I would of probably failed if you showed us you didn't care. Cuz then I would not
care either.
To pay attention and to do your work and take notes and you will pass the class with no problems... this year was good and I learned a lot and it was the first time I got commended on the math
I would tell him/her that you were a cool teacher, but not to joke around too much because there is a time & place for that. Also he/she would learn a lot from you.
I would tell them to do there work because the teacher is really badas* and you will learn a lot if you pay attention.
Always pay attention + please try real hard not to piss him off cuz then you + the sir will have a BAD DAY... thank you for always being there for me + actually caring for me + teaching me (not
like the other math teachers I've had before)
I would tell him/her that its better if you pay attention since the beginning of the year because if you don't pay attention and don't respect the teacher with Mr. D you are not going to pass
Mr. D this year has been great as you as my math teacher becaus I asked my other friends what have they learned in there other math class and they said nothing at all!!!!!
This is the stuff that keeps me going. I see the same themes every year in their responses--thank you for caring, I learned a lot--and that all the hard work we did was appreciated and had a positive
I'm not posting this to brag. In fact, after seeing some recent poor test results of my current group, I need to see this to build my confidence back up. While I was a Teach for America corps member,
a common punchline to our jokes was "...and that's why I Teach for America!" But the truth is having an impact like I think I did last year is why I got involved in the first place, and why I will
continue teaching. Nevertheless, though I may seem confident in my ideas I am constantly questioning my ability; I see myself making mistakes I shouldn't be making in my 5th year, and I wonder
whether I'm doing a good job at all.
I do hope to give some inspiration to others that your hard work and dedication to your students is going to pay off, and that you continue in the noblest of fields.
Friday was draft day for our Fantasy Football and Mathematics project. Unfortunately, drafting took far longer than I anticipated. We could have finished with the entire period devoted to drafting,
but they were having trouble with the short weekly quiz we did beforehand and took far longer than expected.
After hinting about fantasy football since the first day of school, last Friday I formally announced our project and asked students to start gathering information. I provided a copy of the roster
sheet they would need to fill out later so they could take notes during Week 1 of the NFL season. I listed the many ways they could find information about players without watching games (which,
understandably, some students didn't want to do):
• Watch highlight shows on local news, ESPN (NFL Live, NFL Primetime, Sunday NFL Countdown, Monday Night Countdown), and FSN
• Go to ESPN.com, CBSSportsLine.com, NFL.com, SI.com for ideas
• Type in "fantasy football" in any search engine
• Ask a friend or family member who knows/watches a lot of football
• Read one of the 9 different fantasy football magazines I had purchased
• Alternatively, wait until Monday and Tuesday and read our local newspaper, The Monitor, in class (free copies are delivered daily through the Newspapers in Education program)
Most importantly, I emphasized that you don't need to know anything about football to play the game. The great equalizer is the $40 million salary cap. This means that even if you know all the best
players in the league, you can't afford to have all of them on your team. Also, students are allowed to pick the same players if they want to, so the background knowledge necessary for your average
online FF league isn't needed.
I had to guide many students past the frustration of not knowing anything about football by emphasizing that point--if all else fails, they could do just fine picking anyone that could fit under the
cap. Even if you want to see pictures of players so you can fill your roster with the hottest guys (as some of my students are doing) you can. You'll probably have no more or less success than anyone
The other side is that even when they did excellent research, as one Pre-AP student did, they might come up with a roster no one (in fantasy or real life) could ever afford:
1. QB - Peyton Manning, IND
2. QB - Tom Brady, NE
3. RB - LaDainian Tomlinson, SD
4. RB - Steven Jackson, STL
5. RB - Frank Gore, SF
6. WR - Marvin Harrison, IND
7. WR - Terrell Owens, DAL
8. WR - Chad Johnson, CIN
9. WR - Steve Smith, CAR
10. K - Adam Vinatieri, IND
11. K - Robbie Gould, CHI
12. DEF - Chicago
13. DEF - San Diego
I told her she had done an amazing job assembling a great team, but the salary cap was going to make her team a little less like a Pro Bowl roster and more like the Kansas City Chiefs. Based on
Fantasy Football and Mathematics' player values, this team would probably cost 2 or 3 times the cap. For example, the 2 most expensive players, Tomlinson and Jackson, cost about $22 million together
(and you'd still have to fill 11 other positions).
Even though no class finished filling out their rosters, this was a great start because now students have a jumping off point. Many were excited to go home and do "research" over the weekend. Some
students told me they had a sibling or cousin who they would seek out this weekend to help fill out their rosters, and I'm glad to both get them excited about school and foster some family bonding.
They must have their roster set by next Friday, in time for Week 3. I expect that students will pick up on how good or bad their team is fairly quickly and be scrambling to make adjustments as the
weeks go on. After Week 3, they will learn how to calculate their points. Later, I can incorporate several activities and ideas from the FSM curriculum to extend the project into other areas.
So here's my list of what's needed for draft day:
1. Build pre-draft buzz - Start talking about it ASAP, discuss the information gathering suggestions above, start bringing in newspapers and magazines
2. Player values printouts - Free with the FFM Teacher's GuideFFM Student Workbook
3. Handout of FF Description and Rules/Fantasy Roster - Again, you can get this from the Teacher's Guide or Student Workbook. This describes the basics including the salary cap.
4. FF related magazines - I found 9 different titles out there, but they are expensive (usually a $7+ cover price)
Optionally, Internet access would be helpful so that students could find up-to-the-minute information.
If anyone else reading the blog is using this system, I'd love to hear about it. Please leave feedback or email me!
Yesterday in class we played SINK THE SIR! (a play both on what kids call male teachers in the RGV, "the Sir," and on the old movie Sink the Bismark!), a version of the game Battleship . The main
objective is for students to know how to identify and graph points correctly and to learn the necessary vocabulary:
• x-axis, y-axis
• x and y coordinates, points, ordered pair
Last year, I found a lesson plan on Education World called Play Battleship on Graph Paper. The only real difference between the real game and the lesson was that the original's grid was replaced with
a coordinate plane. I followed this lesson closely, having the students play against each other. The students certainly had a lot of fun, and got familiar with the coordinate plane, but the
objectives were totally lost. I did a poor job of explaining the directions and making it easy to do; the result was that students were graphing points incorrectly and confusing each other. Some
students were just fooling around because the student vs. student design made monitoring difficult. Later in the year, far too many students still had trouble graphing points accurately, which could
be traced directly back to the game.
I was determined to fix the problems and make "Battleship" work for my classroom. The first thing I did was change the game to a whole-class activity: teacher vs. students, sink "The Sir" before he
sinks you.
Each student had a graphic organizer with directions, a table for coordinates fired at me and fired at them, and a coordinate plane to keep track of their ships. On the whiteboard I had the plane
where everyone could keep track of when they hit or missed "the Sir". After labeling parts of the graph (x and y axes, origin and quadrants), the rest of the game remained the same. The slightly
tweaked lesson design allowed me to make sure the objective was covered thoroughly because I could come back to the key points easily:
1. With each shot made, I could ask students to identify the quadrant or axis where the point was located.
2. I could also give them multiple options for the location by pointing at my coordinate plane to check that they understand how to read ordered pairs, i.e. knowing the difference between (-2, 3)
vs. (3, -2) vs. (2, -3)
3. I focused on points on the x-axis and y-axis, which the students always mix up.
4. I was able to give hints to check their understanding, i.e. "One of my ships is located along the y-axis" or "I have a ship in Quadrant II" and then see if the next student fired at the right
5. I also connected the game to graphing linear equations by having my aircraft carrier located along the line y=x (a parent function), discussing how to figure out where that line would be and then
aiming at points along it.
The possibilities are endless for how many directions and how much material you can cover in this activity. Since the teacher directs the activity and the students are engaged (because they love
competition and love beating you even more), it's easy for you to steer them towards many different objectives. You could easily incorporate:
• more linear equations
• domain and range
• linear inequalities (identifying points that satisfy a linear inequality is a common question on our state standardized math test, like #51 on this released test)
• transformations
• quadratic and absolute value equations
You could do this game for half of the class period (as I did due to time constraints), but you wouldn't need more than a regular 45-55 minute period to complete the game and have students complete
follow-up practice questions. The students did excellent on practice problems today, so I think this year's version of the game was a great success (time will tell if these concepts remain with them
throughout the year).
If you decide to use this lesson, I recommend you change the domain and range of the graph -4 to 4 instead of -5 to 5. The latter was a bit too big and made it take a little longer than necessary for
us to sink each other's ships. You could also adapt this to return to the student vs. student format, but I think you then miss out on the possibilities I wrote about above.
It also helps to look the part; I wore a $7 captain's hat from the local costume shop and taped "CAPTAIN" on my school ID. You could also cue up some video or audio clips of torpedoes firing and
ships exploding for dramatic effect. Have fun with it--then your students will too!
UPDATE: Check out the revised 2008 edition of this game!
I tested out the Like Terms card game in class today with fairly positive results. It took me a few periods to work out the best way to introduce the game and explain the rules, requiring a lot of
adjustment on the fly, but by the end of the day things were going swimmingly.
I made a false assumption that many students had played a variation of rummy or any card games where you take a card, make a play, and discard. However, beyond Uno or poker, many students were
unfamiliar with this style of game. Thus, I had students following the rules to make groups of 3 or 4 like terms (which is good) but through varying methods of obtaining cards.
As far as knowing which cards go together (and thus which terms can be combined), I think there was no problem--that part of the objective was clear even in classes where the rules of the game were
confusing. What was lacking in the classes earlier in the day was a better explanation on how to score the game--taking the tally of each group of cards and turning it into a longer expression.
I realized that I needed to walk everyone through a turn or two to get things started. At first, I would explain the rules myself, showing examples from the decks and referring to the Like Terms
rules and scoring guide I had written before arranging them into groups or having the cards dealt. This was a significant mistake on my part that gave some students in earlier classes quite a bit of
Later in the day, I made sure the groups were formed and cards were dealt before I explained anything. Then I had the first person in each group complete a turn along with me walking the whole group
through it, so each group saw a clear example and knew how to proceed. I held off on the scoring until a few groups had ended the game, and using the score tracking sheet I had printed on the back of
the directions, wrote out a full example to show them when and what to add or subtract.
To clear up the morning confusion and review for everyone who did get it, I filled out a score sheet with 2 sample scores, one from a winning hand and one from a losing one, and will ask them to
total up the scores properly to start tomorrow's class. I told students all day that as long as they understood and remember the main idea of the game--which terms are like terms and how to simplify
expressions containing them--it doesn't matter if the rules of the game themselves were confusing.
So if you plan on doing something like this, be sure to:
1. Arrange students into groups and distribute cards first.
2. Walk students through the first turn (or more if needed).
3. Understand that it may take time for students to grasp the rules of the game (but probably not the concept).
4. Constantly monitor and be prepared to walk groups through the game procedures to get them to the point where they can play on their own.
5. I recommend having groups play through no more than 2 full games.
6. Design a closing activity, even as simple as 1-5 sample questions on the overhead, to end the lesson.
7. Follow up the next day with a filled out sample score sheet (you can borrow ones from students that are correct and cover the totals OR take one that is incorrect and have the students identify
the mistakes) and related simplification activities (we will be doing the ever-present "Find the perimeter of the polygon" where the sides are labeled with algebraic expressions.
|
{"url":"http://www.teachforever.com/2007_09_01_archive.html","timestamp":"2014-04-18T08:02:16Z","content_type":null,"content_length":"166221","record_id":"<urn:uuid:44619e08-58f1-4ea1-8184-8deec591c7cc>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Center City, PA SAT Math Tutor
Find a Center City, PA SAT Math Tutor
...I found little fulfillment in the business world especially because I didn't believe I was having a strong positive impact on society. This is not to say that I did not have a positive impact
because I did have a hand in creating some amazing machines that were used for medical services, civilia...
16 Subjects: including SAT math, Spanish, calculus, physics
...I enjoy helping students identify the easiest path to a solution and the steps they should employ to get there. My first teaching job was in a school that specifically serviced special needs
students. Each teacher received special training on how to aide students with a variety of differences, including ADD and ADHD.
58 Subjects: including SAT math, chemistry, reading, biology
...Areas of instruction include: SAT Writing SAT Critical Reading SAT Math GRE Verbal GRE Quantitative Reasoning GRE Analytical Writing MCAT verbal GMAT Praxis I Praxis English Subject Area TEAS
Reading and writing skills College application essays Algebra I and II Geometry Trigonometry Pre-calcul...
47 Subjects: including SAT math, chemistry, reading, English
...I have prepared high school students for the AP Calculus exams (both AB and BC), undergraduate students for the math portion of the GRE, and have helped many other students with math skills
ranging from basic arithmetic all the way up to Calculus 3 and basic linear algebra. In my free time, I en...
22 Subjects: including SAT math, calculus, geometry, statistics
...I have experience tutoring math at the levels of pre-algebra through calculus, and would also be able to tutor probability, statistics, and actuarial math. I graduated with a degree in Russian
Language, and spent a full year living in St. Petersburg, Russia.
14 Subjects: including SAT math, Spanish, calculus, geometry
Related Center City, PA Tutors
Center City, PA Accounting Tutors
Center City, PA ACT Tutors
Center City, PA Algebra Tutors
Center City, PA Algebra 2 Tutors
Center City, PA Calculus Tutors
Center City, PA Geometry Tutors
Center City, PA Math Tutors
Center City, PA Prealgebra Tutors
Center City, PA Precalculus Tutors
Center City, PA SAT Tutors
Center City, PA SAT Math Tutors
Center City, PA Science Tutors
Center City, PA Statistics Tutors
Center City, PA Trigonometry Tutors
Nearby Cities With SAT math Tutor
Belmont Hills, PA SAT math Tutors
Cynwyd, PA SAT math Tutors
East Camden, NJ SAT math Tutors
Lester, PA SAT math Tutors
Middle City East, PA SAT math Tutors
Middle City West, PA SAT math Tutors
Oakview, PA SAT math Tutors
Penn Ctr, PA SAT math Tutors
Penn Valley, PA SAT math Tutors
Philadelphia SAT math Tutors
Philadelphia Ndc, PA SAT math Tutors
Verga, NJ SAT math Tutors
West Collingswood Heights, NJ SAT math Tutors
West Collingswood, NJ SAT math Tutors
Westville Grove, NJ SAT math Tutors
|
{"url":"http://www.purplemath.com/Center_City_PA_SAT_Math_tutors.php","timestamp":"2014-04-18T18:35:34Z","content_type":null,"content_length":"24465","record_id":"<urn:uuid:91b531b6-45a6-4143-bf39-3313155b3498>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Compatible Data Files
A typical data file is shown below for a two-winding, 30-turn to 30-turn transformer on an EC-70 ferrite core with sinusoidal excitation of 150 kHz and 8 A rms in both windings. Both transformer
windings are of the same size. The core is gapless and has a window breadth of 44.6 mm; the bobbin allows a winding area of 41.5 mm by 24 mm high; each of the two windings may then take up a height
of 12 mm. The field geometry is considered in one spatial dimension. Note the code used for the sinusoidal excitation. All generated text appears in green and data entered into the generated data
file is displayed in red. Data that is optional to change appears in blue.
%For an example and help, go to www.thayer.dartmouth.edu/inductor/litzopt or refer to winding data for use with litzopt.%Please replace the question marks with the pertinent data and leave all else
%There are 2 windings in your transformer.
%Resistivity (1.77e-8 is for room temperature copper)
rhoc =1.77e-8; % [ohm-meters]
%Perform optimization on designs with wire sizes ranging from American Wire Gauge strand size (32 to 48 is the default)
awg = [32:2:48]; %
%Average turn length for each winding ... 2 elements are required in this vector.
len =[ 97.3e-3 97.3e-3]; %
%Duration of each time segment ... 100 elements are required in this vector.
f=150e3; % 150 Khz
dt =inc*ones(1,divisions);
%Current at the end of the last interval is assumed equal to the first current value.
%Current values for winding 1 at the beginning of each time segment ... 101 elements are required in this
A=8*sqrt(2); % Waveform Amplitude
%Current at the end of the last interval is assumed equal to the first current value.
%Current values for winding 2 at the beginning of each time segment ... 101 elements are required in this
I(2,:)= -I(1,:);
%Number of turns in each winding... 2 elements are required in this vector.
N=[ 30 30]; %[turns]
%Breadth of the winding window in meters.
gap='No Gap';
The output of Litzopt, run with the above data file, is a set of figures (below), as well as a table of costs, losses, and stranding (below). The first figure is a plot of the current waveforms with
respect to time. The user can examine this figure to ensure that current waveform data has been correctly entered. The next figure generated by Litzopt is the 'optimal design frontier'. Each point on
the figure describes a distinct stranding design with a particular wire size, strand number for each winding, cost, and loss. Designs on the curve yield the lowest loss at the given cost. Improving
the performance of a transformer from a design on the optimal design frontier requires upgrading to a more costly stranding design. All the designs have costs relative the Awg 44 design (i.e. Awg 44
has a relative cost of unity).
Gauge Cost(rel) Loss [W] Number of Strands for Winding 1 Number of Strands for Winding 2
32 0.03059 42.57 5.051 5.051
34 0.04906 27.91 12.53 12.53
36 0.07962 18.56 30.96 30.96
38 0.1327 12.53 76.53 76.53
40 0.2331 8.602 190.4 190.4
42 0.4485 6.084 473.3 473.3
44 1 4.515 1130 1130
46 2.824 3.489 2508 2508
48 10.44 2.744 5253 5253
A typical data file is shown below for a 7-turn to 49-turn gapped flyback transformer core with a triangular excitation of 260 kHz with 11.4 A peak current in the primary winding and 1.6 A peak
current in the secondary winding (further detail is available regarding this transformer here). The integral of B squared values are obtained from an external simulations. All generated text appears
in gray and data entered into the generated data file is displayed in red. Data that is optional to change appears in blue.
%For an example and help, go to www.thayer.dartmouth.edu/inductor/litzopt, or refer to winding data for use with litzopt.
%Please replace the question marks with the pertinent data and leave all else alone....
%There are 2 windings in your transformer
%Resistivity (1.77e-8 is for room temperature copper)
rhoc =1.77e-8; % [ohm-meters]
%Perform optimization on designs with wire sizes ranging from American Wire Gauge strand size (32 to 48 is the default)
awg = [32:2:48]; %
%Average turn length for each winding...
len =[ 3.79e-2 5.08e-2 ]; %[meters]
%Duration of each time segment ... 3 elements are required in this vector
dt = [3.8e-6 .07e-6 3.9e-6];
%Current values for winding 1 at the beginning of each time segment ... 3 elements are required in this vector.
I(1,:)=[0 11.4 0 ];
%Current at the end of the last interval is assumed equal to the first current value
%Current values for winding 2 at the beginning of each time segment
I(2,:)= [0 0 -11.4/7 ]; %[amperes]
%Volume of each winding...
vol=[ 0.0821 0.2480 ].*1e-5; %[cubic meters]
%Number of turns in each winding...
N=[ 7 49 ]; %[turns]
%Integral of Bsquared values from simulation with current in winding 1
% Enter integral of B squared over winding 1
int_B2(1,1,1)=7.3875e-015; %[T^2-m^3 ]
% Enter integral of B squared over winding 2
int_B2(1,1,2)=2.7289e-015; %[T^2-m^3 ]
%Integral of B squared values from simulation with current in winding 1 and 2
% Enter integral of B squared over winding 1
int_B2(1,2,1)= 1.3110e-015; %[T^2-m^3 ]
% Enter integral of B squared over winding 2
int_B2(1,2,2)=3.3920e-015; %[T^2-m^3 ]
%Integral of Bsquared values from simulation with current in winding 2
% Enter integral of B squared over winding 1
int_B2(2,2,1)=1.0290e-014; %[T^2-m^3 ]
% Enter integral of B squared over winding 2
int_B2(2,2,2)=6.4625e-015; %[T^2-m^3 ]
The output of Litzopt, run with the above data file, is a set of figures (below), as well as a table of costs, losses, and stranding (below). The first figure is a plot of the current waveforms with
respect to time. The user can examine this figure to ensure that current waveform data has been correctly entered. The next figure generated by Litzopt is the 'optimal design frontier'. Each point on
the figure describes a distinct stranding design that can be associated with a particular wire size, strand number for each winding, cost, and loss. Designs on the curve yield the lowest loss at the
given cost. Improving the performance of a transformer from a design on the optimal design frontier requires upgrading to a more costly stranding design. All the designs have costs relative the Awg
44 design (i.e. Awg 44 has a relative cost of unity).
Gauge Cost(rel) Loss [W] Number of Strands Number of Strands
for Winding 1 for Winding 2
32 0.03059 5.942 0.5236 0.1585
34 0.04906 3.896 1.299 0.3932
36 0.07962 2.591 3.209 0.9715
38 0.1327 1.749 7.933 2.402
40 0.2331 1.201 19.74 5.975
42 0.4485 0.8492 49.06 14.85
44 1 0.6302 117.2 35.47
46 2.824 0.487 260 78.71
48 10.44 0.383 544.5 164.8
|
{"url":"http://engineering.dartmouth.edu/inductor/litzopt/examples.html","timestamp":"2014-04-19T22:15:46Z","content_type":null,"content_length":"24853","record_id":"<urn:uuid:0f3cd342-14a3-4d1a-8725-f6be6af45484>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Quadrilateral WXYZ is located at W(3, 6), X(5, -10), Y(-2, -4), Z(-4, 8). A rotation of the quadrilateral is located at W�(-6, 3), X�(10, 5), Y�(4, -2), Z�(-8, -4). How is the quadrilateral
transformed? A. Quadrilateral WXYZ is rotated 90º counterclockwise about the origin B. Quadrilateral WXYZ is rotated 90º clockwise about the origin C. Quadrilateral WXYZ is rotated 180º about the
Best Response
You've already chosen the best response.
focus on 1 point, say W. It moved from the 1st quadrant to the 2nd quadrant, therefore moving counter-clockwise Now draw a line from the origin to each point (W and W') what is the angle between
them look like?
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4ef38566e4b0dc507db73613","timestamp":"2014-04-18T21:12:39Z","content_type":null,"content_length":"28102","record_id":"<urn:uuid:5f8f83f5-31e1-47fd-99fc-dff640237bc2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-user] Inverting Complex64 array fails on OS X
Travis Oliphant oliphant.travis at ieee.org
Sun Jan 15 17:05:05 CST 2006
Andre Radke wrote:
>Robert Kern wrote:
>>Accelerate.framework only contains the FORTRAN version of LAPACK, yes. I do
>>believe it contains CBLAS (given the existence of cblas_* symbols in
>>the dylib), though.
>Okay, thanks. In the meantime, I think I have figured out why
>inverting a matrix of type Complex64 failed for me:
>The implementation of linalg.inv() in scipy/linalg/basic.py uses
>get_lapack_funcs() in scipy/lib/lapack/__init__.py to obtain the
>reference to the underlying Fortran functions for performing the
>matrix inversion. get_lapack_funcs() examines the dtypechar attribute
>of the provided matrix to determine whether to use the single/double
>precision and real/complex version of the Fortran functions.
>The dtypechar attribute of my Complex64 matrix was 'G'.
This is definitely the problem. It should be 'D'. 'G' is a complex
number with long doubles.
How did you specify the matrix again? Could you show us some of the
attributes of the matrix you created. I'm shocked that 'G' was the
>This wasn't
>one of the type code chars expected by get_lapack_funcs(), so it
>defaulted to the version of the Fortran functions that take double
>precision real arguments, i.e. dgetrf and dgetri. Consequently,
>linalg.inv() returned only the inverse of input's matrix real part.
>I suspect my Complex64 matrix would instead have required using the
>zgetrf and zgetri Fortran functions (for a double precision complex
>argument) which would have happened if the dtypechar of the matrix
>had been 'D' instead of 'G'.
>Is this actually a bug in get_lapack_funcs() or are my assumptions
>about how this should work simply incorrect?
>Also, is Complex64 the preferred way to specify a double precision
>complex dtype when constructing a matrix?
No. Use numpy.complex128 or 'D' or just simply complex.
More information about the SciPy-user mailing list
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2006-January/006620.html","timestamp":"2014-04-20T02:02:27Z","content_type":null,"content_length":"4896","record_id":"<urn:uuid:17df81db-e068-4f88-8715-ea31c88e796d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Space e#cient algorithms for some graph-theoretic problems
, 2000
"... We clarify the computational complexity of planarity testing, by showing that planarity testing is hard for L, and lies in SL. This nearly settles the question, since it is widely conjectured
that L = SL [25]. The upper bound of SL matches the lower bound of L in the context of (nonuniform) circ ..."
Cited by 23 (7 self)
Add to MetaCart
We clarify the computational complexity of planarity testing, by showing that planarity testing is hard for L, and lies in SL. This nearly settles the question, since it is widely conjectured that L
= SL [25]. The upper bound of SL matches the lower bound of L in the context of (nonuniform) circuit complexity, since L/poly is equal to SL/poly. Similarly, we show that a planar embedding, when one
exists, can be found in FL SL . Previously, these problems were known to reside in the complexity class AC 1 , via a O(log n) time CRCW PRAM algorithm [22], although planarity checking for
degree-three graphs had been shown to be in SL [23, 20].
, 2000
"... It has been known for a long time now that the problem of counting the number of perfect matchings in a planar graph is in NC. This result is based on the notion of a pfaffian orientation of a
graph. (Recently, Galluccio and Loebl [7] gave a P-time algorithm for the case of graphs of small genus.) H ..."
Cited by 8 (2 self)
Add to MetaCart
It has been known for a long time now that the problem of counting the number of perfect matchings in a planar graph is in NC. This result is based on the notion of a pfaffian orientation of a graph.
(Recently, Galluccio and Loebl [7] gave a P-time algorithm for the case of graphs of small genus.) However, it is not known if the corresponding search problem, that of finding one perfect matching
in a planar graph, is in NC. This situation is intriguing as it seems to contradict our intuition that search should be easier than counting. For the case of planar bipartite graphs, Miller and Naor
[22] showed that a perfect matching can indeed be found using an NC algorithm. We present a very different NC-algorithm for this problem. Unlike the Miller...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2749409","timestamp":"2014-04-16T22:41:52Z","content_type":null,"content_length":"14982","record_id":"<urn:uuid:722f2e31-c605-4d77-8afb-185df5021642>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jeu de taquin
From Encyclopedia of Mathematics
The French name of a puzzle game introduced in France by H.E. Lucas. It consists of fifteen wooden squares, numbered from one to fifteen, which can be moved in a
By analogy, this same name has been given by M.-P. Schützenberger (cf. [a4], [a5]) to a graphical representation of the plactic, or Knuth, congruences.
The Robinson–Schensted construction (cf. Robinson–Schensted correspondence) establishes a bijection between a permutation and a pair of standard Young tableaux (cf. Young tableau and [a2]). It is a
fundamental tool in the combinatorics of representations of symmetric groups (cf. [a3]). This construction has been extended to words (i.e. allowing repetitions) by D. Knuth. In the latter case, the
first tableau is no longer standard, but semi-standard (the entries are increasing along the columns and non-decreasing along the rows). Knuth has also shown that two words correspond to the same
semi-standard tableau if and only if it is possible to obtain one from the other by a succession of certain commutations of letters.
Further, A. Lascoux and Schützenberger have established that the equivalence relation generated by this set of relations is compatible with concatenation. The quotient monoid (which is in bijection
with the set of semi-standard tableaux) is known as the plactic monoid. They have also given the fundamental properties of this monoid, in [a1].
The plactic (or Knuth) congruences are the following: Let the alphabet
This equivalence can be transferred to (semi-standard) skew Young tableaux (cf. Skew Young tableau) in the following way. Two skew Young tableaux are equivalent if and only if one can be obtained
from the other by a succession of local transformations corresponding, respectively, to the four plactic congruences:
Figure: j110030a
Under this equivalence relation each class of skew Young tableaux contains exactly one Young tableau. This statement is in fact equivalent to Knuth's theorem. This game of transformation of tableaux
is what is called jeu de taquin.
It can be used to provide an alternative to the original Schensted insertion algorithm as generalized by Knuth. Let
For example, start with
Figure: j110030b
[a1] A. Lascoux, M.-P. Schützenberger, "Le monoïde plaxique" , Non Commutative Structures in Algebra and Geometric Combinatorics, Arco Felice (1978) , Quaderni Ricerca Sci. , 109 , Consiglio Naz.
Ricerche (1981) pp. 129–156
[a2] D. Knuth, "The art of computer programming" , 3 , Addison-Wesley (1973)
[a3] I.G. Macdonald, "Symmetric functions and Hall polynomials" , Clarendon Press (1995) (Edition: Second)
[a4] M.-P. Schützenberger, "Sur une construction de Gilbert de B. Robinson" , Algèbre (1971/2) , Sém. P. Dubreuil 25e année , 1: 8 , Secrétariat Math. Univ. Paris (1973)
[a5] M.-P. Schützenberger, "La correspondance de Robinson" , Combinatoire et Représentations du Groupe Symétrique, Strasbourg (1976) , Lecture Notes in Mathematics , 579 , Springer (1977) pp. 59–135
How to Cite This Entry:
Jeu de taquin. J. Désarménien (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Jeu_de_taquin&oldid=18760
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
|
{"url":"http://www.encyclopediaofmath.org/index.php/Jeu_de_taquin","timestamp":"2014-04-21T14:48:12Z","content_type":null,"content_length":"20837","record_id":"<urn:uuid:13c333f3-9725-4422-b43c-4e83bf660647>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Need help with struc coding
What do you want to achieve?
In which way does your program behave different to your expectations?
forgot to mention, i am suppose to create a program that computes the binomial coefficient.
the program is suppose to compute binomial coefficient, but i do not have enough info to complete the program.
Unfortunately, you are lost enough that you need to go see your professor.
You shouldn't actually need a struct in this program.
You are essentially being asked to compute a slice of Pascal's Triangle.
Your program will need a function to compute the factorial of a number.
As well as a function to compute the slice. There is a very good collection of formulas on Wikipedia for calculating (n k).
You may need some math help.
Good luck!
I probably don't have to , but for the sake of learning it's a must. He is just teaching the principal of it. i know how to calculate the binomial coefficient. I just never done it in code before.
not that i was trying to do something smart, i don't think it was required either. He put the format on the board, i just took as a template. So,I assumed it would be void getbinomial_coefficient.
Last edited on
I didn't mean to sound mean. Sorry.
What I meant was, I assume that you were required to use
void getbinomial_coefficient( data & bc )
instead of
int getbinomial_coefficient( data & bc )
? Because the first is not a good design.
1) I've already answered that question.
2) You obviously need to go see your professor for more specialized help.
3) You are misplacing ';'s.
It looks like this is far as this goes i guess.
I really appreciate your help.
Topic archived. No new replies allowed.
|
{"url":"http://www.cplusplus.com/forum/general/114181/","timestamp":"2014-04-16T04:18:27Z","content_type":null,"content_length":"24305","record_id":"<urn:uuid:12f2f5b9-4c81-4a69-9d57-b0e16aba791f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Swap function using no temp variable
Swap function using no temp variable
i was assigned in my programming class to make a function of the form int swap(&a, &b) that doesnt use a third variable in the swap. i have pondered on this for a few hours and i cant think of
anything. any help would appreciated. thanks in advance
You can do this using bit manipulation, XOR-ing the values. I've
never needed to do this, but I believe it's something like:
Something like that.
>Something like that.
Yea, it's a common trick. There's no way to use it correctly, despite what some will tell you. Use a temporary variable, is it really that much harder?
Bad Practice:
Good Practice
temp = a;
a = b;
b = temp;
Seems just as easy to me, and you don't suffer scary code syndrome.
yeah I've seen this way, but it doesn't always work..
here's another way (simply mathematical)
a += b;
b = a - b;
a -= b;
>yeah I've seen this way, but it doesn't always work..
Of course not, the value of each variable is modified more than once between sequence points, which results in undefined behavior.
|
{"url":"http://cboard.cprogramming.com/c-programming/14552-swap-function-using-no-temp-variable-printable-thread.html","timestamp":"2014-04-20T22:31:35Z","content_type":null,"content_length":"8072","record_id":"<urn:uuid:bda3b165-38ad-4c60-8d5f-7da85bfa504b>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Complex Instruction
In her book
What's Math Got to Do With It?
, Jo Boaler wrote about '
complex instruction
' as a vital part of making groupwork function effectively. What I read intrigued me, so I searched for more detailed information online, but I couldn't find much.
This past week I was able to attend a workshop on Complex Instruction put on by the
Center for Innovative Teaching
, at the Urban School in San Francisco. The workshop was led by Laura Evans, who did a fabulous job of introducing us to these powerful ideas. I'm going to try to explain what I got out of this
two-day workshop, but my head is spinning, so I might miss vital pieces or misrepresent parts of the theory. If you were there, and saw different aspects, please speak up. I was the only college
teacher there, but there were only a few things I had to translate for my situation.
Here's how it works: Start with a group-worthy task, make sure the students are ready in their groups, and understand the roles they're expected to play, give clear and detailed instructions about
how they should work together and what to do when they think they've completed their task. Then, let the math-play (play and work are equivalent in a context like this, right?) begin!
group-worthy task
• is open-ended
• is based on discovery
• is challenging
• requires multiple abilities
• can be represented in more than one way
I'll feel like I really understand this when I can take a task/problem and evaluate it based on these criteria. I'm not there yet. I also need to be able to look at the curriculum we have and decide
what sorts of group-worthy tasks would help the students learn each bit.
'Smart in Math'
Before students work in groups, it's important to help them understand that we typically have many misconceptions about what it means to be 'smart'. Typically, people think that someone who is 'smart
in math' ...
• answers questions quickly
• always gets the right answer
• doesn't have to work at it
But, really, people who are good at math ...
• are persistent
• wonder about relationships between numbers, shapes, functions, ...
• check their answers for reasonableness
• make connections
• are willing to try things out, experiment, take risks
• are resilient
• want to know why
• contribute to group intelligence by asking good questions
• notice and learn from their mistakes
• try to extend and generalize their results
Students may also need to know how synapses (the connections between neurons that are created each time you learn something new) are strengthened by repeated use. A new connection isn't strong until
it's been used:
• multiple times
• in multiple ways
• after a time away
Roles for Group Members
Before the groups dive into a math task, the group members also need to understand the roles they'll take on. Any teacher interested in using these ideas can modify these roles, but the idea is to
give students plenty of coaching in how to work productively, and much less coaching on how to do the math.
• The Facilitator asks if everyone understands what's been said, if anyone has a question, ...
• The Team Captain keeps the group on task, reminds people of how they're supposed to proceed, makes sure everyone's ideas are heard.
• The Resource Manager makes sure all conversations happen in the middle of the table, collects materials from the teacher, calls the teacher over when the whole group has a question, returns
materials, ...
• The Recorder takes notes on the ideas, questions, hypotheses, prepares the group;s presentation paper, makes sure everyone can explain the group's solution.
This list was part of the handout for the problem we were going to work on, which I'll describe next.
The Pile Pattern Problem
We needed to figure out the shapes for piles 5 and 6, and what their areas would be, and then to do the same for the 100th pile. We were also asked to think about the 1st and 0th shape, and if
possible the -1th shape. While we worked on our mathematical task, Laura walked around and took notes on what we said to one another. She came up to us at an opportune moment and said things like,
"Sue, it was really neat when you said 'I was thinking this, but it sounded like you were thinking about it this other way', you made connections between your thoughts and Rachel's." She had a very
specific bit of praise for each of us, related to how we worked
within the group
, to solve the problem.
We all presented parts of our solutions to the whole group (of about 20). We were able to look at the pattern geometrically, algebraically, numerically, and graphically. We had a recursive formula
for the area and an explicit one. We figured out what the 1st shape (#1) would have been, and hypothesized about the #0, #-1, and #-2 shapes. There were definitely some interesting twists to the
Every step along the way, Laura would mention bits about how she'd do this with students. Make sure the group that only got one part gets to go first, have each group after that present one
way of looking at this problem.
Within a group of high school students, each student has high or low social status and high or low academic status. (My question: How is this different among college students?) If someone is quiet,
it's generally because they don't expect their group to be interested in what they have to say, either because of past experience or because of subtle cues from other group members. Laura said,
"Students hesitate to share as a way to hide or protect their status. High or low status is a great barrier to risk taking."
The teacher's job is to change that dynamic in a few ways. She has already told the group very explicitly what each person can do to help. She can also look for ways to 'assign competence' to
students who have low status. If a low social status student has asked a question, she might mention how that was a great risk to take, and how it helped the group. Laura again, "When we raise their
status, we give students excuses to take the risk that they deep down want to take."
I loved this workshop, and I hope to be able to implement some of the ideas. I wish the workshop had been longer. It would have been great to have a chance to practice finding or creating a
group-worthy task, writing up instructions for it, seeing how groups work through it, and responding to the 'students' by commenting on how they're working together instead of offering them math
Edit on 5-30-13: When I wrote this, there were no complex instruction resources online. Now there is
this website
- it looks good.
11 comments:
1. Laura is great! So glad you got a chance to go to her workshop.
2. Thanks for posting this. I've been trying to wrap my head around CI for a few months now (I'm not sure who brought it to my attention) and it's been a bear.
3. What was a bear about it? (For me, just the lack of information. Something else for you?)
4. I gave some thought to the "group-worthy" part of it, so maybe I can help there a bit:
A closed-ended task is problematic for groups, because the fastest person gets the answer and the experience is unrewarding (pedagogically and psychologically) for the rest. Two people with very
closely matched styles and speeds can solve closed-ended problems together... occasionally - and this is hard to arrange and does not scale.
Likewise, direct teaching tasks (as opposed to discovery) are better done either individually ("go watch Khan videos, pause and rewind as needed") or in whole groups where questions of each
person are answered for the benefit of the group. Besides discovery, group tasks can also be art, composition, construction or evaluation. They have to be about students creating/remixing/
sharing, rather than "taking knowledge in."
A group is stronger than individuals, so mundane (non-challenging) tasks are a waste of this resource. Usually, mundane tasks lead to groups quietly falling apart and divvying the tasks for
individual work, anyway.
Unless the group is completely homogeneous (think "attack of the clones"), tasks that require multiple approaches will allow different people to self-select for different roles and activities.
Any challenging task will require multiple abilities, by the way, so the two requirements are redundant as far as problem selection goes. But they help us to understand situations.
Anything can be represented in more than one way, so I am not sure about this last requirement being significant.
5. betweenthenumbers posted a relection on implementing groupwork at her blog, here.
I totally agree with her that it takes courage to try these things. She calls herself 'chicken-shit' at one point, but I'm pretty sure she's braver than I am.
6. @Sue I think specifically it's the group worthy tasks issue. I could devote all my time to trying to find/create these. I've got "Designing Group Work" as my resource. Was there anything else
7. I don't remember any other books being mentioned.
For me, it's both finding groupworthy tasks and having the courage (and skill at bringing students along) to implement it.
For the groupworthy tasks, we need a good site that shows a traditional curriculum for each course, and gives good tasks to use that relate to each 'skill'. I plan to put out what I have so far,
but it's not much.
8. I took a CI class in Seattle and couple years ago and loved it. Since then, I have found that it is challenging to come up with/find tasks that are truly group-worthy. When I do, however, CI
works amazingly well.
The other thing about CI that is fun/challenging to learn/stretch/grow is status. Status is not on most teachers' radar (certainly not mine for the longest time), but once you're aware of it, and
do things like "assigning competence" (I can't remember where I learned about that), I feel that it is really helpful to all kids--not just the low-status kids.
9. I'd love to form some sort of support group for implementing CI. Would you be interested in something like that, Touzel?
10. I love your post on CI. My credential program was heavy on developing CI in the classroom and I have found it to be hugely successful in helping students enjoy math and develop confidence in
their mathematical ability. In particular, I like your discussion of the "habits" of someone who is good at math. I am trying to flush out a list of my own and would love your input. (http://
Also, here are a couple great resources if you haven't already seen them:
1. http://www.amazon.com/Designing-Groupwork-Strategies-Heterogeneous-Classroom/dp/0807733318
2. http://www.amazon.com/Fostering-Algebraic-Thinking-Teachers-Grades/dp/0325001545/ref=sr_1_1?s=books&ie=UTF8&qid=1329497471&sr=1-1
11. Your list looks fine. I was going to introduce you to Avery's work, but I see you've already linked to that.
I think each teacher's list will be different, depending on how the teacher works with students, and that's a fine thing.
Your list is way shorter than Avery's, and if I had a list, it might be shorter than yours. I don't have a list, but I often talk to students about 'making a simpler problem with the same
structure', to give themselves more insight, my favorite advice from Polya.
Comments with links unrelated to the topic at hand will not be accepted. (I'm moderating comments because some spammers made it past the word verification.)
|
{"url":"http://mathmamawrites.blogspot.com/2011/07/complex-instruction.html","timestamp":"2014-04-18T23:15:16Z","content_type":null,"content_length":"131303","record_id":"<urn:uuid:53eb0af3-710e-438f-a931-394e6206f343>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chemistry lab formulas?
2Mg(s) + O2(g) HEAT-- 2MgO(s)
I agree. This reaction is quite exothermic itself after the Mg is ignited, a lot of heat it given off on the products side.
MgO(s) + H2O(l) -- Mg(OH)2(l)
If the Mg(OH)2 is dissolved in the water it should be in the aqueous (aq) state, not liquid.However, I dont think MgO is very soluble in water.
CO2(g) + H2O(l) -- H2CO3(l)
Just like the previous reaction, if the H2CO3 is in solution it should be in the aqueous (aq) state, not liquid.
|
{"url":"http://www.physicsforums.com/showthread.php?t=133856","timestamp":"2014-04-17T12:39:09Z","content_type":null,"content_length":"23366","record_id":"<urn:uuid:24d854f0-68fb-4a76-b7b3-de61ad9f92f7>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Plane Poiseuille Flow of Two Superposed Fluids
This Demonstration analyzes plane Poiseuille flow of two superposed fluids. For a specified channel, the rectilinear flow field is defined by four parameters: two viscosities and and two volumetric
flow rates and . Conservation of mass then determines the location of the liquid-liquid interface in the channel, which can be expressed as a thickness ratio .
The velocity field in each layer is given by , . The velocity is dimensionless with the interfacial velocity , and the coordinate is dimensionless with the thickness of the upper layer . The
parameters and are given by:
, , .
The subscripts 1 and 2 denote the upper and lower fluid, respectively; is the viscosity ratio; and is the thickness ratio. The origin of the vertical coordinate is located at the interface such that
the range of is given by [1].
Vary the flow rate and viscosity ratios to see their effect on the velocity profiles for the superposed flow. For , the velocity gradient is not continuous across the interface, but the shear stress
is necessarily continuous. The value of the thickness ratio is also shown on the plot.
The dependence of the thickness ratio on the flow rate ratio and the viscosity ratio is shown in separate plots.
[1] S. G. Yiantsios and B. G. Higgins, "Linear Stability of Plane Poiseuille Flow of Two Superposed Fluids,"
Physics of Fluids
(11), 1988 pp. 3225–3238.
|
{"url":"http://www.demonstrations.wolfram.com/PlanePoiseuilleFlowOfTwoSuperposedFluids/","timestamp":"2014-04-20T23:26:34Z","content_type":null,"content_length":"46162","record_id":"<urn:uuid:b3230b83-0a38-4993-bdb5-c10db92f98b5>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Learn Concets of Vibrating Sample Magnetometer | Transtutors
Vibration Magnetometer Assignment Help
Vibration magnetometer is used for comparison of magnetic moments and magnetic fields. This device works on the principle, that whenever a freely suspended magnet in a uniform magnetic field, is
disturbed from it's equilibrium position, it starts vibrating about the mean position.
Time period of oscillation of experimental bar magnet (magnetic moment M) in earth's magnetic field (B[H]) is given by the formula
T = 2π √(I/MB[H])
Where, moment of inertia of short bar magnet = WL^2/12 (w = mass of bar magnet)
Use of vibration magnetometer
(i) Determination of magnetic moment of a magnet :
The experimental (given) magnet is put into vibration magnetometer and it's time period T is determined.
Now T= 2π √(I/MB[H]) ⇒ M = 4π^2I/B[H] T^2
(ii) Comparison of horizontal components of earth's magnetic field at two places.
T= 2π √(I/MBH)
Since I and M the magnet are constant
So T^2 ∝ 1/ B[H] ⇒ (B[H])[1]/(B[H])[2] = T^2[2]/T^2[1]
(iii) Comparison of magnetic moment of two magnets of same size and mass.
T= 2π √(I/MB[H])
Here I and B[H] are constants.
So M ∝ 1/T^2 ⇒ M[1]/M[2] = T^2[2]/T^2[1]
(iv) Comparison of magnetic moments of two magnets of unequal sizes and masses (by sum and difference method) :
In this method both the magnets vibrate simultaneously in two following position.
Sum position
Two magnets are placed such that their magnetic moments are additive
Net magnetic moment M[s] = M[1] + M[2]
Net moment of inertia I[s] = I[1] + I[2]
Time period of oscillation of this pair in earth's magnetic field
Ts=2π √(I[s]/M[s]B[s])=2π √(I[1]+I[2]/M[1]+M[2]B[H]) ................ (i)
Frequency v[s]= 1/2p v(M[s](B[H])/I[s])
Difference position
Magnetic moments are subtractive
Net magnetic moment M[d] = M[1] + M[2 ]
Net moment of inertia I[d] = I[1] + I[2]
and T[d] =2π√(I[d]/M[d]B[H])=2π√(I[1]+I[2]/(M[1]-M[2])B[H]) ................ (ii)
and v[d] = 1/2π √((M[1]+M[2])B[H]/I[1]+I[2])
From equation (i) and (ii) we get
(v) To find the ratio of magnetic field : Suppose it is required to find the ratio B/B[H] where B is the field created by magnet and B[H] is the horizontal component of earth's magnetic field.
To determine B/B[H] a primary (main) magnet is made to first oscillate in earth's magnetic field (B[H]) alone and it's time period of oscillation (T) is noted.
T =2π√(I/MBH)
and frequency v = 1/2π √(MB[H]/I
Now a secondary magnet placed near the primary magnet so primary magnet oscillate in a new field with is the resultant of B and B[H] and now time period, is noted again.
Email Based Homework Assignment Help in Vibration Magnetometer
Transtutors is the best place to get answers to all your doubts regarding vibration magnetometer and use of vibration magnetometer for comparison of magnetic moments and magnetic fields. You can
submit your school, college or university level homework or assignment to us and we will make sure that you get the answers you need which are timely and also cost effective. Our tutors are available
round the clock to help you out in any way with magnetism.
Live Online Tutor Help for Vibration Magnetometer
Transtutors has a vast panel of experienced physics tutors who specialize in vibration magnetometer and can explain the different concepts to you effectively. You can also interact directly with our
physics tutors for a one to one session and get answers to all your problems in your school, college or university level magnetism homework. Our tutors will make sure that you achieve the highest
grades for your physics assignments.
Related Questions
• Mod 8 Help 3 hrs ago
Based on the Module 8 Reading PDF file complete the Module 8 Assignment document file.
Tags : Science/Math, Math, Discrete Mathematics, College ask similar question
• HCS/325 HEALTH CARE MANAGEMENT 3 hrs ago
<td width="114" valign="top" style="width:85.5pt;border:solid gray 1.0pt; mso-border-alt:solid gray .5pt;padding:5.75pt 5.75pt 5.75pt
Tags : Science/Math, Earth Science, Others, University ask similar question
• i want to make an assignment of corporate law 5 hrs ago
assignment o corporate law
Tags : Science/Math, Biology, Others, Graduation ask similar question
• If you have 4.25 grams of zinc and 5.0L of 0.3500M hydrochloric acid, how... 6 hrs ago
If you have 4.25 grams of zinc and 5.0L of 0.3500M hydrochloric acid, how many milliliters of hydrogen gas do you produce at 350K and 800mmHg.
Tags : Science/Math, Chemistry, Others, College ask similar question
• CopIf you have 4.25 grams of zinc and 5.0L of 0.3500M hydrochloric acid,... 6 hrs ago
CopIf you have 4.25 grams of zinc and 5.0L of 0.3500M hydrochloric acid, how many milliliters of hydrogen gas do you produce at 350K and 800mmHgy and paste your question here...
Tags : Science/Math, Chemistry, Others, University ask similar question
• The equation C = 5 9 (F - 32) is used to convert between Fahrenheit... 6 hrs ago
The equation C = <td align="center" style="border-bottom-color: currentColor; border-bottom-width: 1px; border-bottom-st
Tags : Science/Math, Math, Algebra, University ask similar question
• As a rocket passes the earth at 0.75 c , it fires a laser perpendicular to... 8 hrs ago
As a rocket passes the earth at 0.75<span style="position: absolute; clip: rect(1.994em, 1000em, 2.832em, -0.543em); to
Tags : Science/Math, Physics, Classical Physics, College ask similar question
• A boy on a skateboard coasts along at 7.0 m / s . He has a ball that he can... 8 hrs ago
A boy on a skateboard coasts along at 7.0<span class="t
Tags : Science/Math, Physics, Classical Physics, University ask similar question
• Unit One (Sets) 8 hrs ago
Original Response 1. Create a set representing a real-life group of "things" and write it in roster notation. The members of the set can be groups such as tools you will use in your fu
Tags : Science/Math, Math, Linear Algebra, College ask similar question
• As a rocket passes the earth at 0.75c, it fires a laser perpendicular to... 9 hrs ago
As a rocket passes the earth at 0.75c, it fires a laser perpendicular to its direction of travel as shown in the figure.What is the speed of the laser beam relative to the rocket?What is the
speed of the laser beam relative...
Tags : Science/Math, Physics, Classical Physics, College ask similar question
more assignments »
|
{"url":"http://www.transtutors.com/physics-homework-help/magnetism/vibration-magnetometer.aspx","timestamp":"2014-04-20T03:10:35Z","content_type":null,"content_length":"85392","record_id":"<urn:uuid:71d648ae-25e9-4141-8d1c-4727b2e5c839>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
|
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/users/tanner23456/asked","timestamp":"2014-04-17T04:03:11Z","content_type":null,"content_length":"107093","record_id":"<urn:uuid:7e630e80-1520-43a3-b81d-54730184a4c3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
|
getting mapscale [Archive] - OpenGL Discussion and Help Forums
04-24-2002, 09:49 PM
I am writting a simple CAD viewer with OpenGL
It draw the graphics objects in a "perspective" scene.
How do I calculate the current map-scale (1:1.000, 1:2.000, ....) of the view ?
Where can I read information?
Thanks in avance
|
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-135324.html","timestamp":"2014-04-19T07:16:21Z","content_type":null,"content_length":"7409","record_id":"<urn:uuid:3bfcdaea-d422-4634-8ccd-91b3b5387529>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math - Blogs
I am excited to share with you some math posters I stumbled upon online and that can be used in your classrooms and with your students. The purposes is not only to decorate the walls of your
classroom but to extend learning to every part of your learning. Educational posters can be a way to communicate certain concepts, teach hard-to- grasp formulae, get students to remember certain
facts by constant exposure to them, and above all arouse their motivation and…
Added by Wanda Collins on October 19, 2013 at 11:47pm — No Comments
Math Centers also called Math Stations, Rotation Stations, Math Labs etc. No matter what you call them, you'll find their the best strategy for reaching the kinesthetic learner. I have found
Implementing Math Centers in my daily routine an effective…
Added by Wanda Collins on August 31, 2013 at 2:37am — No Comments
The idea of throwing a highlighter party to make math fun and engage students in an unconventional way came from Mr. Heath. Make sure you check out his website for more resources and ideas that will…
Added by Wanda Collins on June 15, 2013 at 1:00pm — No Comments
First off, I hope everyone has a great summer!!! During this time, I try to find as many free resources as possible on the web to help other teachers as well as myself for next year. With that being
said, I found a great freebie: Common Core Math SMP posters. You can implement the new Common Core Math Standards with these awesome posters!!! Best of all, each poster has language from Common Core
so you can use them as vocabulary cards for your word wall as well.There…
Added by Wanda Collins on June 11, 2013 at 2:00pm — No Comments
Professional educators, community leaders express concerns about the high attrition rate among experienced math teachers. America's public school system sees thousands of capable math teachers leave
the teaching profession every year as they find more lucrative and less stressful opportunities in the private sector. The extent to which high-performing math teachers leave the profession differs
from district to…
Added by Wanda Collins on May 2, 2013 at 3:00pm — No Comments
Added by Wanda Collins on February 27, 2013 at 11:40am — No Comments
I found these awesome math games from DigitalLesson.com and my students absolutely love them. Best of all, they are free. These games are engaging, fun, and most importantly educational. They will
definitely reinforce math concepts previously learned and students won't realize they are actually working because…
Added by Wanda Collins on February 11, 2013 at 5:55pm — No Comments
The following are a list of mental math teaching ideas:
Added by Wanda Collins on February 8, 2013 at 10:39pm — No Comments
These fun math fact cross "word" puzzles from EducationWorld.com are excellent for math centers, enrichment activities, and the like.
Below is a list of math skills practiced in each Math Cross Puzzle:…
Added by Wanda Collins on February 5, 2013 at 2:32pm — No Comments
Recent standardized test scores indicate that fewer than 40% of students in the 4th and 8th grades show proficiency in math, and, according to a Governmental Advisory Panel, the situation does not
seem to be improving on its own.
For many years math educators have emphasized that math must be taught in an interactive and engaging manner that allows…
Added by Wanda Collins on November 26, 2012 at 2:16pm — No Comments
Think you have what it takes to win a math competition? We have compiled a list of math competitions you can enter below. I'm sure there are plenty more out there floating around in cyber space. You
can also check your local library and ask a librarian to assist you in finding competitions you can enter in your town or state and even your principal might have some suggestions. *Make sure you get
your parents permission before entering these…
Added by Wanda Collins on November 5, 2012 at 3:00pm — No Comments
Geometry Bingo: Teach your students all about shapes with a bingo game. Players have to find a variety of lines, 2 dimensional shapes and solid shapes to win the game…
Added by Wanda Collins on October 3, 2012 at 2:16am — No Comments
Tic Tac Toe and Math Facts Practice. This is a great game for 2 players. The pictures below show (smaller than actual size) what the game boards will look like when printed. …
Added by Wanda Collins on October 3, 2012 at 1:00am — No Comments
This year I am using manipulatives to engage my math students and have noticed they not only discover the concepts and its relevancy to the real world but they actually find math fun!! Several
students have written personal notes to me expressing their excitement and newfound love for math. One student wrote me a poem: “Roses are red violets are blue math is so fun that you would like it
too. I feel so happy and so glad that I am ready to fight the FCAT." I was thrilled and over joyed when I…
Added by Wanda Collins on September 30, 2012 at 3:28pm — No Comments
Mathematics Word Walls are to be active and built upon. Words are to be posted as they are introduced in the day’s lesson. Mathematics spirals and students need to explore multiple exposures to
important concepts and…
Added by Mrs. Wilson on September 8, 2012 at 12:00pm — No Comments
Math Concentration is hosting an online math video tutorial contest. Math Concentration is a social network that offers free math homework help via forum, chat, instant messaging and a virtual
whiteboard. Math Concentration promotes numeracy, building healthy parent/teacher relationships, and fostering students to become product citizens in society. So register today to upload your math
video tutorial for a chance to win a $200 Visa Gift Card! Remember, winners are chosen by highest vote…
Added by Wanda Collins on July 10, 2011 at 10:30pm — 21 Comments
As a teacher, I am sure you are always looking for resources you can use in your classroom. The internet made it easier for teacher to have access to free and low cost resources from around the
world. I have found one resource called Twitter. You can use Twitter to find Math Resouces with #Hashtags. The #hashtags help to organize Twitter and spread information more efficiently. Most Twitter
users append certain hashtags to tweets about a topic so it becomes easier…
Added by Wanda Collins on July 10, 2011 at 2:26pm — No Comments
Added by Mrs.S on July 6, 2011 at 7:38pm — No Comments
You know the saying that "free is not always better and you get what you pay for." Well, in the world of mathematics, sometimes this doesn't hold true. Free is better not only because it saves you
alot of money but primarily because these programs are useful and does not require an enormous amount of skills like Mathematica and…
Added by Wanda Collins on June 30, 2011 at 5:00pm — No Comments
MomPackPowered By Ringsurf Moms Network CommunityPowered By Ringsurf | Report An Issue MomPackPowered By Ringsurf" > Ringsurf | Terms of Service
|
{"url":"http://www.mathconcentration.com/profiles/blog/list?tag=Math","timestamp":"2014-04-16T07:14:27Z","content_type":null,"content_length":"106752","record_id":"<urn:uuid:16bdbd7b-4526-4430-b43f-8fd0371da77f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebra 2 Tutors
Houston, TX 77089
Smart, fun, easy-to-work-with Tutor with a lot of experience
...I can tutor almost any subject, but my specialties are Math (almost all levels, including Algebra, Trig, Geometry, Pre-Cal, Calculus, College Algebra, Finite Math, and Math Models), Statistics
(almost any statistics course, including regular Stats, Business Stats,...
Offering 10+ subjects including algebra 2
|
{"url":"http://www.wyzant.com/Pasadena_TX_algebra_2_tutors.aspx","timestamp":"2014-04-19T00:22:06Z","content_type":null,"content_length":"61430","record_id":"<urn:uuid:e805ed3f-15c2-4e6e-bccf-5d33d3ce73e0>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gurnee Calculus Tutor
Find a Gurnee Calculus Tutor
...I have taught pre-algebra to many students in the past. This course starts out with basic properties of operations such as associative, distributive, and many more. The course evaluates
expression using an order of operations known as PEMDAS.
11 Subjects: including calculus, geometry, algebra 2, trigonometry
...This was a two year curriculum. I have a Masters degree in applied mathematics and most coursework for a doctorate. This includes linear algebra, modern algebra, mathematical physics, topology,
complex and real variable analysis and functional analysis in addition to calculus and differential equations.
18 Subjects: including calculus, physics, geometry, GRE
...I taught Precalculus and Trigonometry at Loyola Academy, and tutored it a lot. I am a native Russian speaker, born and educated in Russia. I had a perfect GPA in high school, which included
Russian Grammar, Composition, and Russian Literature courses.
9 Subjects: including calculus, physics, geometry, algebra 1
...Next, I focus on vocabulary definitions. I believe it is essential to understand the material language and how to verbally explain processes. Then, I assist in tackling the problems.
31 Subjects: including calculus, chemistry, physics, geometry
...To prove simplicity of trigonometry, diagrams and geometrical analogies are widely used. Usually this gives excellent results: typically my students improve their grades by 1-2 letters after
3-4 sessions on average. I have tutored differential equations to college students for about 10 years.
8 Subjects: including calculus, geometry, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/Gurnee_calculus_tutors.php","timestamp":"2014-04-18T15:52:21Z","content_type":null,"content_length":"23666","record_id":"<urn:uuid:dc7fdc16-bf69-47f6-857e-fd582809dbaa>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
|
how to calculate the N to the power of N
Author how to calculate the N to the power of N
Ranch Hand
Hi Guys
Joined: Nov 17,
2010 How to calculate the N to the power of N ,
Posts: 58
public class ForCycle_04 {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
int num = sc.nextInt();
int power = sc.nextInt();
int result = 1;
for (int i = 0; i < power; i++) {
result = result * num;
if N == 64 return 0 , please give me a solution.
Thanks in advance!
Sheriff Mark Guo wrote:Hi Guys
Joined: Sep 28, How to calculate the N to the power of N ,
Posts: 18117 public class ForCycle_04 {
public static void main(String[] args) {
39 Scanner sc = new Scanner(System.in);
int num = sc.nextInt();
I like... System.out.println("end");
int power = sc.nextInt();
int result = 1;
for (int i = 0; i < power; i++) {
result = result * num;
if N == 64 return 0 , please give me a solution.
Thanks in advance!
The source code that you provided doesn't have an N, so don't know what you mean by "if N == 64 return 0". Second, after clarifying that, please explain what issues you are having
with the code that you provided.
Books: Java Threads, 3rd Edition, Jini in a Nutshell, and Java Gems (contributor)
Ranch Hand
Joined: Nov 17,
2010 run the code, you can input num = 64 and power = 64 and get the result. The N means num or power.
Posts: 58
Mark Guo wrote:run the code, you can input num = 64 and power = 64 and get the result. The N means num or power.
Joined: Sep 28,
Posts: 18117
39 Hint: What is the legal range of an integer ? And what happens when you overflow it ?
I like...
Ranch Hand
Joined: Nov 17,
2010 Can you give me a solution to handle all number type?
Posts: 58
Mark Guo wrote:Can you give me a solution to handle all number type?
Joined: Sep 28,
Posts: 18117
A long has a larger range than an integer. A java.math.BigInteger has a range that is even larger than that. For 64 to the 64th power, the java.math.BigInteger class should be able
39 to handle it.
I like...
Joined: Sep 28,
2004 BTW, I did a quick and dirty port of the program to use the BigInteger class. And the result of 64 to the 64th power was ....
Posts: 18117
I like...
Joined: Oct 14, At first I thought this might relate to the exam, but now I think it's more general...
Posts: 8764
Spot false dilemmas now, ask me how!
(If you're not on the edge, you're taking up too much room.)
Ranch Hand
HI Bert Bates
Joined: Nov 17,
2010 I need your general solution, thanks!
Posts: 58
Ranch Hand
Mark Guo wrote:HI Bert Bates
Joined: Mar 20,
2007 I need your general solution, thanks!
Posts: 317
Henry Wong wrote:BTW, I did a quick and dirty port of the program to use the
class. And the result of 64 to the 64th power was ....
Henry has given hints for you to come up with a general solution. Is there something that you are stuck with?
Ranch Hand
Joined: Nov 17,
2010 Thanks guys, I like BigInteger
Posts: 58
Joined: Oct 13,
Posts: 36501 You realise you can change that linear complexity algorithm to run in logarithmic complexity?
If the index divides exactly by two, you halve the index, and square the result. Remember to get x squared, you write x * x.
subject: how to calculate the N to the power of N
|
{"url":"http://www.coderanch.com/t/546944/java/java/calculate-power","timestamp":"2014-04-19T15:30:48Z","content_type":null,"content_length":"44511","record_id":"<urn:uuid:687191e4-84f2-4027-9bfa-452cbda6cf5a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How Khan Academy is using Machine Learning to Assess Student Mastery
See discussion on Hacker News and Reddit.
The Khan Academy is well known for its extensive library of over 2600 video lessons. It should also be known for its rapidly-growing set of now 225 exercises — outnumbering stitches on a baseball —
with close to 2 million problems done each day.
To determine when a student has finished a certain exercise, we award proficiency to a user who has answered at least 10 problems in a row correctly — known as a streak. Proficiency manifests itself
as a gold star, a green patch on teachers’ dashboards, a requirement for some badges (eg. gain 3 proficiencies), and a bounty of “energy” points. Basically, it means we think you’ve mastered the
concept and can move on in your quest to know everything.
It turns out that the streak model has serious flaws.
First, if we define proficiency as your chance of getting the next problem correct being above a certain threshold, then the streak becomes a poor binary classifier. Experiments conducted on our data
showed a significant difference between students who take, say, 30 problems to get a streak vs. 10 problems right off the bat — the former group was much more likely to miss the next problem after a
break than the latter.
False positives is not our only problem, but also false negatives. One of our largest source of complaints is from frustrated students who lost their streak. You get 9 correct, make a silly typo, and
lose all your hard-earned progress. In other words, the streak thinks that users who have gotten 9 right and 1 wrong are at the same level as those who haven’t started.
In Search of a Better Model
These findings, presented by one of our full-time volunteers Jace, led us to investigate whether we could construct a better proficiency model. We prototyped a constant acceleration “rocketship”
model (with heavy gnomes that slow you down on wrong answers), but ultimately decided that a prudent first step would be to just abstract away the streak model with the notion of “fill up the bar”.
We went from displaying the user’s current streak (bug not intended; could not find another screenshot):
to this:
and when full:
This gave us greater freedom to experiment with different underlying models without disrupting the interface.
Conversations with the team led me to conceive of applying machine learning to predict the likelihood of getting the next problem correct, and use that as the basis for a new proficiency model.
Basically, if we think you’re more than % likely to get the next problem correct, for some threshold , we’ll say you’re proficient.
I started off by hacking together a naive Bayes binary classifier modified to give a probability estimate. I trained this on a few days’ worth of problem logs, and initial results were promising —
the most striking being that fewer problem were needed to attain the same level of accuracy.
What do I mean by accuracy? We define it as
P(\text{next problem correct} | \text{just gained proficiency})
which is just notation desperately trying to say ”Given that we just gained proficiency, what’s the probability of getting the next problem correct?”
However, naive Bayes is typically used for classification — the task of determining which discrete category a data point belongs to — rather than for regression — returning a continuous value (in our
case, a probability estimate in ).
So, our full-time volunteer Jace, who is much more versed in statistics and machine learning, used R to quickly prototype and evaluate different machine learning algorithms and feature sets. R is the
de-facto programming language for statistical computing and comes pre-packaged with data analysis and machine learning tools.
To evaluate the different algorithms, input features, and thresholds, we came up with some metrics to gauge for desirable characteristics:
• Mean problems done to reach proficiency — ideally we like to minimize this so that students can spend less time rote grinding on problems they know well, and move on to other concepts.
• — Unfortunately, this is hard to correctly measure in our offline data set due to the streak-of-10 bias: students may loosen up after they gain proficiency and spend less time on subsequent
• Proficiency Rate — The percent of proficiencies attained per user-exercise pair. Again, this is hard to measure because of the streak bias.
• Confusion matrix for predicted next problem correct — This is for comparing binary classifiers on their accuracy in predicting the outcome of any answer in a user’s response history. We build up
the confusion matrix, and from that extract two valuable measures of the performance of a binary classifier.
We tested various models, including naive Bayes, support vector machines, a simple 10-out-of-last-11-correct model, and logistic regression. Based on the metrics above, we settled on…
Using Logistic Regression as a Proficiency Model
(Feel free to skip this section if you’re not technically inclined.)
Logistic regression is usually used as a classifier that gives a reasonable probability estimate of each category — exactly our requirement. It’s so simple, let’s derive it.
Let’s say we have the values of input features (eg. percent correct), and we stuff them into a vector . Let’s say we also happen to know how much each feature makes it more likely that the user is
proficient, and stuff those weights into a vector . We can then take the weighted sum of the input features, plus a pre-determined constant to correct for any constant bias, and call that :
z = w_0 + \sum_{i=1}^n w_ix_i
Now if we set , we can write that compactly as a linear algebra dot product:
z = \mathbf{w}^T\mathbf{x}
Already, you can see that the higher is, the more likely the user is to be proficient. To obtain our probability estimate, all we have to do is “shrink” into the interval . We want negative values of
to map into and positive values to fall in . We can do this by plugging into a sigmoid function — in particular, we’ll use the logistic function:
h(z) = \frac{1}{1+e^{-z}}
And that’s it! is the probability estimate that logistic regression spits out.
The tricky bit is in determining the values of the weight vector — that is, training logistic regression so that , aka. the hypothesis function in machine learning terminology, gives us a good
probability estimate. For brevity I’ll spare you the details, but suffice to know that there are plenty of existing libraries to do that.
So that raises the question, which features did we use?
• ewma_3 and ewma_10 — Exponentially-weighted moving average. This is just math-talk for an average where we give greater weight to more recent values. It’s handy because it can be implemented
recursively as , where is the weighting factor, is the most recent value, and is the previous exponential moving average. We set to 0.333 and 0.1 for ewma_3 and ewma_10 respectively.
• current_streak — This turned out to be a rather weak signal and we’ll be discarding it in favour of other features in the future.
• log_num_done — . We don’t try to predict until at least one problem has been done.
• log_num_missed —
• percent_correct —
As for the proficiency threshold, we chose 94% based on our metrics.
Now for some Python code. To compute the exponentially-weighted moving average:
1 def exp_moving_avg(self, weight):
2 ewma = EWMA_SEED
4 for i in reversed(xrange(self.total_done)):
5 ewma = weight * self.get_answer_at(i) + (1 - weight) * ewma
7 return ewma
and for the actual logistic regression hypothesis function:
1 class AccuracyModel(object):
3 # ... snip ...
5 def predict(self):
6 """
7 Returns: the probabilty of the next problem correct using logistic
8 regression.
9 """
11 # We don't try to predict the first problem (no user-exercise history)
12 if self.total_done == 0:
13 return PROBABILITY_FIRST_PROBLEM_CORRECT
15 # Get values for the feature vector X
16 ewma_3 = self.exp_moving_avg(0.333)
17 ewma_10 = self.exp_moving_avg(0.1)
18 current_streak = self.streak()
19 log_num_done = math.log(self.total_done)
20 log_num_missed = math.log(self.total_done - self.total_correct() + 1)
21 percent_correct = float(self.total_correct()) / self.total_done
23 weighted_features = [
24 (ewma_3, params.EWMA_3),
25 (ewma_10, params.EWMA_10),
26 (current_streak, params.CURRENT_STREAK),
27 (log_num_done, params.LOG_NUM_DONE),
28 (log_num_missed, params.LOG_NUM_MISSED),
29 (percent_correct, params.PERCENT_CORRECT),
30 ]
32 X, weight_vector = zip(*weighted_features) # unzip the list of pairs
34 return AccuracyModel.logistic_regression_predict(
35 params.INTERCEPT, weight_vector, X)
37 # See http://en.wikipedia.org/wiki/Logistic_regression
38 @staticmethod
39 def logistic_regression_predict(intercept, weight_vector, X):
40 # TODO(david): Use numpy's dot product fn when we support numpy
41 dot_product = sum(itertools.imap(operator.mul, weight_vector, X))
42 z = dot_product + intercept
44 return 1.0 / (1.0 + math.exp(-z))
There’s another interesting problem here — how do you display that probability value on the progress bar? We try to linearize the display and distribute it evenly across the bar. Since it’s 4 am
right now, I’ll just give you the code for it (it’s well-commented) and won’t make any helpful explanatory graphs (unless people request it ;)).
1 class InvFnExponentialNormalizer(object):
2 """
3 This is basically a function that takes an accuracy prediction (probability
4 of next problem correct) and attempts to "evenly" distribute it in [0, 1]
5 such that progress bar appears to fill up linearly.
7 The current algorithm is as follows:
8 Let
9 f(n) = probabilty of next problem correct after doing n problems,
10 all of which are correct.
11 Let
12 g(x) = f^(-1)(x)
13 that is, the inverse function of f. Since f is discrete but we want g to be
14 continuous, unknown values in the domain of g will be approximated by using
15 an exponential curve to fit the known values of g. Intuitively, g(x) is a
16 function that takes your accuracy and returns how many problems correct in
17 a row it would've taken to get to that, as a real number. Thus, our
18 progress display function is just
19 h(x) = g(x) / g(consts.PROFICIENCY_ACCURACY_THRESHOLD)
20 clamped between [0, 1].
22 The rationale behind this is that if you don't get any problems wrong, your
23 progress bar will increment by about the same amount each time and be full
24 right when you're proficient (i.e. reach the required accuracy threshold).
25 """
27 def __init__(self, accuracy_model, proficiency_threshold):
28 X, Y = [], []
30 for i in itertools.count(1):
31 accuracy_model.update(correct=True)
32 probability = accuracy_model.predict()
34 X.append(probability)
35 Y.append(i)
37 if probability >= proficiency_threshold:
38 break
40 self.A, self.B = exponential_fit(X, Y)
41 # normalize the function output so that it outputs 1.0 at the
42 # proficency threshold
43 self.A /= self.exponential_estimate(proficiency_threshold)
45 def exponential_estimate(self, x):
46 return self.A * math.exp(self.B * x)
48 def normalize(self, p_val):
49 # TODO(david): Use numpy clip
50 def clamp(value, minval, maxval):
51 return sorted((minval, value, maxval))[1]
53 return clamp(self.exponential_estimate(p_val), 0.0, 1.0)
Now, until Google App Engine supports NumPy, the implementation for exponential_fit is just the derivative of the least-squares cost.
The full, uncut, unaltered code is available at our Kiln repo.
Showdown: Logistic Regression
The metrics may tell us that logistic regression wins, but being the illogical, squishy human beings that we are, we yearned for an intuitive understanding of the unique behaviour of the different
models. I developed an internal tool to simulate exercise responses and visualise the prediction of different models. Here’s a highlight reel of the salient characteristics.
As expected, order matters. Both models will weigh problems done more recently higher than earlier problems. What may be surprising is the relative importance:
Logistic regression seems to care much less than streak.
Both models monotonically increase confidence the more responses of the same type they receive:
Logistic regression also recognises consistently spotty performance:
Logistic regression takes into account prior performance. So, getting lots correct is always a good thing, and you’ll be able to recover faster from a wrong answer if you were previously doing well.
Contrast with the streak model, which loses all memory after a single incorrect answer.
This could also work against you. If you’ve gotten lots of wrong answers, you’ll need to do more work to convince logistic regression that you’re actually competent. This mitigates one of the issues
we had with the streak, where we found that there was a significant difference in actual proficiency for those getting a streak immediately vs. after 30 problems.
Could this be overly harsh for struggling students? That’s a question we are actively investigating, and as a stopgap measure we only keep the last 20 problems as history. This compromise has an
insignificant effect on logistic regression’s predictive accuracy, but it lets us sleep knowing that a student will not be damned for life if they were doing some unusual exploration and got 10
problems in a row wrong.
Due to 4 am, I don’t have an interactive demo on this page, but it won’t be hard to add it. If you would like to play with this please say so.
This was a fairly large change that we, understandably, only wanted to deploy to a small subset of users. This was facilitated by Bengineer Kamen’s GAE/Bingo split-testing framework for App Engine.
Crucially, it allowed us to measure conversions as a way of gathering more accurate statistics on actual usage data.
The experiment has been running for 6 days thus far with 10% of users using the new logistic regression proficiency model. Before I reveal anything else, see a screenshot of GAE/Bingo in action (from
a few hours ago):
The graph above shows the results over time, so you can see when trends have stabilised.
Now what you’ve been waiting for, our current statistics (5 am PST, Nov. 2) show that, for the new model, we have:
• 20.8% more proficiencies earned:
• 15.7% more new exercises attempted:
• 4.4 less problems done (26% less) per proficiency:
• Essentially the same accuracy at proficiency:
• Higher accuracy attained among a set of 3 pre-chosen easy problems. Jace came up with this statistic to gauge any actual differences in learning. The basic idea is: If accuracy as determined by
logistic regression is a good approximation of competence, then higher attained accuracies would be indicative of greater competence. Note the precipitous drop at 94% for the accuracy model —
this is due to the proficiency threshold set at 94% for logistic regression, so once a user reaches that level, we tell them to move on. (A streak of 10 with no wrong answers nets an accuracy of
• Slightly higher accuracy attained for a set of 10 pre-chosen hard problems. Going above and beyond the call of duty seems much less popular here, among accuracy model participants.
• P(do another problem | just answered incorrectly) not affected
• 11.7% more proficiencies earned for the hard problems
• 14.8% more proficiencies earned for the easy problems
In high level terms, we increased overall interest — more new exercises attempted, fewer problems done per proficiency — without lowering the bar for proficiency — P(next problem correct | just
gained proficiency) was roughly the same for both groups. Further, it seemed that overall learning, as measured by the distribution of accuracies obtained, went up slightly under the new model.
Optimistically, we hypothesise that our gains are from moving students quicker off exercises they’re good at, while making them spend more time on concepts in which they need more practice. To
confirm or deny this…
In the Pipeline
…we will look into truly where the new proficiencies are coming from. We are also interested in seeing if there is any variation in knowledge retention — in particular, we want to know if P(next
problem correct | took a few days break) is affected.
This is just the end of the beginning for us. We wish to investigate and possibly implement:
• Stochastic gradient descent for online learning of logistic regression
• …which would allow adaptive models per user and per exercise. Should we bump up the proficiency threshold for users who find the exercises too easy?
• On a similar note, could we define a fitness function that takes into account both accuracy and student frustration, and find the optimal time to tell the student to move on? Could this allow us
to maximize student learning by maximizing accuracy across many exercises?
• Model improvements. Here are some things we still need to try:
□ Incorporate more features, such as time spent per problem, time since last problem done, and user performance on similar exercises.
□ Experiment with non-linear feature transformations and combinations. Eg.
□ Along with the above, apply regularization to prevent overfitting (thanks Andrew Ng and ml-class!)
□ Train and use separate models for the first 5 problems vs. those after that.
This work to create an accurate predictor has many other applications than just to power the proficiency meter:
• Determine if the user is struggling, and if so suggest a video to watch, using some hints, or taking a break.
• Determine the optimal date to schedule a review for spaced repetition learning.
• Tailor a custom mixed-question review session that addresses weak areas.
Stay tuned for a continuation blog post if we find more interesting results!
Obligatory Recruiting Plug
Think you can do better? Well, I agree! I’m sure you know lots of ways to improve what we did. Good news: we’re open-source and hiring!
We welcome contributors to our exercises and exercise framework on GitHub. Some of our best exercises were created by volunteers: check out this awesome derivative intuition exercise created by
Bengineer Eater.
Another reason I love working at the Khan Academy is the passionate and talented team. Our lead developer, Bengineer Kamens, is committed to our productivity and well-being. He Bengineers internal
refactorings, tools, and spends much of his time getting new developers up to speed. Without his Bengineering, it would not have been possible to collect all this interesting data. Also, if you ever
have a question about jQuery, you could just ask John Resig here.
Do you want to make 0.1% improvements in ad click-thru rates for the rest of your life, or come with us and change the world of education?
Also, if you were wondering, we are not based in the UK, Canada, or Australia… my Canadian heritage compels me to spell “-our” and “-ise” when it’s not code. :P
Update (Nov 3, 2 am PST)
Thank you all for your suggestions and feedback!
There’s some interesting discussion on Hacker News and Reddit.
Update (Nov 12)
Having ran the experiment for more than two weeks now, we analysed 10 days’ data to see if longer-term knowledge retention was affected. It turns out that students are slightly more likely to answer
correctly after taking a break under the new model:
These results are encouraging. It shows that the new model attempts to address one of the core problems with the streak — the variability of student success rates after taking a break — while at the
same time increasing proficiency rates. Thus, we have reason to conclude that the accuracy model is just a better model of student proficiency.
This information gave us the confidence to roll out from 10% to 100% of users. We have now officially launched the logistic regression proficiency model site-wide!
|
{"url":"http://david-hu.com/2011/11/02/how-khan-academy-is-using-machine-learning-to-assess-student-mastery.html","timestamp":"2014-04-17T21:23:03Z","content_type":null,"content_length":"56595","record_id":"<urn:uuid:6ffa1721-4bd3-4474-a583-1d83c4b808db>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Methuen Precalculus Tutor
Find a Methuen Precalculus Tutor
...This is a new start, for both of us! There is hard work involved, but now you know, you will be in a position to understand, your subject matter better, if this is Mathematics or Science or
Engineering (college students) and get results! The practice sessions, I shall plan for you, will be straight to the point and will yield best possible results, gradually and steadily.
6 Subjects: including precalculus, physics, algebra 1, prealgebra
...All children in this program were in K-6 and would receive instruction and assistance with their homework. I would aid children with math, social studies, English and any other area of study
for they needed assistance. Phonics requires breaking words down so that they may be recognized and pronounced correctly.
45 Subjects: including precalculus, chemistry, English, writing
...If it sounds to you like I can be helpful I hope you will consider giving me a chance. Thanks.My fascination with math really began when I started studying calculus. For more then a decade I
have been using calculus to solve a wide variety of complex problems.
14 Subjects: including precalculus, calculus, geometry, algebra 1
...I was a co-captain of the math team, and I did baseball and track. I took as many math and science classes as possible, including AP statistics and calculus. I got As in both classes and 4/5 on
both the national tests.
29 Subjects: including precalculus, English, finance, economics
I am an Engineer with a Master's degree in Electrical Engineering from an Ivy league school - UPenn, PA. I did my Bachelor's in EE and Applied Mathematics from Stony Brook University, NY. During
my college years I tutored Statistics, Algebra, Chemistry and Electrical Engineering Circuits courses and received university credits and/or pay for doing so.
22 Subjects: including precalculus, calculus, physics, geometry
|
{"url":"http://www.purplemath.com/methuen_ma_precalculus_tutors.php","timestamp":"2014-04-20T10:51:54Z","content_type":null,"content_length":"24058","record_id":"<urn:uuid:08d2354a-92e8-4399-8b57-5219191b3a2e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
|
matrix-chain multiplication
Newbie Poster
1 post since Oct 2006
Reputation Points: 0 [?]
Q&As Helped to Solve: 0 [?]
Skill Endorsements: 0 [?]
hi every one...........
i'm new in this forum and need ur help please....
i want to write a programme using c++ code,
programme that find the optimal parenthesization of a matrix-chain product, then perform the optimal matrix -chain multiply 4 a sequence of matrices, but the user should enter the number of the
matrices and its dimensions.........
plz need ur help guys
thnx .........
Posting Sage
7,177 posts since Dec 2005
Reputation Points: 5,138 [?]
Q&As Helped to Solve: 970 [?]
Skill Endorsements: 41 [?]
•Team Colleague
Posting Sage
7,036 posts since Aug 2005
Reputation Points: 1,307 [?]
Q&As Helped to Solve: 592 [?]
Skill Endorsements: 74 [?]
• Featured
>programme that find the optimal parenthesization of a matrix-chain product
Try reading this:-
>then perform the optimal matrix -chain multiply 4 a sequence of matrices
Multiplication would go something like this:-
public static void mmult (int rows, int cols,
int[][] m1, int[][] m2, int[][] m3) {
for (int i=0; i<rows; i++) {
for (int j=0; j<cols; j++) {
int val = 0;
for (int k=0; k<cols; k++) {
val += m1[i][k] * m2[k][j];
m3[i][j] = val;
>but the user should enter the number of the matrices and its dimensions.........
I assume you know what cin is and how to initialise a 2D array?
Newbie Poster
1 post since Nov 2009
Reputation Points: 0 [?]
Q&As Helped to Solve: 0 [?]
Skill Endorsements: 0 [?]
Given an array of 10 elements, sort the data from index 2 to 6 using insertion sort algorithm.
Write the c/c++ code for Matrix Chain Multiplication Algorithm for at least 5 matrixes.
please solve this in c code.
Newbie Poster
1 post since Feb 2010
Reputation Points: 0 [?]
Q&As Helped to Solve: 0 [?]
Skill Endorsements: 0 [?]
could anyone be kind enough to provide me with a c++ program on matrix chain multiplication?
|
{"url":"http://www.daniweb.com/software-development/cpp/threads/57453/matrix-chain-multiplication","timestamp":"2014-04-17T12:41:19Z","content_type":null,"content_length":"40886","record_id":"<urn:uuid:7520cf1b-a51d-4466-ad92-ba252a27636e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wave Structure Matter
Wave Structure Matter
The Wave Structure Matter (WSM) model is an emerging theory in the category of metaphysics. It was proposed 150 years ago by mathematician William Kingdon Clifford and later by Erwin Schrodinger.
They proposed the view that what we observe as large-scale matter (discrete and particle-like), is only the 'appearance' (schaumkommen) of wave structures in space. Since 1985, Milo Wolff and others
have found a mathematical basis of Wave Structures. The theory is based upon a wave medium in all space of the universe that has the dimension of energy/volume. Though the theory is gaining
popularity, it still remains largely in the domain of Fringe science possibly because the popularity is more among non-scientists than among scientists.(In what follows, the claims of the theory are
given in an assertive form, without the words 'According to WSM' preceding them. These are not to be confused with currently established scientific claims.)
Two principles describe the properties of the space medium, and are the origin of all matter and the natural laws: matter determines the properties of space, and reciprocally, space determines the
properties of matter. This mechanism occurs because the Wave Structure of Matter agrees with Einstein's theory of General Relativity, there are no discrete separate particles, matter is a structure
of space, and thus affects the properties of space. Einstein writes:
'Physical objects are not in space, but these objects are spatially extended (as fields). In this way the concept 'empty space' becomes replaced by the notion of undifferentiated energy - or
'no-thing'.The field thus becomes an irreducible element of physical description, irreducible in the same sense as the concept of matter (particles) in the theory of Newton. ' The physical reality of
space is represented by a field whose components are continuous functions of four independent variables, the co-ordinates of space and time. Since the theory of general relativity implies the
representation of physical reality by a continuous field, the concept of particles or material points cannot play a fundamental part, nor can the concept of motion. The particle can only appear as a
limited region in space in which the field strength or the energy density are particularly high. (Albert Einstein, Relativity, 1950)
The particle of the WSM is a spherical standing wave with an intensity, decreasing with distance from the wave center. This wave has the character of a quantum wave. The particle is regarded as the
entire wave structure but its location is the quantum wave-center when observed. This fundamental structure, with presence throughout the universe, is an electron or a positron (which have opposite
phase waves).
The two important basic principles of the WSM are:
1. Space exists as a wave medium for the propagation of quantum waves described by a scalar wave equation.
2. The total mass-energy density of all waves from all matter (in our finite spherical universe) creates the mass-energy density and properties of the wave medium (space).
• This medium forms a scalar field that are the two only possible 3D solutions of a wave equation. This is contrary to the vector field used by most physicists. The only two solutions are spherical
inward and outward waves.
• The basic electron or positron is a pair of inward and outward spherical waves that form a spherical standing wave in space. The positron has opposite phase waves to the electron, thus explaining
matter/anti-matter annihilation due to destructive interference of the waves.
• The electromagnetic vector fields of practical engineering are large scale appearances of many discrete energy exchanges of the scalar quantum waves, i.e. standing waves only exist and interact
at discrete frequencies, which is the foundation for the Schrodinger equation of Quantum Mechanics (QM).
• The inward and outward waves are solutions of the wave equation:
${ \partial^2 u(r,t) \over \partial t^2 } - c^2 abla^2u(r,t) = f(r,t)$
Instead of interpreting a wave function as a probability distribution of discrete particles as in QM, the 'particle' is represented by the entire wave function and we locate it experimentally at the
wave-center where energy-exchange takes place. Thus discrete 'particles', or charge substances, or mass substances do not exist.
• Total wave amplitudes always follow a minimum amplitude principle (MAP) at each point in space. Thus opposite charges attract, and like charges repel. Other examples are when water seeks a common
level or when heat moves from hot to cold regions.
• Atoms and molecules are compositions and interactions of these standing waves, thus matter only exists at discrete frequencies and energy states.
• The 1) fundamental forces, and 2) the QM and relativity laws, are properties of WSM, i.e. 1) the motion of the wave-centers that obey the MAP and 2) Doppler rules that lead to the well-known QM
and relativity mathematics.
(Summarized by Nicholas Cooper)
The idea, conceived over 150 years ago by William Kingdon Clifford who has become the father of the algebra of geometry ,was stated by him:
'I hold that:
1. Small portions of space are in fact analogous to little hills on a surface which is on the average flat, namely that the ordinary laws of geometry are not valid in them.
2. This property of being curved or distorted is continually being passed on from one portion of space to another after the manner of a wave.
3. This variation of the curvature of space is what really happens in that phenomenon which we call the motion of matter, whether ponderable or ethereal.
4. In this physical world, nothing else takes place but this variation subject to the law of continuity.'
(This statement also became the basis of the curvature of space that was mathematically constructed by Albert Einstein.)
In the classic view of quantum mechanics, electrons are regarded as discrete particles that can only be located within a statistically determined volume. This view uses one of two possible
interpretations of the Schrodinger Equation: that (1) discrete particles do exist and waves are only 'probabilities'. Schrodinger himself took the opposite view (2) that only waves exist, whereas
particles do not. His view has led to the Wave Structure of Matter.
'What we observe as material bodies and forces are nothing but shapes and variations in the structure of space. Particles are just schaumkommen (appearances). ... The world is given to me only once,
not one existing and one perceived. Subject and object are only one. The barrier between them cannot be said to have broken down as a result of recent experience in the physical sciences, for this
barrier does not exist. ... Let me say at the outset, that in this discourse, I am opposing not a few special statements of quantum mechanics held today (1950s), I am opposing as it were the whole of
it, I am opposing its basic views that have been shaped 25 years ago, when Max Born put forward his probability interpretation, which was accepted by almost everybody. ... I don't like it, and I'm
sorry I ever had anything to do with it.' (Erwin Schrodinger, Life and Thought, Cambridge U. Press, 1989).
In 1945, John Archibald Wheeler and Richard Feynman attempted to find the energy transfer mechanism of the electron - typically light. They found a 'response of the Universe' to electron outward
charge waves that simulated spherical in-ward waves. However, they worked with spherical vector electromagnetic waves and retained the concept of the 'particle' which generated these waves that acted
on other 'particles'. Their use of continuous vector electromagnetic waves causes problems of an infinite field strength (as radius of field tends to zero) and renormalisation. As Paul Dirac and
Richard Feynman wrote;
'I must say that I am very dissatisfied with the situation, because this so called good theory does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way.
This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small - not neglecting it just because it is infinitely great and you do not want
it!' (Paul Dirac)
'But no matter how clever the word, it is what I call a dippy process! Having to resort to such hocus pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically
self consistent. ... I suspect that renormalisation is not mathematically legitimate.' (Richard Feynman, The Strange Theory of Light and Matter, 1985)
Albert Einstein was also aware of this problem as he explains in his critique of Lorentz's electromagnetic field theory for electrons (as it is still the same fundamental problem of the particle /
electromagnetic field duality).
'The inadequacy of this point of view manifested itself in the necessity of assuming finite dimensions for the particles in order to prevent the electromagnetic field existing at their surfaces from
becoming infinitely large.' (Albert Einstein, 1936)
During this same time, Albert Einstein was working on a generalized theory of gravitation to unite the fundamental forces of nature. In his theory, he represented the electron as continuous spherical
fields in space-time. His death in 1955 left his attempts at a unified field theory of matter unsolved. As Einstein writes;
'All these fifty years of conscious brooding have brought me no nearer to the answer to the question, 'What are light quanta?' Nowadays every Tom, Dick and Harry thinks he knows it, but he is
mistaken. ... I consider it quite possible that physics cannot be based on the field concept, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, gravitation
theory included, [and of] the rest of modern physics.' (Albert Einstein, Letter to Michael Besso, 1954)
'Einstein thinks he has a continuous field theory that avoids 'spooky action at a distance', but the calculation difficulties are very great. He is quite convinced that some day a theory that does
not depend on probabilities will be found.' (Max Born, p158 Mar 1947)
The Wave Structure of Matter (WSM) simplifies Einstein's foundations, from his continuous spherical fields in space-time, to spherical waves in continuous space. By rejecting both continuous fields
and particles, and instead working with standing wave structures, the WSM explains the discrete energy states of light and matter found in Quantum Theory (which Einstein's Relativity could never
explain) without introducing the disturbing particle / wave duality of light and matter. As Einstein writes;
'The great stumbling block for the field theory lies in the conception of the atomic structure of matter and energy. For the theory is fundamentally non-atomic in so far as it operates exclusively
with continuous functions of space, in contrast to classical mechanics whose most important element, the material point, in itself does justice to the atomic structure of matter.' (Albert Einstein,
'The special and general theories of relativity, which, though based entirely on ideas connected with the field-theory, have so far been unable to avoid the independent introduction of material
points, ' the continuous field thus appeared side by side with the material point as the representative of physical reality. This dualism remains even today disturbing as it must be to every orderly
mind.' (Albert Einstein, 1954)
'The Maxwell equations in their original form do not, however, allow such a description of particles, because their corresponding solutions contain a singularity. Theoretical physicists have tried
for a long time (1936), therefore, to reach the goal by a modification of Maxwell's equations. These attempts have, however, not been crowned with success. What appears certain to me, however, is
that, in the foundations of any consistent field theory the particle concept must not appear in addition to the field concept. The whole theory must by based solely on partial differential equations
and their singularity-free solutions.' (Albert Einstein, 1954)
See also
• History of the Wave Structure of Matter
1.William Clifford, 1885, The Common Sense of the Exact Sciences, Ed. Karl Pearson, preface by Bertrand Russell, Dover, NY (1955).
2. E. Schrodinger. In Schrodinger - Life and Thought, Cambridge U. Press, p327 (1989).
4. E. Mach, (1883 German). English: The Science of Mechanics, Open Court (1960).
5. M. Wolff, 'Exploring the Physics of the Unknown Universe', Technotran Press, (1990).
6. M. Wolff, 'Gravitation and Cosmology' in From the Hubble Radius to the Planck Scale, R. L. Amoroso et al (Eds.), pp 517-524, Kluwer Acad. Publ. (2002).
7. W. Clifford, (1876) 'On the Space Theory of Matter' in The World of Mathematics, p568, Simon and Schuster, NY (1956).
8. J. A. Wheeler, and R. Feynman, Rev. Mod. Phys. 17, 157 (1945).
9. H. Tetrode, Zeits. F. Physik 10, 312 ((1922).
10. A. Einstein, Relativity, Crown Books (1950).
11. M. Wolff, Physics Essays 6, No 2, 181-203 (1993).
12. G. Haselhurst, (to be published in) What is the Electron, Apeiron Press (2005). Also: http:www.SpaceandMotion.com
13. C. Mead, 'Collective Electrodynamics', MIT Press (2000).
14. E. Batty-Pratt and T. Racey, Int. J. Theor. Phys. 19, 437 (1980).
External links and resources
• On Truth and Reality: Philosophy of Physics and Metaphysics in the Wave Structure Matter model
• Milo Wolff's Quantum Science Corner
• Blaze Labs Research: The Particle - The wrong turn that led physics to a dead end
• Physics Philosophy Metaphysics of Space / Wave Structure of Matter Forum
• Matter is made of waves
|
{"url":"http://www.spaceandmotion.com/wikipedia/wave-structure-matter.htm","timestamp":"2014-04-18T08:25:03Z","content_type":null,"content_length":"16171","record_id":"<urn:uuid:4b7bea76-7b5d-49df-a856-15f033a18bba>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This collection provides one file:
_tail.ss_: special form for tail-context continuations
This library provides a syntax for capturing a continuation and binding it to a
variable that, when applied, evaluates its subexpression in tail position with
respect to its parent expression.
> (let-tail-continuation k body1 body2 ...)
> (let/tc k body1 body2 ...)
A syntax for capturing the current continuation and binding it to the variable
k. The variable is in scope for the evaluation of the body expressions. The
difference between _let/tc_ and _let/cc_ is that the argument to an
application of the continuation variable bound by _let/tc_ occurs in
tail position.
Continuations bound by _let/tc_ may only receive exactly one value.
> (push-begin e1 e2 ...)
A syntax for evaluating a sequence of expressions, like Scheme's
primitive BEGIN, but without preserving tail context. In particular,
the last expression of _push-begin_ is NOT in tail position with
respect to the containing expression.
The value of a _push-begin_ expression, like BEGIN, is the value of
the last expression in the sequence.
The expressions of a _push-begin_ may evaluate to multiple values (or
no values).
EXAMPLES -------------------------------------------------------------
(define (current-continuation-mark-list key-v)
(continuation-mark-set->list (current-continuation-marks) key-v))
(define (countdown n)
(with-continuation-mark 'countdown n
(let/ec return
(if (zero? n)
(return (current-continuation-mark-list 'countdown))
(return (countdown (sub1 n)))))))
> (countdown 10)
(0 1 2 3 4 5 6 7 8 9 10)
(define (countdown* n)
(with-continuation-mark 'countdown n
(let/tc return
(if (zero? n)
(return (current-continuation-mark-list 'countdown))
(return (countdown* (sub1 n)))))))
> (countdown 10)
|
{"url":"http://planet.racket-lang.org/package-source/dherman/tail.plt/2/1/doc.txt","timestamp":"2014-04-18T18:17:20Z","content_type":null,"content_length":"5848","record_id":"<urn:uuid:79c25412-77bf-4b78-bd81-398285a6829e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ideas for Working with Fractions
Shodor > Interactivate > Lessons > Ideas for Working with Fractions
It is necessary for fourth and fifth grade students to be able to convert between fractions, decimals, and percents. Much of this learning must be done through repetition and practice once students
begin to grasp the concept. This lesson provides a way for students to practice this concept.
Upon completion of this lesson, students will:
• practice naming and converting between fractions, decimals, and percents.
Standards Addressed:
Student Prerequisites
• Arithmetic: Student must be able to:
□ add, subtract, multiply, and divide whole numbers.
• Technological: Students must be able to:
□ perform basic mouse manipulations such as point, click and drag.
Teacher Preparation
• access to a browser
• pencil and paper
Key Terms
decimal Short for the term "decimal fraction", a decimal is another way to represent fractional numbers. The decimal uses place value to express the value of a number as opposed to a fraction
that uses a numerator and denominator.
decimal A fraction where the denominator is a power of ten and is therefore expressed using a decimal point. For example: 0.37 is the decimal equivalent of 37/100
fraction A rational number of the form a/b where a is called the numerator and b is called the denominator
percent A ratio that compares a number to one hundred. The symbol for percent is %
Lesson Outline
1. Focus and Review
Get the students to think about fractions by asking them to think of things they needed to share by splitting a single object into parts. Possibly give the example of a pizza or a piece of paper.
2. Objectives
Today, class, we are going to use the computers to help practice our skills working with fractions, decimals, and percents.
3. Teacher Input
Give students several examples of fractions, decimals and percents used in the real world. Examples such as sharing a pizza, a ruler, measures on bottles or cans for decimals, or food labels for
percents. Discuss with the students that these are all representations of the same thing. A fraction can be expressed as a decimal or as a percent and vice versa.
4. Guided Practice
Give the students time so they may try to solve the problems on the worksheet by themselves. Once the students have had enough time to solve the problems, have them check their fraction to
decimal conversion answers using the Fraction Converter applet.
5. Independent Practice
Once the students have opened the Pie Chart applet, tell them to adjust the applet settings so that the pie chart only has two sections. After all of the settings are correctly adjusted, have the
students try to replicate their answers from the worksheet using the two sections. It may be helpful to remind your students the sum of the two sections must equal 100. For example:
You would have the students enter 75% as the percentage for one portion of the circle and 25% for the other. Then the students would compare the circle they colored with the one colored by the
Pie chart applet to make sure they look similar. Once the students have completed the worksheet, they can choose partners and play the Fraction Four applet. Be sure you tell them to set the
"problem type" in the Fraction Four applet to "fractions, decimals, and percents".
6. Closure
Have students come to the board and share one of their responses to the worksheet.
©1994-2014 Shodor Website Feedback
It is necessary for fourth and fifth grade students to be able to convert between fractions, decimals, and percents. Much of this learning must be done through repetition and practice once students
begin to grasp the concept. This lesson provides a way for students to practice this concept.
Upon completion of this lesson, students will:
• practice naming and converting between fractions, decimals, and percents.
Standards Addressed:
Student Prerequisites
• Arithmetic: Student must be able to:
□ add, subtract, multiply, and divide whole numbers.
• Technological: Students must be able to:
□ perform basic mouse manipulations such as point, click and drag.
Teacher Preparation
• access to a browser
• pencil and paper
Key Terms
decimal Short for the term "decimal fraction", a decimal is another way to represent fractional numbers. The decimal uses place value to express the value of a number as opposed to a fraction
that uses a numerator and denominator.
decimal A fraction where the denominator is a power of ten and is therefore expressed using a decimal point. For example: 0.37 is the decimal equivalent of 37/100
fraction A rational number of the form a/b where a is called the numerator and b is called the denominator
percent A ratio that compares a number to one hundred. The symbol for percent is %
Lesson Outline
1. Focus and Review
Get the students to think about fractions by asking them to think of things they needed to share by splitting a single object into parts. Possibly give the example of a pizza or a piece of paper.
2. Objectives
Today, class, we are going to use the computers to help practice our skills working with fractions, decimals, and percents.
3. Teacher Input
Give students several examples of fractions, decimals and percents used in the real world. Examples such as sharing a pizza, a ruler, measures on bottles or cans for decimals, or food labels for
percents. Discuss with the students that these are all representations of the same thing. A fraction can be expressed as a decimal or as a percent and vice versa.
4. Guided Practice
Give the students time so they may try to solve the problems on the worksheet by themselves. Once the students have had enough time to solve the problems, have them check their fraction to
decimal conversion answers using the Fraction Converter applet.
5. Independent Practice
Once the students have opened the Pie Chart applet, tell them to adjust the applet settings so that the pie chart only has two sections. After all of the settings are correctly adjusted, have the
students try to replicate their answers from the worksheet using the two sections. It may be helpful to remind your students the sum of the two sections must equal 100. For example:
You would have the students enter 75% as the percentage for one portion of the circle and 25% for the other. Then the students would compare the circle they colored with the one colored by the
Pie chart applet to make sure they look similar. Once the students have completed the worksheet, they can choose partners and play the Fraction Four applet. Be sure you tell them to set the
"problem type" in the Fraction Four applet to "fractions, decimals, and percents".
6. Closure
Have students come to the board and share one of their responses to the worksheet.
It is necessary for fourth and fifth grade students to be able to convert between fractions, decimals, and percents. Much of this learning must be done through repetition and practice once students
begin to grasp the concept. This lesson provides a way for students to practice this concept.
decimal Short for the term "decimal fraction", a decimal is another way to represent fractional numbers. The decimal uses place value to express the value of a number as opposed to a fraction
that uses a numerator and denominator.
decimal A fraction where the denominator is a power of ten and is therefore expressed using a decimal point. For example: 0.37 is the decimal equivalent of 37/100
fraction A rational number of the form a/b where a is called the numerator and b is called the denominator
percent A ratio that compares a number to one hundred. The symbol for percent is %
Get the students to think about fractions by asking them to think of things they needed to share by splitting a single object into parts. Possibly give the example of a pizza or a piece of paper.
Today, class, we are going to use the computers to help practice our skills working with fractions, decimals, and percents.
Give students several examples of fractions, decimals and percents used in the real world. Examples such as sharing a pizza, a ruler, measures on bottles or cans for decimals, or food labels for
percents. Discuss with the students that these are all representations of the same thing. A fraction can be expressed as a decimal or as a percent and vice versa.
Give the students time so they may try to solve the problems on the worksheet by themselves. Once the students have had enough time to solve the problems, have them check their fraction to decimal
conversion answers using the Fraction Converter applet.
Once the students have opened the Pie Chart applet, tell them to adjust the applet settings so that the pie chart only has two sections. After all of the settings are correctly adjusted, have the
students try to replicate their answers from the worksheet using the two sections. It may be helpful to remind your students the sum of the two sections must equal 100. For example:
You would have the students enter 75% as the percentage for one portion of the circle and 25% for the other. Then the students would compare the circle they colored with the one colored by the Pie
chart applet to make sure they look similar. Once the students have completed the worksheet, they can choose partners and play the Fraction Four applet. Be sure you tell them to set the "problem
type" in the Fraction Four applet to "fractions, decimals, and percents".
Have students come to the board and share one of their responses to the worksheet.
|
{"url":"http://www.shodor.org/interactivate/lessons/WorkWithFractions/","timestamp":"2014-04-19T07:01:37Z","content_type":null,"content_length":"29166","record_id":"<urn:uuid:3dfd3bb8-fe73-4f52-9c63-ae3ca676926b>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Partial Derivatives and Gradient
Date: 12/12/98 at 02:27:21
From: alfredo
Subject: Partial and directional derivative, gradient
I don't understand the definitions of partial and directional
derivatives and gradients. Could you explain the meaning and relations
between them?
Thank you.
Date: 12/12/98 at 07:39:54
From: Doctor Jerry
Subject: Re: Partial and directional derivative, gradient
Hi Alfredo,
I'll assume that you know the meaning and definitions of partial
derivatives, which are directional derivatives in the {1,0} and {0,1}
directions, for functions of two variables. The directional derivative
D_u f(a) at a point a = {a_1, a_2} and in the u = {u_1, u_2} direction
(u_1 means u sub 1, etc) is the rate of change of f in the u-direction,
that is, the limit as h->0 of:
[f(a+h*u) - f(a)]/h
where u = {u_1, u_2} is a unit vector, and h is a number.
If u = {1,0}, then D_u f(a) is the partial derivative f_x of f with
respect to x.
Under mild hypotheses, D_u f(a) = grad(f) dot u, where
grad(f) = {f_x, f_y}. The gradient direction is the direction in which
the directional derivative is a maximum.
For more information on the gradient, see the archives:
The Gradient
- Doctor Jerry, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/52072.html","timestamp":"2014-04-19T10:44:54Z","content_type":null,"content_length":"6259","record_id":"<urn:uuid:5d08bc3a-bd8b-4787-afd9-6ca4678101f8>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
GTO-Based STATCOM
The example described in this section illustrates application of SimPowerSystems™ software to study the steady-state and dynamic performance of a static synchronous compensator (STATCOM) on a
transmission system. The STATCOM is a shunt device of the Flexible AC Transmission Systems (FACTS) family using power electronics. It regulates voltage by generating or absorbing reactive power. If
you are not familiar with the STATCOM, please refer to the Static Synchronous Compensator (Phasor Type) block documentation, which describes the STATCOM principle of operation.
Depending on the power rating of the STATCOM, different technologies are used for the power converter. High power STATCOMs (several hundreds of Mvars) normally use GTO-based, square-wave
voltage-sourced converters (VSC), while lower power STATCOMs (tens of Mvars) use IGBT-based (or IGCT-based) pulse-width modulation (PWM) VSC. The Static Synchronous Compensator (Phasor Type) block of
the FACTS library is a simplified model, which can simulate different types of STATCOMs. You can use it with phasor simulation, available through the Powergui block, for studying dynamic performance
and transient stability of power systems. Due to low frequencies of electromechanical oscillations in large power systems (typically 0.02 Hz to 2 Hz), this type of study usually requires simulation
times of 30–40 seconds or more.
The STATCOM model described in this example is rather a detailed model with full representation of power electronics. It uses a square-wave, 48-pulse VSC and interconnection transformers for harmonic
neutralization. This type of model requires discrete simulation at fixed type steps (25 µs in this case) and it is used typically for studying the STATCOM performance on a much smaller time range (a
few seconds). Typical applications include optimizing of the control system and impact of harmonics generated by converter.
Description of the STATCOM
The STATCOM described in this example is available in the power_statcom_gto48p model. Load this model and save it in your working directory as case3 to allow further modifications to the original
system. This model shown on SPS Model of the 100 Mvar STATCOM on a 500 kV Power System (power_statcom_gto48p) represents a three-bus 500 kV system with a 100 Mvar STATCOM regulating voltage at bus
The internal voltage of the equivalent system connected at bus B1 can be varied by means of a Three-Phase Programmable Voltage Source block to observe the STATCOM dynamic response to changes in
system voltage.
SPS Model of the 100 Mvar STATCOM on a 500 kV Power System (power_statcom_gto48p)
STATCOM Power Component
The STATCOM consists of a three-level 48-pulse inverter and two series-connected 3000 µF capacitors which act as a variable DC voltage source. The variable amplitude 60 Hz voltage produced by the
inverter is synthesized from the variable DC voltage which varies around 19.3 kV.
Double-click on the STATCOM 500kV 100 MVA block (see subsystem in 48-Pulse Three-Level Inverter).
48-Pulse Three-Level Inverter
The STATCOM uses this circuit to generate the inverter voltage V2 voltage mentioned in the Static Synchronous Compensator (Phasor Type) block documentation. It consists of four 3-phase 3-level
inverters coupled with four phase shifting transformers introducing phase shift of +/-7.5 degrees.
Except for the 23rd and 25th harmonics, this transformer arrangement neutralizes all odd harmonics up to the 45th harmonic. Y and D transformer secondaries cancel harmonics 5+12n (5, 17, 29, 41,...)
and 7+12n (7, 19, 31, 43,...). In addition, the 15° phase shift between the two groups of transformers (Tr1Y and Tr1D leading by 7.5°, Tr2Y and Tr2D lagging by 7.5°) allows cancellation of harmonics
11+24n (11, 35,...) and 13+24n (13, 37,...). Considering that all 3n harmonics are not transmitted by the transformers (delta and ungrounded Y), the first harmonics that are not canceled by the
transformers are therefore the 23rd, 25th, 47th and 49th harmonics. By choosing the appropriate conduction angle for the three-level inverter (σ = 172.5°), the 23rd and 25th harmonics can be
minimized. The first significant harmonics generated by the inverter will then be 47th and 49th. Using a bipolar DC voltage, the STATCOM thus generates a 48-step voltage approximating a sine wave.
The following figure reproduces the primary voltage generated by the STATCOM 48-pulse inverter as well as its harmonics contents.
Frequency Spectrum of Voltage Generated by 48-Pulse Inverter at No Load
This frequency spectrum was obtained by running the power_48pulsegtoconverter example, which uses the same converter topology. The FFT analysis was performed by using the FFT Analysis tool in the
Powergui block. FFT uses one cycle of inverter voltage during the no-load operation and a 0–6000 Hz frequency range.
STATCOM Control System
Open the STATCOM Controller (see subsystem in STATCOM Control System).
STATCOM Control System
The control system task is to increase or decrease the capacitor DC voltage, so that the generated AC voltage has the correct amplitude for the required reactive power. The control system must also
keep the AC generated voltage in phase with the system voltage at the STATCOM connection bus to generate or absorb reactive power only (except for small active power required by transformer and
inverter losses).
The control system uses the following modules:
● PLL (phase locked loop) synchronizes GTO pulses to the system voltage and provides a reference angle to the measurement system.
● Measurement System computes the positive-sequence components of the STATCOM voltage and current, using phase-to-dq transformation and a running-window averaging.
● Voltage regulation is performed by two PI regulators: from the measured voltage Vmeas and the reference voltage Vref, the Voltage Regulator block (outer loop) computes the reactive current
reference Iqref used by the Current Regulator block (inner loop). The output of the current regulator is the α angle which is the phase shift of the inverter voltage with respect to the system
voltage. This angle stays very close to zero except during short periods of time, as explained below.
A voltage droop is incorporated in the voltage regulation to obtain a V-I characteristics with a slope (0.03 pu/100 MVA in this case). Therefore, when the STATCOM operating point changes from
fully capacitive (+100 Mvar) to fully inductive (-100 Mvar) the SVC voltage varies between 1-0.03=0.97 pu and 1+0.03=1.03 pu.
● Firing Pulses Generator generates pulses for the four inverters from the PLL output (ω.t) and the current regulator output (α angle).
To explain the regulation principle, let us suppose that the system voltage Vmeas becomes lower than the reference voltage Vref. The voltage regulator will then ask for a higher reactive current
output (positive Iq= capacitive current). To generate more capacitive reactive power, the current regulator will then increase α phase lag of inverter voltage with respect to system voltage, so that
an active power will temporarily flow from AC system to capacitors, thus increasing DC voltage and consequently generating a higher AC voltage.
As explained in the preceding section, the conduction angle σ of the 3-level inverters has been fixed to 172.5°. This conduction angle minimizes 23rd and 25th harmonics of voltage generated by the
square-wave inverters. Also, to reduce noncharacteristic harmonics, the positive and negative voltages of the DC bus are forced to stay equal by the DC Balance Regulator module. This is performed by
applying a slight offset on the conduction angles σ for the positive and negative half-cycles.
The STATCOM control system also allows selection of Var control mode (see the STATCOM Controller dialog box). In such a case, the reference current Iqref is no longer generated by the voltage
regulator. It is rather determined from the Qref or Iqref references specified in the dialog box.
Steady-State and Dynamic Performance of the STATCOM
You will now observe steady-state waveforms and the STATCOM dynamic response when the system voltage is varied. Open the programmable voltage source menu and look at the sequence of voltage steps
that are programmed. Also, open the STATCOM Controller dialog box and verify that the STATCOM is in Voltage regulation mode with a reference voltage of 1.0 pu. Run the simulation and observe
waveforms on the STATCOM scope block. These waveforms are reproduced below.
Waveforms Illustrating STATCOM Dynamic Response to System Voltage Steps
Initially the programmable voltage source is set at 1.0491 pu, resulting in a 1.0 pu voltage at bus B1 when the STATCOM is out of service. As the reference voltage Vref is set to 1.0 pu, the STATCOM
is initially floating (zero current). The DC voltage is 19.3 kV. At t=0.1s, voltage is suddenly decreased by 4.5% (0.955 pu of nominal voltage). The STATCOM reacts by generating reactive power (Q=+70
Mvar) to keep voltage at 0.979 pu. The 95% settling time is approximately 47 ms. At this point the DC voltage has increased to 20.4 kV.
Then, at t=0.2 s the source voltage is increased to1.045 pu of its nominal value. The STATCOM reacts by changing its operating point from capacitive to inductive to keep voltage at 1.021 pu. At this
point the STATCOM absorbs 72 Mvar and the DC voltage has been lowered to 18.2 kV. Observe on the first trace showing the STATCOM primary voltage and current that the current is changing from
capacitive to inductive in approximately one cycle.
Finally, at t=0.3 s the source voltage in set back to its nominal value and the STATCOM operating point comes back to zero Mvar.
The figure below zooms on two cycles during steady-state operation when the STATCOM is capacitive and when it is inductive. Waveforms show primary and secondary voltage (phase A) as well as primary
current flowing into the STATCOM.
Steady-State Voltages and Current for Capacitive and Inductive Operation
Notice that when the STATCOM is operating in capacitive mode (Q=+70 Mvar), the 48-pulse secondary voltage (in pu) generated by inverters is higher than the primary voltage (in pu) and in phase with
primary voltage. Current is leading voltage by 90°; the STATCOM is therefore generating reactive power.
On the contrary, when the STATCOM is operating in inductive mode, secondary voltage is lower than primary voltage. Current is lagging voltage by 90°; the STATCOM is therefore absorbing reactive
Finally, if you look inside the Signals and Scopes subsystem you will have access to other control signals. Notice the transient changes on α angle when the DC voltage is increased or decreased to
vary reactive power. The steady-state value of α (0.5 degrees) is the phase shift required to maintain a small active power flow compensating transformer and converter losses.
|
{"url":"http://www.mathworks.com/help/physmod/sps/powersys/ug/gto-based-statcom.html?nocookie=true","timestamp":"2014-04-20T21:50:03Z","content_type":null,"content_length":"49920","record_id":"<urn:uuid:56c20125-dbfe-412e-890f-d268e3972d76>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FOM: chess challenge to experts
Kanovei kanovei at wmwap1.math.uni-wuppertal.de
Sun Nov 12 07:36:42 EST 2000
> Date: Fri, 10 Nov 2000 18:43:39 -0500
> From: Harvey Friedman <friedman at math.ohio-state.edu>
> Reply to Kanovei 6:42PM 11/10/00:
> >I began by the claim that (ontologically) there is NO
> >mathematical statement true but not provable.
> Are you claiming that "every true mathematical statement is provable"?
No, I like my version, it is not a formal statement, hence,
not-E has not necessarily the same meaning as A-not.
In particular your version would assume that I am obliged
to demonstrate WHY it is so, why in my version it is a duty
of an opponent to give a counterexample.
> you are making this claim, then you should clarify what provability means
> here. Provable by what means?
By mathematical means. Nowadays it is to be understood as
in ZFC. This reservation I do not understand (unless you hint
on new axioms to be added to ZFC). Anyway, let it be: by
methods accepted as mathematical means by experts. Today this
is ZFC tomorrow maybe something else (I doubt).
> If you are instead claiming that
> "every mathematical statement proved to be true is provable"
No, just as above. Every math. statmnt which is (ontologically)
true is (mathematically) provable. Any counterexample ?
CHESS CHALLENGE to experts.
By chess laws a chess game cannot be longer than some number
of moves which can be explicitly estimated.
It is, therefore, a PA theorem that
*either* W has a winning strategy *or* B has a non-losing strategy.
Is this really true in the physical word ?
I will not challenge this in matters of "tonns of GB of memory"
that either strategy may need to be formulated.
However, it is certain that possible strategies will definitely
need astronomical memory to count, to store intermediate data,
etc. Is there a counting method which, absolutely independently of
the size of counting material, is stable against quantum-mechanical
effects that yield mistakes even with a perfecr hardware
(whatever hardware may be in this case) ?
If chess is too complicated, the following is another version.
Let k(1)=1 and k(n+1)=10^{k(n)}.
Is there a counting algorithm which allows to check whether
a+b=b+a holds for all numbers a,b < k(1000), with stability
against QM spoiling guaranteed ?
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2000-November/004597.html","timestamp":"2014-04-20T16:38:06Z","content_type":null,"content_length":"4725","record_id":"<urn:uuid:c5b67bce-4c34-42b8-85e8-997a88596794>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Exponential Curve
I'm not planning our Algebra 1 classes this year, so I have not been producing much for it. But I did put together a scaffolded introduction to inequalities. The objectives are for students to:
• Compare numbers using a number line (i.e. "<" means "to the left of")
• Understand the difference between open and closed circles
• Graph the solutions of a statement like "x < 3"
• Understand graphically why adding/subtracting by any number or multiplying/dividing by a positive number does not change the relative position of two numbers, while multiplying/dividing by a
negative number does. In other words, students should understand when and why to "flip the inequality sign" when solving inequalities.
• Solve and graph linear inequalities
Here is the
I started this lesson with some theatrics. I asked them to simplify the fraction shown in the picture, and of course they all wanted to cancel the terms (as expected). I let them do it, and then
changed the pretty pink heart into the fiery eruption you see here. I told them that those red slashes are like daggers through a math teacher's heart. I also told them that, when they go to college,
I never ever want them to make the mistake of canceling out terms. Cancel factors, not terms! We spent a lot of time talking about the difference between factors and terms, and why this rule is true.
We talked about why you can't add 5 and 5x, but you can cancel the 5's in 5/5x. I think this was time well spent, because this canceling problem is a persistent weed. From there, we practiced
factoring and canceling. Pretty straightforward. In the following lesson, we multiplied and reduced products of polynomial fractions. There really were no new skills to learn, so after modeling one
problem, I had them do independent practice work.
And now, I am caught up on postings!
Lesson 11 (Reducing Polynomial Fractions) doc / keynote / quicktime
Lesson 12 (Multiplying Polynomial Fractions) doc
Continuing with the lessons, we learned to factor difference of squares expressions. I used a geometric approach to help make sense out of the pattern, and it has really helped some students figure
out how to more easily factor the nasty ones like 25x^2 - 16y^4. A quick sketch of the squares, labeled with their side lengths, has proven quite useful.
Lesson 9 (Difference of Squares) doc / keynote / quicktime
Lesson 10 (Review and Practice) doc / keynote / quicktime
It's been a while since I posted. The last week of February was our Junior Trip, in which we take all of our junior class on a 4-day-long trip around California to visit various CSU campuses. It's an
incredibly important part of our program, because it is the time when our juniors really start to imagine themselves as college students. The tours, the student panels, seeing the dorms and
classrooms, the admissions directors, and the DCP alumni all bring things into sharper focus for the 11th graders. We moved the trip earlier this year (it used to be in April) because kids come back
inspired and ready to make positive changes, and so we wanted them to have more time to improve their grades before the end of the semester. It's also a great time for students and staff to bond and
get to know each other in different ways. Needless to say, a 4-day, 3-night field trip with 80 high schoolers is tiring. We're all pretty much recovered now, and it's been back to business as usual.
Time to catch up on some lesson postings.
In Algebra 2, we're nearing the end of the polynomials and factoring unit. I've been focusing on basic factoring techniques (look for the GCF first, then either use trinomial factoring or difference
of squares, if possible). I'm still deciding whether to throw sum/difference of cubes into the mix this time around. I decided to bring simplifying and multiplying rational expressions into this unit
(instead of waiting for the rationals unit) because it seemed like a good way to have them get more practice with factoring without repeating the same exact problems again and again. Plus, these
questions are prominently featured on the STAR test.
One thing that has been helping students deal with factoring out the GCF is teaching them to write the prime factorization of each term in the polynomial, every time (including a -1 factor when there
is a minus sign). Though it takes longer, this is pretty much a foolproof way of factoring out the GCF - many students have a lot of difficulty with the "what's the largest expression that divides
into both" method.
Lesson 6 (Factoring the GCF and Trinomials) doc / keynote / quicktime
Lesson 7 (we used Algeblocks to get a better understanding of factoring trinomials) doc
Lesson 8 (Factoring Trinomials by Grouping) doc / keynote / quicktime
|
{"url":"http://exponentialcurve.blogspot.com/2009_03_01_archive.html","timestamp":"2014-04-20T10:47:22Z","content_type":null,"content_length":"80797","record_id":"<urn:uuid:59dd0d2c-4411-4be0-aa77-48a09b12d93c>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Filter-bank based efficient transmission of Reduced-Guard-Interval OFDM
« journal navigation
Filter-bank based efficient transmission of Reduced-Guard-Interval OFDM
Optics Express, Vol. 19, Issue 26, pp. B370-B384 (2011)
We propose a new way to structure the digital signal processing for reduced guard-interval (RGI) OFDM optical receivers. The idea is to digitally parallelize the processing over multiple parallel
virtual sub-channels, occupying disjoint spectral sub-bands. This concept is well known in the optical or analog sub-carrier domains, but it turns out that it can also be performed efficiently in the
digital domain. Here we apply critically sampled uniform analysis and synthesis DFT filter bank signal processing techniques in order to realize a novel hardware efficient variant of RGI OFDM,
referred to as Multi-Sub-Band OFDM (MSB-OFDM), reducing by 10% receiver computational complexity, relative to a single-polarization version of the CD pre-equalizer. In addition to being more
computationally efficient than a conventional RGI OFDM system, the signal flow architecture of our scheme is amenable to being more readily realized over multiple FPGAs, for experimental
demonstrations or flexible prototyping.
© 2011 OSA
1. Introduction
(RGI) coherent optical
Orthogonal Frequency Division Multiplexing
(OFDM) [
1. X. Liu, S. Chandrasekhar, B. Zhu, P. J. Winzer, A. H. Gnauck, and D. W. Peckham, “448-Gb/s Reduced-Guard-Interval CO-OFDM Transmission Over 2000 km of Ultra-Large-Area Fiber and Five 80-GHz-Grid
ROADMs,” J. Lightwave Technol. 29(4), 483–490 (2011). [CrossRef]
] is a leading method leveraging the spectral efficiency advantages of OFDM while mitigating the excessive penalty of the Cyclic Prefix (CP) overhead.
In this paper we propose a new way to structure the
digital signal processing
(DSP) for RGI OFDM optical receivers:
Multi-Sub-Band OFDM
(MSB-OFDM), expanding on our brief introduction [
]. The idea (
Fig. 1
) is to digitally parallelize the transmitter receiver processing into multiple,
, parallel virtual sub-channels occupying disjoint spectral sub-bands, each of bandwidth
, where
is the total channel bandwidth (the channel may be one of multiple WDM channels, i.e. the sub-banding provides a lower multiplexing tier under the WDM level). Our optical communication community is
well-used to such (de)multiplexing concepts in the photonic or analog sub-carrier domains, but it turns out that the assembly and partitioning of sub-bands may also be performed
efficiently in the digital domain
This novel digital sub-banding method attains reduced cyclic prefix (CP) overhead while saving computational complexity, as the received samples are digitally partitioned into independent
sub-streams, over spectrally disjoint sub-bands which may be simply and accurately processed, generally improving almost all coherent OFDM receiver functions.
The core processing technique is the incorporation of a critically sampled (CS) analysis filter bank algorithm into the DSP front-end, breaking the digitized high-speed stream of a single WDM channel
into multiple spectral sub-bands, enabling to reduce the CP overhead. This essentially performs the same function as the conventional frequency domain pre-equalizer of RGI OFDM, but the processing is
parallelized in frequency rather than in time.
A previous paper [
] highlighted the advantage of OFDM multi-banding at the Rx, re CP overhead reduction, but pursued an analog sub-carrier or optically multiplexed version of multi-band OFDM, which requires more
complex modulator/receiver and more DACs/ADCs. We pick up where the analog/optical multi-band approach ends up, showing that the CP overhead reduction may be entirely realized digitally, within a
single OFDM receiver (no need for extra modulators, DACs, ADCs) and the filter-bank based realization is actually more efficient than that of conventional RGI OFDM, rather than being less efficient.
The paper is structured as follows: In section 2 we present the receiver concept of digital sub-band de-multiplexing for RGI OFDM. Section 3 introduces the synthesis filter bank at the transmitter.
In section 4 we review critically sampled filter bank implementations, and apply them to our configuration. Section 5 treats the performance of MSB-OFDM over the coherent optical link. Section 6
addresses orthogonal polarizations processing. The final section 7 discusses the multiple features of filter-bank realizations and outlines future work.
2. Digitally partitioning the processing into multiple sub-bands by means of a filter-bank
In this section we introduce the digital sub-band de-multiplexing rationale at the Rx side. We start by reviewing a conventional Coherent OFDM link using
-point (I)FFTs (the motivation for expressing the FFT size as the product of two integers
will be presented shortly).
Figure 2
illustrates an OFDM symbol or block at the Tx. The CP-add operation consists of replicating a section of the OFDM symbol tail at the symbol head. Due to the CD, in the fiber link, the OFDM symbol is
received with a delay spread,
proportional to the optical bandwidth,
, and the fiber length,
. The CP duration must be at least as long as the CD delay spread:
Let us now review the concept of RGI-OFDM.
Figure 3
introduces an overlap-save CD pre-equalizer implemented as a
(FDE), ahead of the CP-drop and FFT operations at the Rx. The combined impulse response of the optical channel and the FDE is now shorter than
thus we may now use a tiny CP. Therefore, we have a tradeoff between the large CP overhead of conventional OFDM and the high HW complexity in RGI OFDM, required to reduce the CP overhead.
Another way to reduce the CD delay spread, and hence the CP, would be to cut down the bandwidth, but along with it the bit rate would be reduced as well. Thus, to attain our high-speed target bit
rate, we might use multiple narrowband sub-channels in parallel, as indicated in
Fig. 1
, which illustrates our MSB OFDM Rx top-level structure. The key element is a bank of band pass filters in the digital domain, say
= 4 band pass filters in parallel, each handling 1/
(here a quarter) of the channel bandwidth, assumed here
= 25 GHz. Therefore, as each sub-channel at each band pass filter is narrowband, its CD-induced delay spread is very small. In addition, as different sub-bands have different center frequencies, they
propagate with different group velocities, thus the CD induces a successive time staggering of the individual sub-channels. Thus, each sub-band experiences little delay spread internally, however
different sub-bands arrive at different times at the Rx. Notice that at this point each filter-bank output still runs at the high sampling rate of the ADC, which is
times faster than a rate commensurate with its reduced
bandwidth. Therefore, we may place down-samplers (as described by arrow-down-
blocks) at the band pass filter outputs, retaining every
-th sample and discarding the samples in between, in effect reducing the sampling rate by a factor of
. The resulting Rx digital front-end is referred to in the DSP literature as
critically-sampled uniform-DFT-filter-bank
. The multiple outputs of the filter-bank are taken at the decimator outputs. Notice that as the sub-bands of the band pass filters are all equal in spectral widths, and their spectral supports are
contiguous and non-overlapping, this type of filter bank alludes to an efficient implementation, based on uniformly frequency shifting a single low-pass
prototype filter
by means of a DFT (section 4). Here, the
term refers to all the contiguous sub-bands having the same spectral width, while the term
critically sampled
(CS) refers to the decimation rate,
, coinciding with the number of paths,
, in the filter bank:
. In contrast, a filter bank with
, as treated in [
] is referred to as
(OS). In this paper we solely treat CS uniform-DFT-filter-banks, which will be henceforth simply referred to, for brevity, as
filter banks
As indicated in
Fig. 4
, each of the
filter bank outputs feeds a sub-band Rx. Thus, we have an array of
low-speed sub-band receivers following the filter bank. The rationale is that while we have invested some computational overhead in partitioning the incoming spectrum into
sub-bands, each sub-band receiver is now quite slow, therefore will be considerably simpler to realize, requiring much less than 1/
-th of the complexity of a full-band conventional OFDM receiver (including the FDE pre-equalizer), therefore we win in overall complexity, even when accounting for the filter-bank extra overhead.
Beyond complexity reduction, additional key complexity and performance advantages of partitioning into sub-bands will be outlined in section7. Here we highlight the tree structure of the DSP
structure of
Fig. 4
, with the filter-bank at the stem of the tree being the high-speed bottleneck, whereas the tree branches are terminated into slow rate sub-band receivers, wherein not only is the processing
simplified, but programmable hardware such as FPGA or even software based DSP may be used to provide flexibility of realization and prototyping. Evidently, in order for this structure to make sense,
it is crucial to devise an efficient implementation for the high-speed filter bank (else all the advantages gained in the slow sub-band Rx array would be offset by the added filter-bank complexity).
This challenge will be addressed in section 4, wherein a detailed block diagram for the implementation of each slow sub-band receiver will be shown. Actually, each sub-band Rx is considerably simpler
than a conventional full-rate OFDM receiver. As per
Fig. 4
, the time-staggering of the various sub-bands is partially corrected by discrete-time delays of integer number of slow-rate samples applied to each sub-band stream. These delays perform FFT window
synchronizations. Simple and accurate Schmidl-Cox [
6. T. M. Schmidl and D. C. Cox, “Robust frequency and timing synchronization for OFDM,” IEEE Trans. Commun. 45(12), 1613–1621 (1997). [CrossRef]
] based algorithms are applicable for estimating the required delay in each sub-band, but this is outside the scope of this paper. Next, each sub-band Rx performs an
-point FFT, followed by an array of 1-tap
(EQZ) (complex scaling of each of the FFT sub-carrier outputs). For sufficiently large number
of sub-bands (e.g.
= 16, for our 25 GHz channel), it turns out that the residual CD delay spread is less than the duration Ts of a single sample (0.644∙Ts for 2000km fiber), thus there is no need for a FDE
pre-equalizer in each sub-band receiver, which simply contains an integer-delay for coarse FFT window synchronization, the
-point FFT and the 1-tap EQZ, terminated in QAM-slicing of each of the 1-tap EQZ outputs, corresponding to the
N tones
(OFDM subcarriers) within each OFDM band. The 1-tap EQZ may correct any residual CD over the small sub-band (though in practice for say,
= 16 sub-bands and
= 25 GHz aggregate bandwidth the CD induced quadratic phase-shift over each sub-band is negligible), as well as compensate for the fine (fractional) timing. Notice that, in principle, the frequency
domain 1-tap EQZ is able to provide timing correction with arbitrary resolution, provided the time shift is within the CP span. To improve spectral efficiency, we use the shortest CP possible, just a
single sample per sub-band, hence the 1-tap EQZ provides perfect fractional time delay correction, while the integer pre-delay in the FFT window synchronization addresses the required integer delay
correction. This indicates that the multi-sub-band approach also enables simplified and robust OFDM timing recovery, and the overall sub-band receiver structure is indeed very simple.
Thus, the overall scheme adopts a
strategy, with the
occurring in the filter bank, while the
is completed in the individual sub-band receivers. In particular, the sub-band Rx operations are performed at the
times slower rate, thus the
-points FFTs are manageable, although we have
of them. The overall processing is now equivalent to performing larger MN-points FFT, which would have been quite computationally demanding at the full rate of the aggregate channel. Those versed in
FFT complexity may concur that
decoupled FFTs each of
points, are simpler than a single
-points FFT, even if efficiently realized by the Cooley-Tuckey algorithm. Thus, if high spectral and temporal (low CP overhead) efficiency is our objective, the number of OFDM tones should be large,
and so should the FFT size, which would be challenging at high speed (see [
] for state-of-the-art FFT sizes at high speed). The proposed filter bank approach effectively enables a very large FFT size, coupled with the other heavy operations required in the FDE equalization.
The savings are not only in the number of multipliers, but also in the elimination of a considerable amount of data shuffling involved in large FFT size generation – e.g. if the full
-points FFT is realized as a radix-
structure, we have
FFT-s of size
, followed by twiddle factor multiplications then followed by
FFTs of size
. However, the data needs to be re-organized after the first array of
sub-FFT-s of size
such that each of these FFTs feeds each of the
FFTs of size
of the second array; it is this data shuffling as well as the second array of sub-FFTs that get eliminated in our filter-bank based approach, which may be viewed as an efficient way of organizing the
large FFT, coupled with the dramatic impact of eliminating of the FDE pre-equalizer, which simplifies the overall processing. The FDE pre-equalization is not required at all ahead of the OFDM DFT in
each of the sub-band receivers, as just 1-tap equalization suffices after the sub-band FFT – this is the case provided that a sufficient number of sub-bands is used, such that each sub-band is
effectively frequency flat, seeing very little CD. To account for the FDE elimination, notice that it is well-known that the complexity of a CD equalizer, realized in the time domain as an FIR
filter, is quadratic rather than linear in the bandwidth
(more bandwidth implies more CD delay spread in seconds, but in addition, the sampling rate also gets proportionately higher with
, thus the number of samples to be processed in the CD equalizer goes as
(note that even if the CD equalizer is realized in the frequency domain, the effective temporal window to address, measured in samples, is still
- it is just that in a frequency-domain realization the number of multiplications required to implement the FDE is less than the number of FIR taps). Conversely, if the bandwidth is reduced by a
factor of
, (upon moving from the full band,
, to a sub-band,
) the quadratic dependence,
, implies that the impulse response window of the CD, as measured in full-rate samples, is reduced by a factor
. For ultra-long haul 2000km transmission of a 25 GHz channel over standard SMF, the CD delay spread duration is164 full-rate samples (here full-rate means a sampling rate slightly exceeding the
Nyquist rate of 25 GS/s). When the bandwidth is reduced by
= 16, i.e., for a single sub-band, we must scale down the sampling rate by the 1/
= 1/16
factor, obtaining the result that the CD impulse response duration is less than a single sample. Accordingly, as already mentioned, it suffices to use a reduced
CP of a single sample per sub-band
(i.e., CP overhead, of just 1/
for the
-point FFT assumed per sub-band). This indicates that we may realize RGI OFDM, with low residual overhead, e.g. 1/128 = 0.78% CP OH for
= 128 points FFT in an
= 16 sub-bands system.
The general principle at work here is that the overall optical receiver complexity is generally super-linear (increases faster than linear) in B, thus when the bandwidth is reduced by a factor of M,
the receiver complexity reduces by factor larger than 1/M, and as we have M sub-band receivers, the overall sub-band complexity of the sub-band receiver array is much reduced relative to the
full-rate conventional receiver. This principle will apply to every receiver function or block, with some functions of the conventional full-rate receiver being completely eliminated (such as the
FDE), while other functions (e.g. the timing recovery), will be seen to require less complexity overall, when summed up across all sub-band receivers. These savings are partially offset by the
overhead incurred in partitioning the signal into sub-bands by means of the filter-bank. In this introductory presentation of the filter bank concept we shall not be able to address all receiver DSP
functions, nor carry out a full comparison of the “full-rate” vs. filter-bank implementations, but in the remainder of this paper we shall focus on a thorough comparative analysis of the key
functionality of CD equalization which weighs heavily on the complexity of reduced guard band interval OFDM realizations. In section 6 we shall also briefly outline the filter-bank based receiver
structure required to handle polarization de-multiplexing.
3. Synthesis filter bank at the transmitter
Heretofore, we have addressed the receiver digital FDM de-multiplexing – the partitioning of the overall channel into multiple sub-channels, each transmitted over a sub-band, by means of a
filter-bank structure which may be characterized as an analysis filter bank (as it analyzes or separates the overall spectrum into sub-bands). Notice that in the absence of nonlinearities, the linear
impairments experienced by each of the sub-band Rx-s are decoupled -the signal received by each sub-band Rx is processed independently from the processing occurring in the other sub-band receivers.
This implies that each sub-band signal has also been generated at the transmitter independently from other sub-band signals. Thus, each sub-band receiver detects a single sub-channel, correspondingly
generated at the transmitter, decoupled from the other sub-channels. At the Rx we have the analysis filter bank - the dual concept at the transmitter is that of FDM multiplexing of multiple
sub-channels, digitally forming the overall channel spectrum by means of a synthesis filter bank, juxtaposing multiple sub-bands each carrying a sub-channel to be addressed to a corresponding
sub-band receiver (
Fig. 5
). As is well known in filter-bank DSP theory, the analysis and synthesis filter banks are duals of each other in the critically sampled case. Efficient implementation of synthesis and analysis CS
filter banks is addressed in section 4. Low filter-bank complexity is a key to achieving low overhead for the filter-bank operations, enabling to take advantage of the complexity savings in the
sub-band receivers.
The dual
filter bank description implies that any modulation format, besides OFDM, may be supported for the sub-channel streams injected at the transmitter in each of the sub-bands, by means of the synthesis
filter bank. This is indeed the case – e.g. each sub-channel may consist of single-carrier QAM – however this case will not be further pursued here. Instead, this paper focuses on the case wherein
each sub-channel, as transmitted over a sub-band, consists of a tributary OFDM signal with
N tones
(subcarriers). It is evident that the frequency-domain juxtaposition of
such OFDM tributaries, each containing
OFDM tones actually forms a single aggregate OFDM signal with
tones. A necessary condition for it is that the sub-channels be properly synchronized so that all the tones from all sub-bands fall onto a common frequency grid (this synchronization condition is
readily achieved digitally, provided the center frequencies of the sub-bands are made to fall onto the same grid). This further indicates, that in the OFDM case, with each of the sub-band receivers
detecting independent OFDM signal, we do not actually have to use a synthesis filter bank at the transmitter, but we may simply synthesize the overall transmitted signal by means of a conventional
OFDM transmitter with
points FFT size. This is actually the approach adopted in [
], nevertheless in this paper we do pursue a filter bank (a synthesis one) at the transmitter as well, as this may be advantageous for future joint processing of polarizations at the transmitter
(e.g. in order to realize polarization-time coding).
4. Critically Sampled analysis and synthesis Filter Bank implementation
In this section we pursue the computationally efficient digital implementation of the filter banks.
Figure 6
shows the block-diagram of a communication system employing filter-bank modulation and demodulation concepts [
]. A set of
modulation symbols
, (with
discrete-time at the rate of 1/
), is input in parallel into a set of
discrete-time filters with transfer functions
. This set of
filters with common additive output represents a so-called
filter-bank. At the receiver, demodulation is achieved by an
filter bank (a set of filters with common input) comprising
followed by
-fold down-samplers. When
(i.e. the number of filter-bank paths equals the down-sampling factor), a critically sampled filter-bank structure is obtained.
In practice, filter-bank modulation systems are almost never directly implemented as shown in
Fig. 6
(as band pass filters), the reason being that in this configuration the filters must operate at a rate that is
times faster than the symbol rate 1/
. If the band-pass frequency responses are appropriately selected, it is possible to achieve quite efficient realizations. For example in the critically sampled case, if the
transmit (receive) filters are selected as frequency-shifted versions of a single baseband filter H(f) (G(f)), the so-called
prototype filter
, the system of
Fig. 6
becomes equivalent to that shown in
Fig. 7
Fig. 8
. The next step is to realize the discrete-time modulations with the complex exponentials, by means of an inverse discrete Fourier Transform (IDFT) applying LTI filtering operations on the
branches, inserting
filters corresponding to the so-called
polyphase components
of the prototype filter [
]. These structures are quite standard in DSP theory but have not yet been applied in the digital domain for optical communication, to the best of our knowledge. The complexity of the resulting DFT +
polyphase filters (
Fig. 9
) structure is very low, and will be evaluated in the next section.
5. Performance of Multi-Sub-Band (MSB) OFDM for Optical Link
5.1 MSB OFDM system
Figure 10
presents a detailed MSB-OFDM block diagram, spectrally partitioning the channel in the Rx by means of a critically sampled filter bank, into multiple (
= 16) decoupled sub-channels, each of which is pulse-shaped in the Tx for tight spectral confinement, requiring
fold digital interpolation. Matched filters are used in the Rx filter bank, with
-fold decimation. To generate MSB-OFDM, a single
Root Raised Cosine
prototype filter
(PF) pulse shaped with tight α = 0.015 roll-off(and truncated to a finite number
of taps, i.e.
taps per polyphase) is frequency shifted by means of discrete-time modulations with complex exponentials, corresponding to uniformly frequency multiplexing the multiple sub-bands to form each of the
Quasi-Nyquist WDM [
1. X. Liu, S. Chandrasekhar, B. Zhu, P. J. Winzer, A. H. Gnauck, and D. W. Peckham, “448-Gb/s Reduced-Guard-Interval CO-OFDM Transmission Over 2000 km of Ultra-Large-Area Fiber and Five 80-GHz-Grid
ROADMs,” J. Lightwave Technol. 29(4), 483–490 (2011). [CrossRef]
] aggregate channels. Efficient implementation is based on a
uniform DFT filter bank
as described earlier, wherein the filtering operations on the
= 16 (I)DFT branches correspond to the so-called
components of the RRC prototype filter. Each polyphase filter with
= 30 taps is in turn implemented in the frequency domain based on
L[PF] = 128
(I)FFTs, with overlap equal to
= 30.Note that polyphase
based uniform DFT filter bank are modern multi-rate DSP structures, first applied to optical transmission in our recent work [
], reducing the conceptual filter bank of
Fig. 4
to the very low complexity implementation of
Fig. 9
Hardware-wise, in addition to the top-tier DFT with
= 16 points, used in the filter-bank, we also require a processing bottom tier of slow
= 128 points (I)DFTs, implementing, for each sub-channel, OFDM transmission with
subcarriers, with a minimal single-sample cyclic prefix. The shifted RRC spectral profiles are cleverly designed to slightly spectrally overlap (
Fig. 11a
), and in addition, three OFDM subcarriers per sub-channel, namely those located at the RRC filters roll-off transitions, are turned off (or used as pilots). The net effect is to generate spectral
guard bands wasting as little as 3%, robustly decoupling the individual sub-channels, allowing them to be processed independently at the bottom tier using slow
= 128 (I)FFTs for the OFDM modulations per sub-channel. At the top tier we have the
= 16 points fast (I)FFT required for the filter bank realization, but fortunately these fast (I)FFTs are kept short (a 16 points (I)FFT requires 8 fast multipliers). Our two-tiered FFT processing
effectively realizes the equivalent of a very long
= 2048 points FFT in an equivalent RGI OFDM system. It is this tiered FFT structure that enables efficient
FPGA/ASIC parallelization. As for our system performance, the received QPSK constellation for the worst OFDM subcarrier (at the band edge) is shown in
Fig. 12a
; the Modulation Error Ratio (MER) vs. OFDM subcarrier index (identical for all sub-channels) is shown in
Fig. 12b
Figure 13
plots average
Bit Error Ratio
(BER) vs. OSNR, over all sub-channels of the aggregate Nyquist WDM channel are shown in
Fig. 13
We next itemize the complex multiplier (CM) counts of the various stages of the proposed algorithm, yielding complexity formula for our scheme, with M,N, N[PF], L[PF] as defined above. We assume for
complexity calculations:
• - Polyphase filtering is performed in frequency domain using the OverLap-Save (OLS) method for each polyphase channel.
• - A Filter bank is used at RX only. The Tx comprises a conventional FFT based OFDM realization.
• - Sub-band processing includes an MN-points IFFT at Tx and an N-points FFT at RX followed by 1-tap equalizer.
The complexity of the filter banks based MSB-OFDM scheme is expressed as follows:
The complexity due to sub-band processing is:
The total complexity is therefore:
Figure 14
compares our complexity, as per the last formula, with that of a reference RGI OFDM system using the OS-FDE, adjusted to provide the same spectral efficiency, the complexity of which was already
modeled in [
6. Orthogonal Polarizations processing
Heretofore, we have treated a single polarization. The full treatment of polarization multiplexing and de-multiplexing is outside the scope of this paper, however, it is important to get a sense of
the “polarization processing roadmap”. Hence, in this section we briefly preview the MSB-OFDM receiver architecture modifications required in order to accommodate polarization de-multiplexing.
Figure 15
illustrates the proposed receiver architecture. The analysis filter bank is doubled up, with one instance of a filter bank for each of the X and Y received orthogonal polarizations. The X and Y
sub-channels associated with each of the sub-bands are routed in pairs to corresponding 2x2 MIMO polarization sub-band receivers. Assuming
= 16 sub-bands, the 25 GHz channel, is partitioned into 25/15 = 1.666 GHzsub-bands. Each of the 16 MIMO sub-band receivers then takes two 1.66 GS/s X and Y inputs from the two filter banks and
generates two 1.66GBd outputs, corresponding to the de-mixed X and Y polarizations (estimates of the original X and Y polarizations at the Tx).
The detailed analysis of the dual polarization sub-band receiver processing and resulting performance are outside the scope of this paper, but let us just mention that similarly to the CD handling
reducing to memoriless single-tap processing per each narrow sub-band, the polarization transformation also becomes memoriless, as each sub-band sees a fixed (frequency flat) birefringence.
Accordingly, there is no longer a need for a complex ‘butter-fly’ MIMO 2x2 polarization equalizer with memory with 4μDGDtaps (where μDGDis the DGD between the two principal polarizations, expressed
in sample units), but rather 4 complex taps per OFDM tone suffice to realize a “scalar” inverse Jones 2x2 matrix inversion, following the sub-band IFFTs of the X and Y polarization signals.
7. Discussion and Conclusions
In this paper we introduced a “divide&conquer” digital sub-band (de)multiplexing strategy, digitally partitioning the wideband spectrum of each WDM channel into M narrow sub-bands, to be separately
processed. This technique improves almost every aspect of receiver signal processing. Some of the advantages were already surveyed above, whereas the remaining ones are briefly outlined in this
One remaining deficiency of our scheme is 3% loss in spectral efficiency due to turning off three sub-carriers in the roll-off transitions of the band-pass filters of the filter bank. Thus our
spectral efficiency is 97%, whereas our CP-overhead temporal efficiency is 1-1/
= 1-1/128 = 99.2% .In a follow-up publication expanding on our brief introduction [
], we shall show how to replace the critically sampled filter banks treated in this paper, by
(OS) filter banks, which are 100% spectrally efficient and are even more hardware efficient than our first-generation CS filter-bank based systems described here. Nevertheless, the CS filter banks
treated here are easier to understand than their OS counterparts, therefore the current CS approach, serves as the best introductory approach to get acquainted with the FB method, the advantages of
which are briefly outlined in the following:
• 1.
Here we have only established modest savings of 10% due to the elimination of the FDE (accounting for the filter-bank overhead), however, the HW complexity (as measured in terms of the number of
complex multipliers) of the overall sub-band MSB-OFDM receiver may be improved even further by introducing oversampled filter banks. E.g. the FDE improvement evaluated in [
] attained 19% savings in the complex multiplier count relative to the FDE of conventional RGI OFDM.
• 2. The filter-bank Tx and Rx are highly amenable to FPGA parallelization, e.g. for the purpose of demonstrations and flexible prototyping. Partitioning of the overall processing task over
multiple FPGAs is facilitated, and so is efficient parallelized processing in ASIC realizations, taking advantage of the tree structure of the filter banks, with the full data stream being split
into (combined from) multiple independent slower parallel paths which do not exchange any information, and which directly interface to the ADC/DAC in parallel form. In contrast, in a conventional
realization the multiple FPGAs must communicate among them at full rate.
• 3. Filter-banks effectively provide a novel method to generate arbitrarily large FFT sizes (e.g. 512-4096 points) and the effective large FFT is readily partitioned over multiple (few) FPGA(s),
requiring far less inter-FPGA communication. Thus, FFT algorithms of arbitrarily large sizes can now be readily parallelized and spread across multiple processors (which might have significance
for other processing areas as well).
• 4. Each sub-channel is quite narrowband (M times narrower in bandwidth than the overall channel), hence sees an almost frequency-flat end-to-end transmission environment: Each sub-band
experiences negligible CD and PMD. This points to extremely simple sub-band receivers, Pol-Demux and PMD equalization are substantially easier per sub-band, requiring just 2x2 MIMO memoriless
• 5.
OFDM Rx synchronization (timing recovery, coarse and fine) are substantially simpler and more accurate per sub-band. In particular, wireless window synchronization algorithms [
6. T. M. Schmidl and D. C. Cox, “Robust frequency and timing synchronization for OFDM,” IEEE Trans. Commun. 45(12), 1613–1621 (1997). [CrossRef]
], which do not work well full band due to the CD of the overall channel (which is not known prior to timing synchronization –“chicken&egg problem”) may now be made to work “by-the-textbook” for
each frequency-flat sub-band.
• 6. Channel estimation becomes much simpler for each sub-band; Moreover, it may be further improved by joint sub-bands processing. Very simple and accurate monitoring of the channel CD is
• 7.
Carrier Recovery advantages:
- or
Dispersion-Enhanced Phase-Noise
(EEPN/DEPN) [
10. W. Shieh and K. P. Ho, “Equalization-enhanced phase noise for coherent-detection systems using electronic digital signal processing,” Opt. Express 16(20), 15718–15727 (2008). [CrossRef]
] is cut down by a factor of
, practically eliminated. EEPN is the effect whereby the phase noise of the LO laser is enhanced through the CD equalizer which has long impulse response (large delay). With the filter-bank
method, as each sub-band is narrowband, its CD impulse response duration is
times shorter, therefore EEPN is reduced by a factor of
• 8. Adaptive parameters adjustment algorithms (for CD, PMD, CR, etc.) converge faster and more accurately, due to the sub-banding, as is well-known in adaptive signal processing. Not only is the
number of coefficients in each sub-band smaller, but also each sub-band is considerably flatter in its frequency response, which implies much smaller eigenvalue spread, hence faster adaptive
algorithm convergence. This convergence speed-up will be manifested in every adaptive DSP algorithm, e.g. CMA for polarization de-multiplexing.
• 9. IQ imbalance correction algorithms may be more effectively formulated in the filter-bank context. It will be seen that pairs of sub-bands (with center frequencies symmetric vs. the mid-band
frequency) will be coupled in pairs in order to generate simple and rapidly converging IQ imbalance correction.
• 10.
Nonlinear compensation (NLC) is facilitated and improved. NLC may be applied per individual sub-band, and may be further improved by joint sub-bands processing- a recent study highlighting the
NLC advantage upon processing by sub-bands may be found in [
• 11. The proposed sub-band based algorithms do not require special excessive allocation of bits, relative to the full channel conventional implementations.
Two remaining disadvantages of the proposed technique are: (i) the realization of the sub-band partitioning by means of the critically sampled filter bank is still somewhat computationally intensive
(due to the requirement to realize spectrally sharp filters) and partially offsets the savings in the sub-band receivers complexity (though we still win overall, attaining a total 10% complexity
reduction relative to conventional RGI OFDM FDE). (ii): There is some spectral inefficiency penalty incurred in the filter transitions at the sub-band boundaries. In fact, this penalty trades off
with the complexity of the polyphase filters, i.e. we have a tradeoff between (i) and (ii), which we would like to further improve. In a follow-up publication expanding on [
], we shall introduce and develop the concept of
oversampled filter banks
to address the two remaining deficiencies mentioned above – this approach will remarkably attain 100% spectral efficiency while even further improving the computational efficiency.
A final disadvantage to be mentioned re the filter bank concept, is that it is hard to explain to skeptics not versed in multirate DSP.
Additional future work on the filter-bank based MSB-OFDM scheme, which is already underway with promising interim results, will further explore all the remaining points mentioned in 1-11above, which
were not addressed in this paper: channel estimation, FFT window synchronization, carrier recovery (phase and frequency estimation and compensation), polarization de-multiplexing, IQ imbalance
compensation and non-linear compensation, all of which functionalities are likely to be improved, once pursued in the context the decoupled narrowband sub-channels. A detailed treatment of these
multiple issues is outside the scope of this introductory paper on the filter-bank approach, however, initial studies show advantages of the filter-bank based architecture in all these aspects.
References and links
1. X. Liu, S. Chandrasekhar, B. Zhu, P. J. Winzer, A. H. Gnauck, and D. W. Peckham, “448-Gb/s Reduced-Guard-Interval CO-OFDM Transmission Over 2000 km of Ultra-Large-Area Fiber and Five 80-GHz-Grid
ROADMs,” J. Lightwave Technol. 29(4), 483–490 (2011). [CrossRef]
2. A. Tolmachev and M. Nazarathy, “Real-time-realizable Filtered-Multi-Tone (FMT) Modulation for Layered-FFT Nyquist WDM Spectral Shaping - paper SPMB3,” in European Conference of Optical
Communication (ECOC)(2011).
3. L. B. Du and A. J. Lowery, “Mitigation of dispersion penalty for short-cyclic-prefix coherent optical OFDM systems,” in European Conference of Optical Communication (ECOC)(2011).
4. S. L. Jansen and T. Schenk, “Optical OFDM for Long-Haul Transport Networks - Tutorial MH1,” in LEOS - IEEE Lasers and Electro-Optics Society Annual Meeting Conference Proceedings(2008).
5. A. Tolmachev and M. Nazarathy, “Low-Complexity Multi-Band Polyphase Filter Bank for Reduced-Guard-Interval Coherent Optical OFDM - paper SPMB3,” in Signal Processing in Photonic Communications
(SPPCom), Advanced Photonics OSA Conference(2011).
6. T. M. Schmidl and D. C. Cox, “Robust frequency and timing synchronization for OFDM,” IEEE Trans. Commun. 45(12), 1613–1621 (1997). [CrossRef]
7. R. I. Killey, Y. Benlachtar, R. Bouziane, P. A. Milder, R. J. Koutsoyannis, C. R. Berger, J. C. Hoe, M. Püschel, P. M. Watts, and M. Glick, “Recent Progress on Real-Time DSP for Direct Detection
Optical OFDM Transceivers - paper OMS1,” in Optical Fiber Communication Conference (OFC/NFOEC)(2011).
8. F. J. Harris, Multirate Signal Processing for Communication Systems (Prentice Hall, 2004).
9. J. Leibrich and W. Rosenkranz, “Frequency Domain Equalization with Minimum Complexity in Coherent Optical Transmission Systems,” in Optical Fiber Communication Conference (OFC/NFOEC)(2010).
10. W. Shieh and K. P. Ho, “Equalization-enhanced phase noise for coherent-detection systems using electronic digital signal processing,” Opt. Express 16(20), 15718–15727 (2008). [CrossRef] [PubMed]
11. Q. Zhuge, B. Châtelain, C. Chen, and D. V. Plant, “Mitigation of Equalization-Enhanced Phase Noise Using Reduced-Guard-Interval CO-OFDM,” in Optical Fiber Communication Conference (OFC/NFOEC)
12. E. Ip, N. Bai, and T. Wang, “Complexity versus Performance Tradeoff for Fiber Nonlinearity Compensation Using Frequency-Shaped, Multi-Sub band Back propagation - paper OThF4,” in Optical Fiber
Communication Conference (OFC/NFOEC)(2010).
OCIS Codes
(060.1660) Fiber optics and optical communications : Coherent communications
(060.4080) Fiber optics and optical communications : Modulation
(060.4230) Fiber optics and optical communications : Multiplexing
ToC Category:
Transmission Systems and Network Elements
Original Manuscript: September 19, 2011
Revised Manuscript: October 21, 2011
Manuscript Accepted: October 30, 2011
Published: November 18, 2011
Virtual Issues
European Conference on Optical Communication 2011 (2011) Optics Express
Alex Tolmachev and Moshe Nazarathy, "Filter-bank based efficient transmission of Reduced-Guard-Interval OFDM," Opt. Express 19, B370-B384 (2011)
Sort: Year | Journal | Reset
1. X. Liu, S. Chandrasekhar, B. Zhu, P. J. Winzer, A. H. Gnauck, and D. W. Peckham, “448-Gb/s Reduced-Guard-Interval CO-OFDM Transmission Over 2000 km of Ultra-Large-Area Fiber and Five 80-GHz-Grid
ROADMs,” J. Lightwave Technol.29(4), 483–490 (2011). [CrossRef]
2. A. Tolmachev and M. Nazarathy, “Real-time-realizable Filtered-Multi-Tone (FMT) Modulation for Layered-FFT Nyquist WDM Spectral Shaping - paper SPMB3,” in European Conference of Optical
Communication (ECOC)(2011).
3. L. B. Du and A. J. Lowery, “Mitigation of dispersion penalty for short-cyclic-prefix coherent optical OFDM systems,” in European Conference of Optical Communication (ECOC)(2011).
4. S. L. Jansen and T. Schenk, “Optical OFDM for Long-Haul Transport Networks - Tutorial MH1,” in LEOS - IEEE Lasers and Electro-Optics Society Annual Meeting Conference Proceedings(2008).
5. A. Tolmachev and M. Nazarathy, “Low-Complexity Multi-Band Polyphase Filter Bank for Reduced-Guard-Interval Coherent Optical OFDM - paper SPMB3,” in Signal Processing in Photonic Communications
(SPPCom), Advanced Photonics OSA Conference(2011).
6. T. M. Schmidl and D. C. Cox, “Robust frequency and timing synchronization for OFDM,” IEEE Trans. Commun.45(12), 1613–1621 (1997). [CrossRef]
7. R. I. Killey, Y. Benlachtar, R. Bouziane, P. A. Milder, R. J. Koutsoyannis, C. R. Berger, J. C. Hoe, M. Püschel, P. M. Watts, and M. Glick, “Recent Progress on Real-Time DSP for Direct Detection
Optical OFDM Transceivers - paper OMS1,” in Optical Fiber Communication Conference (OFC/NFOEC)(2011).
8. F. J. Harris, Multirate Signal Processing for Communication Systems (Prentice Hall, 2004).
9. J. Leibrich and W. Rosenkranz, “Frequency Domain Equalization with Minimum Complexity in Coherent Optical Transmission Systems,” in Optical Fiber Communication Conference (OFC/NFOEC)(2010).
10. W. Shieh and K. P. Ho, “Equalization-enhanced phase noise for coherent-detection systems using electronic digital signal processing,” Opt. Express16(20), 15718–15727 (2008). [CrossRef] [PubMed]
11. Q. Zhuge, B. Châtelain, C. Chen, and D. V. Plant, “Mitigation of Equalization-Enhanced Phase Noise Using Reduced-Guard-Interval CO-OFDM,” in Optical Fiber Communication Conference (OFC/NFOEC)
12. E. Ip, N. Bai, and T. Wang, “Complexity versus Performance Tradeoff for Fiber Nonlinearity Compensation Using Frequency-Shaped, Multi-Sub band Back propagation - paper OThF4,” in Optical Fiber
Communication Conference (OFC/NFOEC)(2010).
OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies.
In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed.
Fig. 1 Fig. 2 Fig. 3
Fig. 4 Fig. 5 Fig. 6
Fig. 7 Fig. 8 Fig. 9
Fig. 10 Fig. 11 Fig. 12
Fig. 13 Fig. 14 Fig. 15
« Previous Article | Next Article »
|
{"url":"http://www.opticsinfobase.org/oe/fulltext.cfm?uri=oe-19-26-B370&id=224548","timestamp":"2014-04-17T15:39:31Z","content_type":null,"content_length":"206491","record_id":"<urn:uuid:721ca109-a7ed-4a02-81e6-1af83cf783a9>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
|
nth derivative
This topic contains 5 replies, has 4 voices, and was last updated by
2plus2 2 years, 1 month ago
Author Posts
Author Posts
February 23, 2012 at 5:14 pm #432
pin690 What is the nth derivative of nx^n?
February 23, 2012 at 8:06 pm #438
The first derivative is
The second derivative is
The third derivative is
thecalculuskid Interestingly, the nth derivative would be
Member (n^2)(n-1)(n-2)(n-3)….[n-(n-2)][n-(n-1)]x^(n-n)
where x^(n-n) is x^0, which is equal to 1, so that term kind of “diappears”….
The other factors can be looked upon in the reverse order and you would get:
n^2 can be broken down into (n)(n).
Hence the nth derivative is
n! times n or ….. nn!
February 23, 2012 at 8:12 pm #442
practicaldave Good answer @thecalculuskid
February 23, 2012 at 8:14 pm #443
thecalculuskid Thanks practicaldave.
February 23, 2012 at 8:15 pm #444
So, what do you think pin?
Do I get the point?
February 23, 2012 at 8:17 pm #445
2plus2 Yes, great answer.
You must be logged in to reply to this topic.
|
{"url":"http://yourmathguru.com/forums/topic/nth-derivative","timestamp":"2014-04-18T00:17:06Z","content_type":null,"content_length":"37161","record_id":"<urn:uuid:4ed53900-2aaf-40d0-9bd0-8c71b0118a7a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Numerology Calculator
The numbers are perhaps one of the most perfect and highest human concepts. Numerology is the discipline that studies the secret code of this wonderful vibration and teaches us to use it to our
What is Numerology ?
It is the discipline that studies the energy vibration of numbers and their influence on people, businesses , animals, objects , etc. .
Who was the creator of this discipline?
In the year 530 BCE Pythagoras, the Greek philosopher , methodically developing the relationship between the planets and their numerical vibration. I call it ” music of the spheres .” I also affirm
that words have a sound that vibrates in tune with the frequency of the numbers. It would be a facet of the harmony of the universe and the synchronicity of the laws of nature.
What used numerology ?
Basically , knowing a person’s full name and date of birth , you can discover your personality, your natural talents , their inner world , their spiritual challenges , their fate and the way in which
it will transit to perform. It also allows you to explore the karma that the person brings their past lives. Try the Numerology calculator to see what results you get!
How many numbers are used in this system?
According to Pythagoras , in numbers ranging from 1 to 9 is the basis of all the others . If a calculation beyond simple numbers , it should be reduced by adding the digits together.
Are there other numerological systems?
Many numerology systems. The ancient Babylonians , Egyptians, Hindus and Essenes , and the sages of the Arab world , were masters at finding the hidden meaning of the figures. The most accepted
numerological system, then the Pythagorean , which is associated with the Hebrew kabalah .
What differs from the previous ?
The correspondences between letters and numbers are different. Kabbalists do not accept that the numerical values of a letter are different from the next. For example , in the Pythagorean system to
the letter I corresponds to number 9 and the letter J the number 1. In the Kabbalist , both letters are allocated vibration number 1. Kabbalists also do not use the number 9.
Why Is The Number 7 Considered Sacred?
Is considered to represent the spiritual profile of all existence , while 9 is associated with the earthly. It takes into account the creation of the world in 7 days described in Genesis, and Seventh
Heaven, often mentioned in the Bible. The seven churches , seven seats, seven seats , 7 days march around the walls of Jericho. There are seven generations from David to the birth of Jesus . Ezekiel
speaks of the seven angels of the Lord , that come and go throughout the earth . In all religions, the number 7 is a key .
Why 9 is associated with the earthly ?
For starters, if you add all the numbers in our system [1 +2 +3 +4 +5 +6 +7 +8 +9 ] , we get 45 , which in turn adds 9 . Indestructible is a number , as many times you multiply or add your own
multiple is still obtaining 9 , which does not happen with any other number . For example : 9 +2 = 18 [ 1 +8 = 9 ] , 9×3 = 27 [ 2 +7 = 9 ] , and so on. Researchers have discovered that global changes
occur every 180 years fiery consequences [ 1 +8 +0 = 9] . The 360 degrees of a circle add up to 9. It is said that Jesus expired at nine in the evening. One Earth day has 1440 minutes ,totaling 9.
The human heart beats an average of 72 times per minute, back to 9. But most significant is that humans need 9 months of gestation to birth.
Is it true that 666 is the number of the Beast ?
The association is done is that 6 +6 +6 sum 18, which is reduced 9, the number associated with the earthly, the instinctive, the densest forces.
How do you think the number 0 ?
It represents power . The more we add zeros to a number, the more powerful will turn : 1, 10, 100,000 , 1,000,000. He is considered the number of eternity, the snake eating its own tail .
Visit the link below to get some more info!
|
{"url":"http://numerologycalculator01.wordpress.com/","timestamp":"2014-04-17T01:25:44Z","content_type":null,"content_length":"25335","record_id":"<urn:uuid:a5c33125-4f8d-4351-8f2a-72c9fb09519d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Prove that 2 is a primitive root modulo p.
Suppose q is a prime and p=4q+1 is a prime. Prove that 2 is a primitive root modulo p.
First observe that $2^{\frac{p-1}{2}}\equiv\pm 1 \mod{p}$. In this case, $2$ is a primitive root if and only if $2^{\frac{p-1}{2}}\equiv -1 \mod{p}$. (Ask if you'd like to see the proof of this.)
Assume otherwise, so $2^{\frac{p-1}{2}}\equiv 1 \mod{p}$. Note $\frac{p-1}{2} = 2q$. Now by Euler's Criterion, $2^{2q}\equiv 1\mod{p} \iff \left(\frac{2}{p}\right)=1$ $\left(\frac{2}{p}\right)=1 \
implies p\equiv 1 \text{ or } 7\mod{8}$. Since $p\equiv 1 \mod{4} \implies p\equiv 1 \text{ or } 5 \mod{8}$. Thus $p\equiv 1\mod{8} \implies 8\mid p-1 \implies 8\mid 4q \implies 2\mid q \implies q=
2$, which is a contradiction since $4\cdot 2+1 =9$ is not prime. Thus $2^{\frac{p-1}{2}} ot\equiv 1 \mod{p} \implies 2^{\frac{p-1}{2}}\equiv -1 \mod{p} \implies 2$ is a primitive root modulo $p$.
Last edited by chiph588@; March 27th 2010 at 11:01 AM.
|
{"url":"http://mathhelpforum.com/number-theory/134936-prove-2-primitive-root-modulo-p.html","timestamp":"2014-04-17T16:56:27Z","content_type":null,"content_length":"35539","record_id":"<urn:uuid:5e751362-b5f2-4ace-97bc-1cac34c10696>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Grapevine, TX Prealgebra Tutor
Find a Grapevine, TX Prealgebra Tutor
...I have three years of teaching/tutoring experience in a one-on-one setting. While a student at Trinity, I tutored freelance between 7-10 students regularly. I also worked for Huntington
Learning Center in San Antonio during my last year in college.
14 Subjects: including prealgebra, chemistry, ASVAB, SAT math
...I would work with manipulatives, diagrams and hands-on materials that your student would be able to touch and feel to learn. For the visual learner, I would work with various visual
illustrations, flash cards and other activities that would aid the student in their studies. Study skills would a...
8 Subjects: including prealgebra, algebra 1, grammar, vocabulary
...During this degree, I completed 1st and 2nd Year Electronic Engineering subjects. During my Master's, I completed one subject from 3rd Year Electronic Engineering called "Electronic System
Design". I have also taught Year 11 and Year 12 Physics, which includes topics in electricity and electronics.
56 Subjects: including prealgebra, chemistry, physics, calculus
...I have been working with elementary, middle school and high school students on TAKS weekly math and reading questions for the last year. I have completed several STARR/TAKS review including
3rd, 6th, 8th, 9th, 11th grades over the last two months. I am proficient in SAT/GRE prep and provide study and test taking skills.
32 Subjects: including prealgebra, chemistry, reading, geometry
...Additionally, I have tutored in grammar, writing (including book reports), and literature. I have taught math from simple counting to pre-algebra. I have tutored in social science, American
history, and world geography.
41 Subjects: including prealgebra, English, reading, elementary (k-6th)
|
{"url":"http://www.purplemath.com/grapevine_tx_prealgebra_tutors.php","timestamp":"2014-04-17T19:57:49Z","content_type":null,"content_length":"24157","record_id":"<urn:uuid:03675950-04a9-44ac-b5ed-e715defb9061>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Legal Theory Blog
Linda H. Edwards (William S. Boyd School of Law, UNLV) has posted Once Upon a Time in Law: Myth, Metaphor, and Authority on SSRN. Here is the abstract:
We have long accepted the role of narrative in fact statements and jury arguments, but in the inner sanctum of analyzing legal authority? Surely not. Yet cases, statutes, rules, and doctrines all
have stories of their own. When we talk about legal authority, using our best formal logic, we are actually swimming in a sea of narrative, oblivious to the water around us. As the old Buddhist
saying goes, 'we don’t know who discovered the ocean, but it probably wasn’t a fish.'
This article teases out several familiar archetypes hidden in discussions of cases and statutes. In the midst of seemingly routine law talk are stories of birth and death, battle and betrayal,
tricksters and champions. These stories are simultaneously true and false, world-shaping yet always incomplete. Their unnoticed influence over the law’s development can be powerful. But we so
seldom question familiar narratives, and these archetypes practically run in our veins. We should learn to recognize and interrogate these stories, attuned to their truths, alert to their
limitations, and ready when necessary to seek other more accurate and complete stories for the law.
|
{"url":"http://lsolum.typepad.com/legaltheory/2009/08/index.html","timestamp":"2014-04-24T14:07:58Z","content_type":null,"content_length":"84903","record_id":"<urn:uuid:2cafe7dd-721a-4470-9ae9-212f3d309960>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
|