content
stringlengths
86
994k
meta
stringlengths
288
619
How do you subtract a fraction problem give example? In arithmetic, long division is a standard division algorithm suitable for dividing simple or complex multidigit numbers that is simple enough to perform by hand. It breaks down a division problem into a series of easier steps. As in all division problems, one number, called the dividend, is divided by another, called the divisor, producing a result called the quotient. It enables computations involving arbitrarily large numbers to be performed by following a series of simple steps. The abbreviated form of long division is called short division, which is almost always used instead of long division when the divisor has only one digit. Inexpensive calculators and computers have become the most common way to solve division problems, eliminating a traditional mathematical exercise, and decreasing the educational opportunity to show how to do so by paper and pencil techniques. (Internally, those devices use one of a variety of division algorithms). In the United States, long division has been especially targeted for de-emphasis, or even elimination from the school curriculum, by reform mathematics, though traditionally introduced in the 4th or 5th grades. In English-speaking countries, long division does not use the slash (/) or obelus (÷) signs, instead displaying the dividend, divisor, and (once it is found) quotient in a tableau. The process is begun by dividing the left-most digit of the dividend by the divisor. The quotient (rounded down to an integer) becomes the first digit of the result, and the remainder is calculated (this step is notated as a subtraction). This remainder carries forward when the process is repeated on the following digit of the dividend (notated as 'bringing down' the next digit to the remainder). When all digits have been processed and no remainder is left, the process is complete. An example is shown below, representing the division of 500 by 4 (with a result of 125). In the above example, the first step is to find the shortest sequence of digits starting from the left end of the dividend, 500, that the divisor 4 goes into at least once; this shortest sequence in this example is simply the first digit, 5. The largest number that the divisor 4 can be multiplied by without exceeding 5 is 1, so the digit 1 is put above the 5 to start constructing the quotient. Next, the 1 is multiplied by the divisor 4, to obtain the largest whole number (4 in this case) that is a multiple of the divisor 4 without exceeding the 5; this product of 1 times 4 is 4, so 4 is placed underneath the 5. Next the 4 under the 5 is subtracted from the 5 to get the remainder, 1, which is placed under the 4 under the 5. This remainder 1 is necessarily smaller than the divisor 4. Next the first as-yet unused digit in the dividend, in this case the first digit 0 after the 5, is copied directly underneath itself and next to the remainder 1, to form the number 10. At this point the process is repeated enough times to reach a stopping point: The largest number by which the divisor 4 can be multiplied without exceeding 10 is 2, so 2 is written above the 0 that is next to the 5 – that is, directly above the last digit in the 10. Then the latest entry to the quotient, 2, is multiplied by the divisor 4 to get 8, which is the largest multiple of 4 that does not exceed 10; so 8 is written below 10, and the subtraction 10 minus 8 is performed to get the remainder 2, which is placed below the 8. This remainder 2 is necessarily smaller than the divisor 4. The next digit of the dividend (the last 0 in 500) is copied directly below itself and next to the remainder 2, to form 20. Then the largest number by which the divisor 4 can be multiplied without exceeding 20 is ascertained; this number is 5, so 5 is placed above the last dividend digit that was brought down (i.e., above the rightmost 0 in 500). Then this new quotient digit 5 is multiplied by the divisor 4 to get 20, which is written at the bottom below the existing 20. Then 20 is subtracted from 20, yielding 0, which is written below the 20. We know we are done now because two things are true: there are no more digits to bring down from the dividend, and the last subtraction result was 0. If the last remainder when we ran out of dividend digits had been something other than 0, there would have been two possible courses of action. (1) We could just stop there and say that the dividend divided by the divisor is the quotient written at the top with the remainder written at the bottom; equivalently we could write the answer as the quotient followed by a fraction that is the remainder divided by the divisor. Or, (2) we could extend the dividend by writing it as, say, 500.000... and continue the process (using a decimal point in the quotient directly above the decimal point in the dividend), in order to get a decimal answer, as in the following example. In this example, the decimal part of the result is calculated by continuing the process beyond the units digit, "bringing down" zeros as being the decimal part of the dividend. This example also illustrates that, at the beginning of the process, a step that produces a zero can be omitted. Since the first digit 1 is less than the divisor 4, the first step is instead performed on the first two digits 12. Similarly, if the divisor were 13, one would perform the first step on 127 rather than 12 or 1. mixed mode division must be used. Consider dividing 50 miles 600 yards into 37 pieces: Each of the four columns is worked in turn. Starting with the miles: 50/37 = 1 remainder 13. No further division is possible, so perform a long multiplication by 1,760 to convert miles to yards, the result is 22,880 yards. Carry this to the top of the yards column and add it to the 600 yards in the dividend giving 23,480. Long division of 23,480 / 37 now proceeds as normal yielding 634 with remainder 22. The remainder is multiplied by 3 to get feet and carried up to the feet column. Long division of the feet gives 1 remainder 29 which is then multiplied by twelve to get 348 inches. Long division continues with the final remainder of 15 inches being shown on the result line. The same method and layout is used for binary, octal and hexadecimal. An address range of 0xf412df divided into 0x12 parts is: Binary is of course trivial because each digit in the result can only be 1 or 0: When the quotient is not an integer and the division process is extended beyond the decimal point, one of two things can happen. (1) The process can terminate, which means that a remainder of 0 is reached; or (2) a remainder could be reached that is identical to a previous remainder that occurred after the decimal points were written. In the latter case, continuing the process would be pointless, because from that point onward the same sequence of digits would appear in the quotient over and over. So a bar is drawn over the repeating sequence to indicate that it repeats forever. China, Japan and India use the same notation as English-speakers. Elsewhere, the same general principles are used, but the figures are often arranged differently. In Latin America (except Mexico, Colombia, Venezuela and Brazil), the calculation is almost exactly the same, but is written down differently as shown below with the same two examples used above. Usually the quotient is written under a bar drawn under the divisor. A long vertical line is sometimes drawn to the right of the calculations. and In Mexico, the US notation is used, except that only the result of the subtraction is annotated and the calculation is done mentally, as shown below: In Brazil, Venezuela and Colombia, the European notation (see below) is used, except that the quotient is not separated by a vertical line, as shown below: Same procedure applies in Mexico, only the result of the subtraction is annotated and the calculation is done mentally. In Spain, Italy, France, Portugal, Romania, Turkey, Greece, Belgium, and Russia, the divisor is to the right of the dividend, and separated by a vertical bar. The division also occurs in the column, but the quotient (result) is written below the divider, and separated by the horizontal line. In France, a long vertical bar separates the dividend and subsequent subtractions from the quotient and divisor, as in the example below of 6359 divided by 17, which is 374 with a remainder of 1. Decimal numbers are not divided directly, the dividend and divisor are multiplied by a power of ten so that the division involves two whole numbers. Therefore, if one were dividing 12,7 by 0,4 (commas being used instead of decimal points), the dividend and divisor would first be changed to 127 and 4, and then the division would proceed as above. In Germany, the notation of a normal equation is used for dividend, divisor and quotient (cf. first section of Latin American countries above, where it's done virtually the same way): The same notation is adopted in Denmark, Norway, Macedonia, Poland, Croatia, Slovenia, Hungary, Czech Republic, Slovakia, , Vietnam and in Serbia. In the Netherlands, the following notation is used: Long division of integers can easily be extended to include non-integer dividends, as long as they are rational. This is because every rational number has a recurring decimal expansion. The procedure can also be extended to include divisors which have a finite or terminating decimal expansion (i.e. decimal fractions). In this case the procedure involves multiplying the divisor and dividend by the appropriate power of ten so that the new divisor is an integer – taking advantage of the fact that a ÷ b = (ca) ÷ (cb) – and then proceeding as above. A generalised version of this method called polynomial long division is also used for dividing polynomials (sometimes using a shorthand version called synthetic division). In arithmetic, subtraction is one of the four basic binary operations; it is the inverse of addition, meaning that if we start with any number and add any number and then subtract the same number we added, we return to the number we started with. Subtraction is denoted by a minus sign in infix notation, in contrast to the use of the plus sign for addition. Since subtraction is not a commutative operator, the two operands are named. The traditional names for the parts of the formula are minuend (c) − subtrahend (b) = difference (a). Subtraction is used to model four related processes: In mathematics, it is often useful to view or even define subtraction as a kind of addition, the addition of the additive inverse. We can view 7 − 3 = 4 as the sum of two terms: 7 and -3. This perspective allows us to apply to subtraction all of the familiar rules and nomenclature of addition. Subtraction is not associative or commutative—in fact, it is anticommutative and left-associative—but addition of signed numbers is both. Imagine a line segment of length b with the left end labeled a and the right end labeled c. Starting from a, it takes b steps to the right to reach c. This movement to the right is modeled mathematically by addition: From c, it takes b steps to the left to get back to a. This movement to the left is modeled by subtraction: Now, a line segment labeled with the numbers 1, 2, and 3. From position 3, it takes no steps to the left to stay at 3, so 3 − 0 = 3. It takes 2 steps to the left to get to position 1, so 3 − 2 = 1. This picture is inadequate to describe what would happen after going 3 steps to the left of position 3. To represent such an operation, the line must be extended. To subtract arbitrary natural numbers, one begins with a line containing every natural number (0, 1, 2, 3, 4, 5, 6, ...). From 3, it takes 3 steps to the left to get to 0, so 3 − 3 = 0. But 3 − 4 is still invalid since it again leaves the line. The natural numbers are not a useful context for subtraction. The solution is to consider the integer number line (..., −3, −2, −1, 0, 1, 2, 3, ...). From 3, it takes 4 steps to the left to get to −1: There are some cases where subtraction as a separate operation becomes problematic. For example, 3 − (−2) (i.e. subtract −2 from 3) is not immediately obvious from either a natural number view or a number line view, because it is not immediately clear what it means to move −2 steps to the left or to take away −2 apples. One solution is to view subtraction as addition of signed numbers. Extra minus signs simply denote additive inversion. Then we have 3 − (−2) = 3 + 2 = 5. This also helps to keep the ring of integers "simple" by avoiding the introduction of "new" operators such as subtraction. Ordinarily a ring only has two operations defined on it; in the case of the integers, these are addition and multiplication. A ring already has the concept of additive inverses, but it does not have any notion of a separate subtraction operation, so the use of signed addition as subtraction allows us to apply the ring axioms to subtraction without needing to prove anything. There are various algorithms for subtraction, and they differ in their suitability for various applications. A number of methods are adapted to hand calculation; for example, when making change, no actual subtraction is performed, but rather the change-maker counts forward. For machine calculation, the method of complements is preferred, whereby the subtraction is replaced by an addition in a modular arithmetic. Methods used to teach subtraction to elementary school varies from country to country, and within a country, different methods are in fashion at different times. In what is, in the U.S., called traditional mathematics, a specific process is taught to students at the end of the 1st year or during the 2nd year for use with multi-digit whole numbers, and is extended in either the fourth or fifth grade to include decimal representations of fractional numbers. Some American schools currently teach a method of subtraction using borrowing and a system of markings called crutches][. Although a method of borrowing had been known and published in textbooks prior, apparently the crutches are the invention of William A. Brownell who used them in a study in November 1937][. This system caught on rapidly, displacing the other methods of subtraction in use in America at that time. Some European schools employ a method of subtraction called the Austrian method, also known as the additions method. There is no borrowing in this method. There are also crutches (markings to aid memory), which vary by country][. Both these methods break up the subtraction as a process of one digit subtractions by place value. Starting with a least significant digit, a subtraction of subtrahend: from minuend where each s[i] and m[i] is a digit, proceeds by writing down m[1] − s[1], m[2] − s[2], and so forth, as long as s[i] does not exceed m[i]. Otherwise, m[i] is increased by 10 and some other digit is modified to correct for this increase. The American method corrects by attempting to decrease the minuend digit m[i+1] by one (or continuing the borrow leftwards until there is a non-zero digit from which to borrow). The European method corrects by increasing the subtrahend digit s[i+1] by one. Example: 704 − 512. The minuend is 704, the subtrahend is 512. The minuend digits are m[3] = 7, m[2] = 0 and m[1] = 4. The subtrahend digits are s[3] = 5, s[2] = 1 and s[1] = 2. Beginning at the one's place, 4 is not less than 2 so the difference 2 is written down in the result's one place. In the ten's place, 0 is less than 1, so the 0 is increased to 10, and the difference with 1, which is 9, is written down in the ten's place. The American method corrects for the increase of ten by reducing the digit in the minuend's hundreds place by one. That is, the 7 is struck through and replaced by a 6. The subtraction then proceeds in the hundreds place, where 6 is not less than 5, so the difference is written down in the result's hundred's place. We are now done, the result is 192. The Austrian method does not reduce the 7 to 6. Rather it increases the subtrahend hundred's digit by one. A small mark is made near or below this digit (depending on the school). Then the subtraction proceeds by asking what number when increased by 1, and 5 is added to it, makes 7. The answer is 1, and is written down in the result's hundred's place. There is an additional subtlety in that the student always employs a mental subtraction table in the American method. The Austrian method often encourages the student to mentally use the addition table in reverse. In the example above, rather than adding 1 to 5, getting 6, and subtracting that from 7, the student is asked to consider what number, when increased by 1, and 5 is added to it, makes 7. When subtracting two numbers with units, they must have the same unit. In most cases the difference will have the same unit as the original numbers. One exception is when subtracting two numbers with percentage as unit. In this case, the difference will have percentage points as Elementary arithmetic is the simplified portion of arithmetic which includes the operations of addition, subtraction, multiplication, and division. Elementary arithmetic starts with the natural numbers and the written symbols (digits) which represent them. The process for combining a pair of these numbers with the four basic operations traditionally relies on memorized results for small values of numbers, including the contents of a multiplication table to assist with multiplication and division. Elementary arithmetic also includes fractions and negative numbers, which can be represented on a number line. The abacus is an early mechanical device for performing elementary arithmetic, which is still used in many parts of Asia. Modern calculating tools which perform elementary arithmetic operations include cash registers, electronic calculators, and computers. Digits are the entire set of symbols used to represent numbers. In a particular numeral system, a single digit represents a different amount than any other digit, although the symbols in the same numeral system might vary between cultures. In modern usage, the Arabic numerals are the most common set of symbols, and the most frequently used form of these digits is the Western style. Each single digit matches the following amounts: 0, zero. Used in the absence of objects to be counted. For example, a different way of saying "there are no sticks here", is to say "the number of sticks here is 0". 1, one. Applied to a single item. For example, here is one stick: I 2, two. Applied to a pair of items. Here are two sticks: I I 3, three. Applied to three items. Here are three sticks: I I I 4, four. Applied to four items. Here are four sticks: I I I I 5, five. Applied to five items. Here are five sticks: I I I I I 6, six. Applied to six items. Here are six sticks: I I I I I I 7, seven. Applied to seven items. Here are seven sticks: I I I I I I I 8, eight. Applied to eight items. Here are eight sticks: I I I I I I I I 9, nine. Applied to nine items. Here are nine sticks: I I I I I I I I I Any numeral system defines the value of all numbers which contain more than one digit, most often by addition of the value for adjacent digits. The Hindu–Arabic numeral system includes positional notation to determine the value for any numeral. In this type of system, the increase in value for an additional digit includes one or more multiplications with the radix value and the result is added to the value of an adjacent digit. With Arabic numerals, the radix value of ten produces a value of twenty-one (equal to 2×10 + 1) for the numeral "21". An additional multiplication with the radix value occurs for each additional digit, so the numeral "201" represents a value of two-hundred-and-one (equal to 2×10×10 + 0×10 + 1). The elementary level of study typically includes understanding the value of individual whole numbers using Arabic numerals with a maximum of seven digits, and performing the four basic operations using Arabic numerals with a maximum of four digits each. When two numbers are added together, the result is called a sum. The two numbers being added together are called addends. Suppose you have two bags, one bag holding five apples and a second bag holding three apples. Grabbing a third, empty bag, move all the apples from the first and second bags into the third bag. The third bag now holds eight apples. This illustrates the combination of three apples and five apples is eight apples; or more generally: "three plus five is eight" or "three plus five equals eight" or "eight is the sum of three and five". Numbers are abstract, and the addition of a group of three things to a group of five things will yield a group of eight things. Addition is a regrouping: two sets of objects which were counted separately are put into a single group and counted together: the count of the new group is the "sum" of the separate counts of the two original groups. This operation of combining is only one of several possible meanings that the mathematical operation of addition can have. Other meanings for addition include: Symbolically, addition is represented by the "plus sign": +. So the statement "three plus five equals eight" can be written symbolically as . The order in which two numbers are added does not matter, so . This is the commutative property of addition. To add a pair of digits using the table, find the intersection of the row of the first digit with the column of the second digit: the row and the column intersect at a square containing the sum of the two digits. Some pairs of digits add up to two-digit numbers, with the tens-digit always being a 1. In the addition algorithm the tens-digit of the sum of a pair of digits is called the "carry digit". For simplicity, consider only numbers with three digits or less. To add a pair of numbers (written in Arabic numerals), write the second number under the first one, so that digits line up in columns: the rightmost column will contain the ones-digit of the second number under the ones-digit of the first number. This rightmost column is the ones-column. The column immediately to its left is the tens-column. The tens-column will have the tens-digit of the second number (if it has one) under the tens-digit of the first number (if it has one). The column immediately to the left of the tens-column is the hundreds-column. The hundreds-column will line up the hundreds-digit of the second number (if there is one) under the hundreds-digit of the first number (if there is one). After the second number has been written down under the first one so that digits line up in their correct columns, draw a line under the second (bottom) number. Start with the ones-column: the ones-column should contain a pair of digits: the ones-digit of the first number and, under it, the ones-digit of the second number. Find the sum of these two digits: write this sum under the line and in the ones-column. If the sum has two digits, then write down only the ones-digit of the sum. Write the "carry digit" above the top digit of the next column: in this case the next column is the tens-column, so write a 1 above the tens-digit of the first number. If both first and second number each have only one digit then their sum is given in the addition table, and the addition algorithm is unnecessary. Then comes the tens-column. The tens-column might contain two digits: the tens-digit of the first number and the tens-digit of the second number. If one of the numbers has a missing tens-digit then the tens-digit for this number can be considered to be a 0. Add the tens-digits of the two numbers. Then, if there is a carry digit, add it to this sum. If the sum was 18 then adding the carry digit to it will yield 19. If the sum of the tens-digits (plus carry digit, if there is one) is less than ten then write it in the tens-column under the line. If the sum has two digits then write its last digit in the tens-column under the line, and carry its first digit (which should be a 1) over to the next column: in this case the hundreds-column. If none of the two numbers has a hundreds-digit then if there is no carry digit then the addition algorithm has finished. If there is a carry digit (carried over from the tens-column) then write it in the hundreds-column under the line, and the algorithm is finished. When the algorithm finishes, the number under the line is the sum of the two numbers. If at least one of the numbers has a hundreds-digit then if one of the numbers has a missing hundreds-digit then write a 0 digit in its place. Add the two hundreds-digits, and to their sum add the carry digit if there is one. Then write the sum of the hundreds-column under the line, also in the hundreds column. If the sum has two digits then write down the last digit of the sum in the hundreds-column and write the carry digit to its left: on the thousands-column. Say one wants to find the sum of the numbers 653 and 274. Write the second number under the first one, with digits aligned in columns, like so: Then draw a line under the second number and put a plus sign. The addition starts with the ones-column. The ones-digit of the first number is 3 and of the second number is 4. The sum of three and four is seven, so write a 7 in the ones-column under the line: Next, the tens-column. The tens-digit of the first number is 5, and the tens-digit of the second number is 7, and five plus seven is twelve: 12, which has two digits, so write its last digit, 2, in the tens-column under the line, and write the carry digit on the hundreds-column above the first number: Next, the hundreds-column. The hundreds-digit of the first number is 6, while the hundreds-digit of the second number is 2. The sum of six and two is eight, but there is a carry digit, which added to eight is equal to nine. Write the 9 under the line in the hundreds-column: No digits (and no columns) have been left unadded, so the algorithm finishes, and The result of the addition of one to a number is the successor of that number. Examples: the successor of zero is one, the successor of one is two, the successor of two is three, the successor of ten is eleven. Every natural number has a successor. The predecessor of the successor of a number is the number itself. For example, five is the successor of four therefore four is the predecessor of five. Every natural number except zero has a predecessor. If a number is the successor of another number, then the first number is said to be larger than the other number. If a number is larger than another number, and if the other number is larger than a third number, then the first number is also larger than the third number. Example: five is larger than four, and four is larger than three, therefore five is larger than three. But six is larger than five, therefore six is also larger than three. But seven is larger than six, therefore seven is also larger than three ... therefore eight is larger than three ... therefore nine is larger than three, etc. If two non-zero natural numbers are added together, then their sum is larger than either one of them. Example: three plus five equals eight, therefore eight is larger than three () and eight is larger than five (). The symbol for "larger than" is >. If a number is larger than another one, then the other is smaller than the first one. Examples: three is smaller than eight () and five is smaller than eight (). The symbol for smaller than is <. A number cannot be at the same time larger and smaller than another number. Neither can a number be at the same time larger than and equal to another number. Given a pair of natural numbers, one and only one of the following cases must be true: To count a group of objects means to assign a natural number to each one of the objects, as if it were a label for that object, such that a natural number is never assigned to an object unless its predecessor was already assigned to another object, with the exception that zero is not assigned to any object: the smallest natural number to be assigned is one, and the largest natural number assigned depends on the size of the group. It is called the count and it is equal to the number of objects in that group. The process of counting a group is the following: When the counting is finished, the last value of the count will be the final count. This count is equal to the number of objects in the group. Often, when counting objects, one does not keep track of what numerical label corresponds to which object: one only keeps track of the subgroup of objects which have already been labeled, so as to be able to identify unlabeled objects necessary for Step 2. However, if one is counting persons, then one can ask the persons who are being counted to each keep track of the number which the person's self has been assigned. After the count has finished it is possible to ask the group of persons to file up in a line, in order of increasing numerical label. What the persons would do during the process of lining up would be something like this: each pair of persons who are unsure of their positions in the line ask each other what their numbers are: the person whose number is smaller should stand on the left side and the one with the larger number on the right side of the other person. Thus, pairs of persons compare their numbers and their positions, and commute their positions as necessary, and through repetition of such conditional commutations they become ordered. Subtraction is the mathematical operation which describes a reduced quantity. The result of this operation is the difference between two numbers. As with addition, subtraction can have a number of interpretations, such as: As with addition, there are other possible interpretations, such as motion. Symbolically, the minus sign ("−") represents the subtraction operation. So the statement "five minus three equals two" is also written as . In elementary arithmetic, subtraction uses smaller positive numbers for all values to produce simpler solutions. Unlike addition, subtraction is not commutative, so the order of numbers in the operation will change the result. Therefore, each number is provided a different distinguishing name. The first number (5 in the previous example) is formally defined as the minuend and the second number (3 in the previous example) as the subtrahend. The value of the minuend is larger than the value of the subtrahend so that the result is a positive number, but a smaller value of the minuend will result in negative numbers. There are several methods to accomplish subtraction. The method which is in the United States of America referred to as traditional mathematics taught elementary school students to subtract using methods suitable for hand calculation. The particular method used varies from country from country, and within a country, different methods are in fashion at different times. Reform mathematics is distinguished generally by the lack of preference for any specific technique, replaced by guiding 2nd-grade students to invent their own methods of computation, such as using properties of negative numbers in the case of TERC. American schools currently teach a method of subtraction using borrowing and a system of markings called crutches. Although a method of borrowing had been known and published in textbooks prior, apparently the crutches are the invention of William A. Browell, who used them in a study in November 1937 [1]. This system caught on rapidly, displacing the other methods of subtraction in use in America at that time. Students in some European countries are taught, and some older Americans employ, a method of subtraction called the Austrian method, also known as the additions method. There is no borrowing in this method. There are also crutches (markings to aid the memory) which [probably] vary according to country. In the method of borrowing, a subtraction such as will accomplish the ones-place subtraction of 9 from 6 by borrowing a 10 from 80 and adding it to the 6. The problem is thus transformed into effectively. This is indicated by striking through the 8, writing a small 7 above it, and writing a small 1 above the 6. These markings are called crutches. The 9 is then subtracted from 16, leaving 7, and the 30 from the 70, leaving 40, or 47 as the result. In the additions method, a 10 is borrowed to make the 6 into 16, in preparation for the subtraction of 9, just as in the borrowing method. However, the 10 is not taken by reducing the minuend, rather one augments the subtrahend. Effectively, the problem is transformed into . Typically a crutch of a small one is marked just below the subtrahend digit as a reminder. Then the operations proceed: 9 from 16 is 7; and 40 (that is, ) from 80 is 40, or 47 as the result. The additions method seem to be taught in two variations, which differ only in psychology. Continuing the example of , the first variation attempts to subtract 9 from 6, and then 9 from 16, borrowing a 10 by marking near the digit of the subtrahend in the next column. The second variation attempts to find a digit which, when added to 9, gives 6, and recognizing that is not possible, gives 16, and carrying the 10 of the 16 as a one marking near the same digit as in the first method. The markings are the same; it is just a matter of preference as to how one explains its appearance. As a final caution, the borrowing method gets a bit complicated in cases such as , where a borrow cannot be made immediately, and must be obtained by reaching across several columns. In this case, the minuend is effectively rewritten as , by taking a 100 from the hundreds, making ten 10s from it, and immediately borrowing that down to nine 10s in the tens column and finally placing a 10 in the ones column. When two numbers are multiplied together, the result is called a product. The two numbers being multiplied together are called factors. Suppose there are five red bags, each one containing three apples. Now grabbing an empty green bag, move all the apples from all five red bags into the green bag. Now the green bag will have fifteen apples. Thus the product of five and three is fifteen. This can also be stated as "five times three is fifteen" or "five times three equals fifteen" or "fifteen is the product of five and three". Multiplication can be seen to be a form of repeated addition: the first factor indicates how many times the second factor should be added onto itself; the final sum being the product. Symbolically, multiplication is represented by the multiplication sign: ×. So the statement "five times three equals fifteen" can be written symbolically as In some countries, and in more advanced arithmetic, other multiplication signs are used, e.g. . In some situations, especially in algebra, where numbers can be symbolized with letters, the multiplication symbol may be omitted; e.g. xy means . The order in which two numbers are multiplied does not matter, so that, for example, three times four equals four times three. This is the commutative property of multiplication. To multiply a pair of digits using the table, find the intersection of the row of the first digit with the column of the second digit: the row and the column intersect at a square containing the product of the two digits. Most pairs of digits produce two-digit numbers. In the multiplication algorithm the tens-digit of the product of a pair of digits is called the "carry digit". Consider a multiplication where one of the factors has only one digit, whereas the other factor has an arbitrary quantity of digits. Write down the multi-digit factor, then write the single-digit factor under the last digit of the multi-digit factor. Draw a horizontal line under the single-digit factor. Henceforth, the single-digit factor will be called the "multiplier" and the multi-digit factor will be called the "multiplicand". Suppose for simplicity that the multiplicand has three digits. The first digit is the hundreds-digit, the middle digit is the tens-digit, and the last, rightmost, digit is the ones-digit. The multiplier only has a ones-digit. The ones-digits of the multiplicand and multiplier form a column: the ones-column. Start with the ones-column: the ones-column should contain a pair of digits: the ones-digit of the multiplicand and, under it, the ones-digit of the multiplier. Find the product of these two digits: write this product under the line and in the ones-column. If the product has two digits, then write down only the ones-digit of the product. Write the "carry digit" as a superscript of the yet-unwritten digit in the next column and under the line: in this case the next column is the tens-column, so write the carry digit as the superscript of the yet-unwritten tens-digit of the product (under the line). If both first and second number each have only one digit then their product is given in the multiplication table, and the multiplication algorithm is unnecessary. Then comes the tens-column. The tens-column so far contains only one digit: the tens-digit of the multiplicand (though it might contain a carry digit under the line). Find the product of the multiplier and the tens-digits of the multiplicand. Then, if there is a carry digit (superscripted, under the line and in the tens-column), add it to this product. If the resulting sum is less than ten then write it in the tens-column under the line. If the sum has two digits then write its last digit in the tens-column under the line, and carry its first digit over to the next column: in this case the hundreds column. If the multiplicand does not have a hundreds-digit then if there is no carry digit then the multiplication algorithm has finished. If there is a carry digit (carried over from the tens-column) then write it in the hundreds-column under the line, and the algorithm is finished. When the algorithm finishes, the number under the line is the product of the two numbers. If the multiplicand has a hundreds-digit, find the product of the multiplier and the hundreds-digit of the multiplicand, and to this product add the carry digit if there is one. Then write the resulting sum of the hundreds-column under the line, also in the hundreds column. If the sum has two digits then write down the last digit of the sum in the hundreds-column and write the carry digit to its left: on the thousands-column. Say one wants to find the product of the numbers 3 and 729. Write the single-digit multiplier under the multi-digit multiplicand, with the multiplier under the ones-digit of the multiplicand, like so: Then draw a line under the multiplier and put a multiplication symbol. Multiplication starts with the ones-column. The ones-digit of the multiplicand is 9 and the multiplier is 3. The product of 3 and 9 is 27, so write a 7 in the ones-column under the line, and write the carry-digit 2 as a superscript of the yet-unwritten tens-digit of the product under the line: Next, the tens-column. The tens-digit of the multiplicand is 2, the multiplier is 3, and three times two is six. Add the carry-digit, 2, to the product, 6, to obtain 8. Eight has only one digit: no carry-digit, so write in the tens-column under the line. You can erase the two now. Next, the hundreds-column. The hundreds-digit of the multiplicand is 7, while the multiplier is 3. The product of 3 and 7 is 21, and there is no previous carry-digit (carried over from the tens-column). The product 21 has two digits: write its last digit in the hundreds-column under the line, then carry its first digit over to the thousands-column. Since the multiplicand has no thousands-digit, then write this carry-digit in the thousands-column under the line (not superscripted): No digits of the multiplicand have been left unmultiplied, so the algorithm finishes, and Given a pair of factors, each one having two or more digits, write both factors down, one under the other one, so that digits line up in columns. For simplicity consider a pair of three-digits numbers. Write the last digit of the second number under the last digit of the first number, forming the ones-column. Immediately to the left of the ones-column will be the tens-column: the top of this column will have the second digit of the first number, and below it will be the second digit of the second number. Immediately to the left of the tens-column will be the hundreds-column: the top of this column will have the first digit of the first number and below it will be the first digit of the second number. After having written down both factors, draw a line under the second factor. The multiplication will consist of two parts. The first part will consist of several multiplications involving one-digit multipliers. The operation of each one of such multiplications was already described in the previous multiplication algorithm, so this algorithm will not describe each one individually, but will only describe how the several multiplications with one-digit multipliers shall be coordinated. The second part will add up all the subproducts of the first part, and the resulting sum will be the product. First part. Let the first factor be called the multiplicand. Let each digit of the second factor be called a multiplier. Let the ones-digit of the second factor be called the "ones-multiplier". Let the tens-digit of the second factor be called the "tens-multiplier". Let the hundreds-digit of the second factor be called the "hundreds-multiplier". Start with the ones-column. Find the product of the ones-multiplier and the multiplicand and write it down in a row under the line, aligning the digits of the product in the previously-defined columns. If the product has four digits, then the first digit will be the beginning of the thousands-column. Let this product be called the "ones-row". Then the tens-column. Find the product of the tens-multiplier and the multiplicand and write it down in a row—call it the "tens-row"—under the ones-row, but shifted one column to the left. That is, the ones-digit of the tens-row will be in the tens-column of the ones-row; the tens-digit of the tens-row will be under the hundreds-digit of the ones-row; the hundreds-digit of the tens-row will be under the thousands-digit of the ones-row. If the tens-row has four digits, then the first digit will be the beginning of the ten-thousands-column. Next, the hundreds-column. Find the product of the hundreds-multiplier and the multiplicand and write it down in a row—call it the "hundreds-row"—under the tens-row, but shifted one more column to the left. That is, the ones-digit of the hundreds-row will be in the hundreds-column; the tens-digit of the hundreds-row will be in the thousands-column; the hundreds-digit of the hundreds-row will be in the ten-thousands-column. If the hundreds-row has four digits, then the first digit will be the beginning of the hundred-thousands-column. After having down the ones-row, tens-row, and hundreds-row, draw a horizontal line under the hundreds-row. The multiplications are over. Second part. Now the multiplication has a pair of lines. The first one under the pair of factors, and the second one under the three rows of subproducts. Under the second line there will be six columns, which from right to left are the following: ones-column, tens-column, hundreds-column, thousands-column, ten-thousands-column, and hundred-thousands-column. Between the first and second lines, the ones-column will contain only one digit, located in the ones-row: it is the ones-digit of the ones-row. Copy this digit by rewriting it in the ones-column under the second line. Between the first and second lines, the tens-column will contain a pair of digits located in the ones-row and the tens-row: the tens-digit of the ones-row and the ones-digit of the tens-row. Add these digits up and if the sum has just one digit then write this digit in the tens-column under the second line. If the sum has two digits then the first digit is a carry-digit: write the last digit down in the tens-column under the second line and carry the first digit over to the hundreds-column, writing it as a superscript to the yet-unwritten hundreds-digit under the second line. Between the first and second lines, the hundreds-column will contain three digits: the hundreds-digit of the ones-row, the tens-digit of the tens-row, and the ones-digit of the hundreds-row. Find the sum of these three digits, then if there is a carry-digit from the tens-column (written in superscript under the second line in the hundreds-column) then add this carry-digit as well. If the resulting sum has one digit then write it down under the second line in the hundreds-column; if it has two digits then write the last digit down under the line in the hundreds-column, and carry the first digit over to the thousands-column, writing it as a superscript to the yet-unwritten thousands-digit under the line. Between the first and second lines, the thousands-column will contain either two or three digits: the hundreds-digit of the tens-row, the tens-digit of the hundreds-row, and (possibly) the thousands-digit of the ones-row. Find the sum of these digits, then if there is a carry-digit from the hundreds-column (written in superscript under the second line in the thousands-column) then add this carry-digit as well. If the resulting sum has one digit then write it down under the second line in the thousands-column; if it has two digits then write the last digit down under the line in the thousands-column, and carry the first digit over to the ten-thousands-column, writing it as a superscript to the yet-unwritten ten-thousands-digit under the line. Between the first and second lines, the ten-thousands-column will contain either one or two digits: the hundreds-digit of the hundreds-column and (possibly) the thousands-digit of the tens-column. Find the sum of these digits (if the one in the tens-row is missing think of it as a 0), and if there is a carry-digit from the thousands-column (written in superscript under the second line in the ten-thousands-column) then add this carry-digit as well. If the resulting sum has one digit then write it down under the second line in the ten-thousands-column; if it has two digits then write the last digit down under the line in the ten-thousands-column, and carry the first digit over to the hundred-thousands-column, writing it as a superscript to the yet-unwritten hundred-thousands digit under the line. However, if the hundreds-row has no thousands-digit then do not write this carry-digit as a superscript, but in normal size, in the position of the hundred-thousands-digit under the second line, and the multiplication algorithm is over. If the hundreds-row does have a thousands-digit, then add to it the carry-digit from the previous row (if there is no carry-digit then think of it as a 0) and write the single-digit sum in the hundred-thousands-column under the second line. The number under the second line is the sought-after product of the pair of factors above the first line. Let our objective be to find the product of 789 and 345. Write the 345 under the 789 in three columns, and draw a horizontal line under them: First part. Start with the ones-column. The multiplicand is 789 and the ones-multiplier is 5. Perform the multiplication in a row under the line: Then the tens-column. The multiplicand is 789 and the tens-multiplier is 4. Perform the multiplication in the tens-row, under the previous subproduct in the ones-row, but shifted one column to the left: Next, the hundreds-column. The multiplicand is once again 789, and the hundreds-multiplier is 3. Perform the multiplication in the hundreds-row, under the previous subproduct in the tens-row, but shifted one (more) column to the left. Then draw a horizontal line under the hundreds-row: Second part. Now add the subproducts between the first and second lines, but ignoring any superscripted carry-digits located between the first and second lines. The answer is In mathematics, especially in elementary arithmetic, division is an arithmetic operation which is the inverse of multiplication. Specifically, if c times b equals a, written: where b is not zero, then a divided by b equals c, written: For instance, since In the above expression, a is called the dividend, b the divisor and c the quotient. Division by zero (i.e. where the divisor is zero) is not defined. Division is most often shown by placing the dividend over the divisor with a horizontal line, also called a vinculum, between them. For example, a divided by b is written This can be read out loud as "a divided by b" or "a over b". A way to express division all on one line is to write the dividend, then a slash, then the divisor, like this: This is the usual way to specify division in most computer programming languages since it can easily be typed as a simple sequence of characters. A handwritten or typographical variation, which is halfway between these two forms, uses a solidus (fraction slash) but elevates the dividend, and lowers the divisor: Any of these forms can be used to display a fraction. A common fraction is a division expression where both dividend and divisor are integers (although typically called the numerator and denominator), and there is no implication that the division needs to be evaluated further. A more basic way to show division is to use the obelus (or division sign) in this manner: This form is infrequent except in basic arithmetic. The obelus is also used alone to represent the division operation itself, for instance, as a label on a key of a calculator. In some non-English-speaking cultures, "a divided by b" is written a : b. However, in English usage the colon is restricted to expressing the related concept of ratios (then "a is to b"). With a knowledge of multiplication tables, two integers can be divided on paper using the method of long division. If the dividend has a fractional part (expressed as a decimal fraction), one can continue the algorithm past the ones place as far as desired. If the divisor has a decimal fractional part, one can restate the problem by moving the decimal to the right in both numbers until the divisor has no fraction. To divide by a fraction, multiply by the reciprocal (reversing the position of the top and bottom parts) of that fraction. Local standards usually define the educational methods and content included in the elementary level of instruction. In the United States and Canada, controversial subjects include the amount of calculator usage compared to manual computation and the broader debate between traditional mathematics and reform mathematics. In the United States, the 1989 NCTM standards led to curricula which de-emphasized or omitted much of what was considered to be elementary arithmetic in elementary school, and replaced it with emphasis on topics traditionally studied in college such as algebra, statistics and problem solving, and non-standard computation methods unfamiliar to most adults. An Egyptian fraction is the sum of distinct unit fractions, such as $\tfrac{1}{2}+\tfrac{1}{3}+\tfrac{1}{16}$. That is, each fraction in the expression has a numerator equal to 1 and a denominator that is a positive integer, and all the denominators differ from each other. The value of an expression of this type is a positive rational number a/b; for instance the Egyptian fraction above sums to 43/48. Every positive rational number can be represented by an Egyptian fraction. Sums of this type, and similar sums also including 2/3 and 3/4 as summands, were used as a serious notation for rational numbers by the ancient Egyptians, and continued to be used by other civilizations into medieval times. In modern mathematical notation, Egyptian fractions have been superseded by vulgar fractions and decimal notation. However, Egyptian fractions continue to be an object of study in modern number theory and recreational mathematics, as well as in modern historical studies of ancient mathematics. Egyptian fraction notation was developed in the Middle Kingdom of Egypt, altering the Old Kingdom's Eye of Horus numeration system. Five early texts in which Egyptian fractions appear were the Egyptian Mathematical Leather Roll, the Moscow Mathematical Papyrus, the Reisner Papyrus, the Kahun Papyrus and the Akhmim Wooden Tablet. A later text, the Rhind Mathematical Papyrus, introduced improved ways of writing Egyptian fractions. The Rhind papyrus was written by Ahmes and dates from the Second Intermediate Period; it includes a ntable of Egyptian fraction expansions for rational numbers 2/, as well as 84 word problems. Solutions to each problem were written out in scribal shorthand, with the final answers of all 84 problems being expressed in Egyptian fraction notation. 2/n tables similar to the one on the Rhind papyrus also appear on some of the other texts. However, as the Kahun Papyrus shows, vulgar fractions were also used by scribes within their calculations. To write the unit fractions used in their Egyptian fraction notation, in hieroglyph script, the Egyptians placed the hieroglyph (er, "[one] among" or possibly re, mouth) above a number to represent the reciprocal of that number. Similarly in hieratic script they drew a line over the letter representing the number. For example: The Egyptians had special symbols for 1/2, 2/3, and 3/4 that were used to reduce the size of numbers greater than 1/2 when such numbers were converted to an Egyptian fraction series. The remaining number after subtracting one of these special fractions was written using as a sum of distinct unit fractions according to the usual Egyptian fraction notation. The Egyptians also used an alternative notation modified from the Old Kingdom and based on the parts of the Eye of Horus to denote a special set of fractions of the form 1/2k (for k = 1, 2, ..., 6) and sums of these numbers, which are necessarily dyadic rational numbers. These "Horus-Eye fractions" were used in the Middle Kingdom in conjunction with the later notation for Egyptian fractions to subdivide a hekat, the primary ancient Egyptian volume measure for grain, bread, and other small quantities of volume, as described in the Akhmim Wooden Tablet. If any remainder was left after expressing a quantity in Eye of Horus fractions of a hekat, the remainder was written using the usual Egyptian fraction notation as multiples of a ro, a unit equal to 1/320 of a hekat. Modern historians of mathematics have studied the Rhind papyrus and other ancient sources in an attempt to discover the methods the Egyptians used in calculating with Egyptian fractions. In particular, study in this area has concentrated on understanding the tables of expansions for numbers of the form 2/ n in the Rhind papyrus. Although these expansions can generally be described as algebraic identities, the methods used by the Egyptians may not correspond directly to these identities. Additionally, the expansions in the table do not match any single identity; rather, different identities match the expansions for prime and for composite denominators, and more than one identity fits the numbers of each type: Egyptian fraction notation continued to be used in Greek times and into the Middle Ages (Struik 1967), despite complaints as early as Ptolemy's Almagest about the clumsiness of the notation compared to alternatives such as the Babylonian base-60 notation. An important text of medieval mathematics, the Liber Abaci (1202) of Leonardo of Pisa (more commonly known as Fibonacci), provides some insight into the uses of Egyptian fractions in the Middle Ages, and introduces topics that continue to be important in modern mathematical study of these series. The primary subject of the Liber Abaci is calculations involving decimal and vulgar fraction notation, which eventually replaced Egyptian fractions. Fibonacci himself used a complex notation for fractions involving a combination of a mixed radix notation with sums of fractions. Many of the calculations throughout Fibonacci's book involve numbers represented as Egyptian fractions, and one section of this book (Sigler 2002, chapter II.7) provides a list of methods for conversion of vulgar fractions to Egyptian fractions. If the number is not already a unit fraction, the first method in this list is to attempt to split the numerator into a sum of divisors of the denominator; this is possible whenever the denominator is a practical number, and Liber Abaci includes tables of expansions of this type for the practical numbers 6, 8, 12, 20, 24, 60, and 100. The next several methods involve algebraic identities such as $\tfrac{a}{ab-1}=\tfrac{1}{b}+\tfrac{1}{b(ab-1)}.$ For instance, Fibonacci represents the fraction $\tfrac{8}{11}$ by splitting the numerator into a sum of two numbers, each of which divides one plus the denominator: $\tfrac{8}{11}=\tfrac{6}{11}+\tfrac{2}{11}.$ Fibonacci applies the algebraic identity above to each these two parts, producing the expansion $\tfrac{8}{11}=\tfrac{1}{2}+\tfrac{1}{22}+\tfrac{1}{6}+\tfrac{1}{66}.$ Fibonacci describes similar methods for denominators that are two or three less than a number with many factors. In the rare case that these other methods all fail, Fibonacci suggests a greedy algorithm for computing Egyptian fractions, in which one repeatedly chooses the unit fraction with the smallest denominator that is no larger than the remaining fraction to be expanded: that is, in more modern notation, we replace a fraction x/y by the expansion where $\lceil \ldots \rceil$ represents the ceiling function. Fibonacci suggests switching to another method after the first such expansion, but he also gives examples in which this greedy expansion was iterated until a complete Egyptian fraction expansion was constructed: $\tfrac{4}{13}=\tfrac{1}{4}+\tfrac{1}{18}+\tfrac{1}{468}$ and $\tfrac{17}{29}=\tfrac{1}{2}+\tfrac{1}{12}+\ tfrac{1}{348}.$ As later mathematicians showed, each greedy expansion reduces the numerator of the remaining fraction to be expanded, so this method always terminates with a finite expansion. However, compared to ancient Egyptian expansions or to more modern methods, this method may produce expansions that are quite long, with large denominators, and Fibonacci himself noted the awkwardness of the expansions produced by this method. For instance, the greedy method expands while other methods lead to the much better expansion Sylvester's sequence 2, 3, 7, 43, 1807, ... can be viewed as generated by an infinite greedy expansion of this type for the number one, where at each step we choose the denominator $\lfloor y/x\rfloor+1$ instead of $\lceil y/x\rceil$, and sometimes Fibonacci's greedy algorithm is attributed to Sylvester. After his description of the greedy algorithm, Fibonacci suggests yet another method, expanding a fraction $a/b$ by searching for a number c having many divisors, with $b/2 < c < b$, replacing $a/b$ by $ac/bc$, and expanding $ac$ as a sum of divisors of $bc$, similar to the method proposed by Hultsch and Bruins to explain some of the expansions in the Rhind papyrus. Modern number theorists have studied many different problems related to Egyptian fractions, including problems of bounding the length or maximum denominator in Egyptian fraction representations, finding expansions of certain special forms or in which the denominators are all of some special type, the termination of various methods for Egyptian fraction expansion, and showing that expansions exist for any sufficiently dense set of sufficiently smooth numbers. Some notable problems remain unsolved with regard to Egyptian fractions, despite considerable effort by mathematicians. Guy (2004) describes these problems in more detail and lists numerous additional open problems. In mathematics, especially in elementary arithmetic, division (÷) is an arithmetic operation. Specifically, if b times c equals a, written: where b is not zero, then a divided by b equals c, written: For instance, since In the expression a ÷ b = c, a is called the dividend or numerator, b the divisor or denominator and the result c is called the quotient. Conceptually, division describes two distinct but related settings. Partitioning involves taking a set of size a and forming b groups that are equal in size. The size of each group formed, c, is the quotient of a and b. Quotative division involves taking a set of size a and forming groups of size c. The number of groups of this size that can be formed, b, is the quotient of a and c. Teaching division usually leads to the concept of fractions being introduced to students. Unlike addition, subtraction, and multiplication, the set of all integers is not closed under division. Dividing two integers may result in a remainder. To complete the division of the remainder, the number system is extended to include fractions or rational numbers as they are more generally called. Division is often shown in algebra and science by placing the dividend over the divisor with a horizontal line, also called a vinculum or fraction bar, between them. For example, a divided by b is written This can be read out loud as "a divided by b", "a by b" or "a over b". A way to express division all on one line is to write the dividend (or numerator), then a slash, then the divisor (or denominator), like this: This is the usual way to specify division in most computer programming languages since it can easily be typed as a simple sequence of ASCII characters. A typographical variation halfway between these two forms uses a solidus (fraction slash) but elevates the dividend, and lowers the divisor: Any of these forms can be used to display a fraction. A fraction is a division expression where both dividend and divisor are integers (although typically called the numerator and denominator), and there is no implication that the division must be evaluated further. A second way to show division is to use the obelus (or division sign), common in arithmetic, in this manner: This form is infrequent except in elementary arithmetic. ISO 80000-2-9.6 states it should not be used. The obelus is also used alone to represent the division operation itself, as for instance as a label on a key of a calculator. In some non-English-speaking cultures, "a divided by b" is written a : b. This notation was introduced in 1631 by William Oughtred in his Clavis Mathematicae and later popularized by Gottfried Wilhelm Leibniz. However, in English usage the colon is restricted to expressing the related concept of ratios (then "a is to b"). In elementary mathematics the notation $b)~a$ or $b)\overline{~a~}$ is used to denote a divided by b. This notation was first introduced by Michael Stifel in Arithmetica integra, published in 1544. Division is often introduced through the notion of "sharing out" a set of objects, for example a pile of sweets, into a number of equal portions. Distributing the objects several at a time in each round of sharing to each portion leads to the idea of "chunking", i.e., division by repeated subtraction. More systematic and more efficient (but also more formalised and more rule-based, and more removed from an overall holistic picture of what division is achieving), a person who knows the multiplication tables can divide two integers using pencil and paper using the method of short division, if the divisor is simple. Long division is used for larger integer divisors. If the dividend has a fractional part (expressed as a decimal fraction), one can continue the algorithm past the ones place as far as desired. If the divisor has a fractional part, we can restate the problem by moving the decimal to the right in both numbers until the divisor has no fraction. A person can calculate division with an abacus by repeatedly placing the dividend on the abacus, and then subtracting the divisor the offset of each digit in the result, counting the number of divisions possible at each offset. A person can use logarithm tables to divide two numbers, by subtracting the two numbers' logarithms, then looking up the antilogarithm of the result. A person can calculate division with a slide rule by aligning the divisor on the C scale with the dividend on the D scale. The quotient can be found on the D scale where it is aligned with the left index on the C scale. The user is responsible, however, for mentally keeping track of the decimal point. Modern computers compute division by methods that are faster than long division: see Division algorithm. In modular arithmetic, some numbers have a multiplicative inverse with respect to the modulus. We can calculate division by multiplication in such a case. This approach is useful in computers that do not have a fast division instruction. The division algorithm is a mathematical theorem that precisely expresses the outcome of the usual process of division of integers. In particular, the theorem asserts that integers called the quotient q and remainder r always exist and that they are uniquely determined by the dividend a and divisor d, with d ≠ 0. Formally, the theorem is stated as follows: There exist unique integers q and r such that a = qd + r and 0 ≤ r < | d |, where | d | denotes the absolute value of d. Division of integers is not closed. Apart from division by zero being undefined, the quotient is not an integer unless the dividend is an integer multiple of the divisor. For example 26 cannot be divided by 11 to give an integer. Such a case uses one of five approaches: Dividing integers in a computer program requires special care. Some programming languages, such as C, treat integer division as in case 5 above, so the answer is an integer. Other languages, such as MATLAB and every computer algebra system return a rational number as the answer, as in case 3 above. These languages provide also functions to get the results of the other cases, either directly of from the result of case 3. Names and symbols used for integer division include div, /, \, and %. Definitions vary regarding integer division when the dividend or the divisor is negative: rounding may be toward zero (so called T-division) or toward −∞ (F-division); rarer styles can occur – see Modulo operation for the details. Divisibility rules can sometimes be used to quickly determine whether one integer divides exactly into another. The result of dividing two rational numbers is another rational number when the divisor is not 0. We may define division of two rational numbers p/q and r/s by All four quantities are integers, and only p may be 0. This definition ensures that division is the inverse operation of multiplication. Division of two real numbers results in another real number when the divisor is not 0. It is defined such a/b = c if and only if a = cb and b ≠ 0. Division of any number by zero (where the divisor is zero) is undefined. This is because zero multiplied by any finite number always results in a product of zero. Entry of such an expression into most calculators produces an error message. Dividing two complex numbers results in another complex number when the divisor is not 0, defined thus: All four quantities are real numbers. r and s may not both be 0. Division for complex numbers expressed in polar form is simpler than the definition above: Again all four quantities are real numbers. r may not be 0. One can define the division operation for polynomials in one variable over a field. Then, as in the case of integers, one has a remainder. See Euclidean division of polynomials, and, for hand-written computation, polynomial long division or synthetic division. One can define a division operation for matrices. The usual way to do this is to define , where denotes the inverse of B, but it is far more common to write out explicitly to avoid confusion. Because matrix multiplication is not commutative, one can also define a left division or so-called backslash-division as . For this to be well defined, need not exist, however does need to exist. To avoid confusion, division as defined by is sometimes called right division or slash-division in this context. Note that with left and right division defined this way, is in general not the same as and nor is the same as , but and . To avoid problems when and/or do not exist, division can also be defined as multiplication with the pseudoinverse, i.e., and , where and denote the pseudoinverse of A and B. In abstract algebras such as matrix algebras and quaternion algebras, fractions such as ${a \over b}$ are typically defined as $a \cdot {1 \over b}$ or $a \cdot b^{-1}$ where $b$ is presumed an invertible element (i.e., there exists a multiplicative inverse $b^{-1}$ such that $bb^{-1} = b^{-1}b = 1$ where $1$ is the multiplicative identity). In an integral domain where such elements may not exist, division can still be performed on equations of the form $ab = ac$ or $ba = ca$ by left or right cancellation, respectively. More generally "division" in the sense of "cancellation" can be done in any ring with the aforementioned cancellation properties. If such a ring is finite, then by an application of the pigeonhole principle, every nonzero element of the ring is invertible, so division by any nonzero element is possible in such a ring. To learn about when algebras (in the technical sense) have a division operation, refer to the page on division algebras. In particular Bott periodicity can be used to show that any real normed division algebra must be isomorphic to either the real numbers R, the complex numbers C, the quaternions H, or the octonions O. The derivative of the quotient of two functions is given by the quotient rule: There is no general method to integrate the quotient of two functions. A unit fraction is a rational number written as a fraction where the numerator is one and the denominator is a positive integer. A unit fraction is therefore the reciprocal of a positive integer, 1/n . Examples are 1/1, 1/2, 1/3, 1/4 etc. Multiplying any two unit fractions results in a product that is another unit fraction: However, adding, subtracting, or dividing two unit fractions produces a result that is generally not a unit fraction: Unit fractions play an important role in modular arithmetic, as they may be used to reduce modular division to the calculation of greatest common divisors. Specifically, suppose that we wish to perform divisions by a value x, modulo y. In order for division by x to be well defined modulo y, x and y must be relatively prime. Then, by using the extended Euclidean algorithm for greatest common divisors we may find a and b such that from which it follows that or equivalently Thus, to divide by x (modulo y) we need merely instead multiply by a . Any positive rational number can be written as the sum of unit fractions, in multiple ways. For example, The ancient Egyptians used sums of distinct unit fractions in their notation for more general rational numbers, and so such sums are often called Egyptian fractions. There is still interest today in analyzing the methods used by the ancients to choose among the possible representations for a fractional number, and to calculate with such representations. The topic of Egyptian fractions has also seen interest in modern number theory; for instance, the Erdős–Graham conjecture and the Erdős–Straus conjecture concern sums of unit fractions, as does the definition of Ore's harmonic numbers. In geometric group theory, triangle groups are classified into Euclidean, spherical, and hyperbolic cases according to whether an associated sum of unit fractions is equal to one, greater than one, or less than one respectively. Many well-known infinite series have terms that are unit fractions. These include: The Hilbert matrix is the matrix with elements It has the unusual property that all elements in its inverse matrix are integers. Similarly, Richardson (2001) defined a matrix with elements where F[i] denotes the ith Fibonacci number. He calls this matrix the Filbert matrix and it has the same property of having an integer inverse. Two fractions are called adjacent if their difference is a unit fraction. In a uniform distribution on a discrete space, all probabilities are equal unit fractions. Due to the principle of indifference, probabilities of this form arise frequently in statistical calculations. Additionally, Zipf's law states that, for many observed phenomena involving the selection of items from an ordered sequence, the probability that the nth item is selected is proportional to the unit fraction 1/n. The energy levels of photons that can be absorbed or emitted by a hydrogen atom are, according to the Rydberg formula, proportional to the differences of two unit fractions. An explanation for this phenomenon is provided by the Bohr model, according to which the energy levels of electron orbitals in a hydrogen atom are inversely proportional to square unit fractions, and the energy of a photon is quantized to the difference between two levels. Arthur Eddington argued that the fine structure constant was a unit fraction, first 1/136 then 1/137. This contention has been falsified, given that current estimates of the fine structure constant are (to 6 significant digits) 1/137.036. Addition is a mathematical operation that represents the total amount of objects together in a collection. It is signified by the plus sign (+). For example, in the picture on the right, there are 3 + 2 apples—meaning three apples and two apples together, which is a total of 5 apples. Therefore, 3 + 2 = 5. Besides counting fruits, addition can also represent combining other physical and abstract quantities using different kinds of objects: negative numbers, fractions, irrational numbers, vectors, decimals, functions, matrices and more. Addition follows several important patterns. It is commutative, meaning that order does not matter, and it is associative, meaning that when one adds more than two numbers, order in which addition is performed does not matter (see Summation). Repeated addition of 1 is the same as counting; addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication. All of these rules can be proven, starting with the addition of natural numbers and generalizing up through the real numbers and beyond. General binary operations that continue these patterns are studied in abstract algebra. Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months and even some animals. In primary education, students are taught to add numbers in the decimal system, starting with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancient abacus to the modern computer, where research on the most efficient implementations of addition continues to this day. Addition is written using the plus sign "+" between the terms; that is, in infix notation. The result is expressed with an equals sign. For example, There are also situations where addition is "understood" even though no symbol appears: The sum of a series of related numbers can be expressed through capital sigma notation, which compactly denotes iteration. For example, The numbers or the objects to be added in general addition are called the terms, the addends, or the summands; this terminology carries over to the summation of multiple terms. This is to be distinguished from factors, which are multiplied. Some authors call the first addend the augend. In fact, during the Renaissance, many authors did not consider the first addend an "addend" at all. Today, due to the commutative property of addition, "augend" is rarely used, and both terms are generally called addends. All of this terminology derives from Latin. "Addition" and "add" are English words derived from the Latin verb addere, which is in turn a compound of ad "to" and dare "to give", from the Proto-Indo-European root "to give"; thus to add is to give to. Using the gerundive suffix -nd results in "addend", "thing to be added". Likewise from augere "to increase", one gets "augend", "thing to be increased". "Sum" and "summand" derive from the Latin noun summa "the highest, the top" and associated verb summare. This is appropriate not only because the sum of two positive numbers is greater than either, but because it was once common to add upward, contrary to the modern practice of adding downward, so that a sum was literally higher than the addends. Addere and summare date back at least to Boethius, if not to earlier Roman writers such as Vitruvius and Frontinus; Boethius also used several other terms for the addition operation. The later Middle English terms "adden" and "adding" were popularized by Chaucer. Addition is used to model countless physical processes. Even for the simple case of adding natural numbers, there are many possible interpretations and even more visual representations. Possibly the most fundamental interpretation of addition lies in combining sets: This interpretation is easy to visualize, with little danger of ambiguity. It is also useful in higher mathematics; for the rigorous definition it inspires, see Natural numbers below. However, it is not obvious how one should extend this version of addition to include fractional numbers or negative numbers. One possible fix is to consider collections of objects that can be easily divided, such as pies or, still better, segmented rods. Rather than just combining collections of segments, rods can be joined end-to-end, which illustrates another conception of addition: adding not the rods but the lengths of the rods. A second interpretation of addition comes from extending an initial length by a given length: The sum a + b can be interpreted as a binary operation that combines a and b, in an algebraic sense, or it can be interpreted as the addition of b more units to a. Under the latter interpretation, the parts of a sum a + b play asymmetric roles, and the operation a + b is viewed as applying the unary operation +b to a. Instead of calling both a and b addends, it is more appropriate to call a the augend in this case, since a plays a passive role. The unary view is also useful when discussing subtraction, because each unary addition operation has an inverse unary subtraction operation, and vice versa. Addition is commutative, meaning that one can reverse the terms in a sum left-to-right, and the result is the same as the last one. Symbolically, if a and b are any two numbers, then The fact that addition is commutative is known as the "commutative law of addition". This phrase suggests that there are other commutative laws: for example, there is a commutative law of multiplication. However, many binary operations are not commutative, such as subtraction and division, so it is misleading to speak of an unqualified "commutative law". A somewhat subtler property of addition is associativity, which comes up when one tries to define repeated addition. Should the expression be defined to mean (a + b) + c or a + (b + c)? That addition is associative tells us that the choice of definition is irrelevant. For any three numbers a, b, and c, it is true that For example, (1 + 2) + 3 = 3 + 3 = 6 = 1 + 5 = 1 + (2 + 3). Not all operations are associative, so in expressions with other operations like subtraction, it is important to specify the order of operations. When adding zero to any number, the quantity does not change; zero is the identity element for addition, also known as the additive identity. In symbols, for any a, This law was first identified in Brahmagupta's Brahmasphutasiddhanta in 628 AD, although he wrote it as three separate laws, depending on whether a is negative, positive, or zero itself, and he used words rather than algebraic symbols. Later Indian mathematicians refined the concept; around the year 830, Mahavira wrote, "zero becomes the same as what is added to it", corresponding to the unary statement 0 + a = a. In the 12th century, Bhaskara wrote, "In the addition of cipher, or subtraction of it, the quantity, positive or negative, remains the same", corresponding to the unary statement a + 0 = a. In the context of integers, addition of one also plays a special role: for any integer a, the integer (a + 1) is the least integer greater than a, also known as the successor of a. Because of this succession, the value of some a + b can also be seen as the $b^{th}$ successor of a, making addition iterated succession. To numerically add physical quantities with units, they must first be expressed with common units. For example, if a measure of 5 feet is extended by 2 inches, the sum is 62 inches, since 60 inches is synonymous with 5 feet. On the other hand, it is usually meaningless to try to add 3 meters and 4 square meters, since those units are incomparable; this sort of consideration is fundamental in dimensional analysis. Studies on mathematical development starting around the 1980s have exploited the phenomenon of habituation: infants look longer at situations that are unexpected. A seminal experiment by Karen Wynn in 1992 involving Mickey Mouse dolls manipulated behind a screen demonstrated that five-month-old infants expect 1 + 1 to be 2, and they are comparatively surprised when a physical situation seems to imply that 1 + 1 is either 1 or 3. This finding has since been affirmed by a variety of laboratories using different methodologies. Another 1992 experiment with older toddlers, between 18 to 35 months, exploited their development of motor control by allowing them to retrieve ping-pong balls from a box; the youngest responded well for small numbers, while older subjects were able to compute sums up to 5. Even some nonhuman animals show a limited ability to add, particularly primates. In a 1995 experiment imitating Wynn's 1992 result (but using eggplants instead of dolls), rhesus macaques and cottontop tamarins performed similarly to human infants. More dramatically, after being taught the meanings of the Arabic numerals 0 through 4, one chimpanzee was able to compute the sum of two numerals without further training. Typically, children first master counting. When given a problem that requires that two items and three items be combined, young children model the situation with physical objects, often fingers or a drawing, and then count the total. As they gain experience, they learn or discover the strategy of "counting-on": asked to find two plus three, children count three past two, saying "three, four, five" (usually ticking off fingers), and arriving at five. This strategy seems almost universal; children can easily pick it up from peers or teachers. Most discover it independently. With additional experience, children learn to add more quickly by exploiting the commutativity of addition by counting up from the larger number, in this case starting with three and counting "four, five." Eventually children begin to recall certain addition facts ("number bonds"), either through experience or rote memorization. Once some facts are committed to memory, children begin to derive unknown facts from known ones. For example, a child asked to add six and seven may know that 6+6=12 and then reason that 6+7 is one more, or 13. Such derived facts can be found very quickly and most elementary school student eventually rely on a mixture of memorized and derived facts to add fluently. The prerequisite to addition in the decimal system is the fluent recall or derivation of the 100 single-digit "addition facts". One could memorize all the facts by rote, but pattern-based strategies are more enlightening and, for most people, more efficient: As students grow older, they commit more facts to memory, and learn to derive other facts rapidly and fluently. Many students never commit all the facts to memory, but can still find any basic fact quickly. The standard algorithm for adding multidigit numbers is to align the addends vertically and add the columns, starting from the ones column on the right. If a column exceeds ten, the extra digit is "carried" into the next column. An alternate strategy starts adding from the most significant digit on the left; this route makes carrying a little clumsier, but it is faster at getting a rough estimate of the sum. There are many other alternative methods. Analog computers work directly with physical quantities, so their addition mechanisms depend on the form of the addends. A mechanical adder might represent two addends as the positions of sliding blocks, in which case they can be added with an averaging lever. If the addends are the rotation speeds of two shafts, they can be added with a differential. A hydraulic adder can add the pressures in two chambers by exploiting Newton's second law to balance forces on an assembly of pistons. The most common situation for a general-purpose analog computer is to add two voltages (referenced to ground); this can be accomplished roughly with a resistor network, but a better design exploits an operational amplifier. Addition is also fundamental to the operation of digital computers, where the efficiency of addition, in particular the carry mechanism, is an important limitation to overall performance. Blaise Pascal invented the mechanical calculator in 1642, it was the first operational adding machine. It made use of an ingenious gravity-assisted carry mechanism. It was the only operational mechanical calculator in the 17th century and the earliest automatic, digital computers. Pascal's calculator was limited by its carry mechanism which forced its wheels to only turn one way, so it could add but, to subtract, the operator had to use of the method of complements which required as many steps as an addition. Pascal was followed by Giovanni Poleni who built the second functional mechanical calculator in 1709, a calculating clock, which was made of wood and which could, once setup, multiply two numbers automatically. Adders execute integer addition in electronic digital computers, usually using binary arithmetic. The simplest architecture is the ripple carry adder, which follows the standard multi-digit algorithm. One slight improvement is the carry skip design, again following human intuition; one does not perform all the carries in computing 999 + 1, but one bypasses the group of 9s and skips to the answer. Since they compute digits one at a time, the above methods are too slow for most modern purposes. In modern digital computers, integer addition is typically the fastest arithmetic instruction, yet it has the largest impact on performance, since it underlies all the floating-point operations as well as such basic tasks as address generation during memory access and fetching instructions during branching. To increase speed, modern designs calculate digits in parallel; these schemes go by such names as carry select, carry lookahead, and the Ling pseudocarry. Almost all modern implementations are, in fact, hybrids of these last three designs. Unlike addition on paper, addition on a computer often changes the addends. On the ancient abacus and adding board, both addends are destroyed, leaving only the sum. The influence of the abacus on mathematical thinking was strong enough that early Latin texts often claimed that in the process of adding "a number to a number", both numbers vanish. In modern times, the ADD instruction of a microprocessor replaces the augend with the sum but preserves the addend. In a high-level programming language, evaluating a + b does not change either a or b; if the goal is to replace a with the sum this must be explicitly requested, typically with the statement a = a + b. Some languages such as C or C++ allow this to be abbreviated as a += b. To prove the usual properties of addition, one must first define addition for the context in question. Addition is first defined on the natural numbers. In set theory, addition is then extended to progressively larger sets that include the natural numbers: the integers, the rational numbers, and the real numbers. (In mathematics education, positive fractions are added before negative numbers are even considered; this is also the historical route) There are two popular ways to define the sum of two natural numbers a and b. If one defines natural numbers to be the cardinalities of finite sets, (the cardinality of a set is the number of elements in the set), then it is appropriate to define their sum as follows: Here, A U B is the union of A and B. An alternate version of this definition allows A and B to possibly overlap and then takes their disjoint union, a mechanism that allows common elements to be separated out and therefore counted twice. The other popular definition is recursive: Again, there are minor variations upon this definition in the literature. Taken literally, the above definition is an application of the Recursion Theorem on the poset N2. On the other hand, some sources prefer to use a restricted Recursion Theorem that applies only to the set of natural numbers. One then considers a to be temporarily "fixed", applies recursion on b to define a function "a + ", and pastes these unary operations for all a together to form the full binary operation. This recursive formulation of addition was developed by Dedekind as early as 1854, and he would expand upon it in the following decades. He proved the associative and commutative properties, among others, through mathematical induction; for examples of such inductive proofs, see Addition of natural numbers. The simplest conception of an integer is that it consists of an absolute value (which is a natural number) and a sign (generally either positive or negative). The integer zero is a special third case, being neither positive nor negative. The corresponding definition of addition must proceed by cases: Although this definition can be useful for concrete problems, it is far too complicated to produce elegant general proofs; there are too many cases to consider. A much more convenient conception of the integers is the Grothendieck group construction. The essential observation is that every integer can be expressed (not uniquely) as the difference of two natural numbers, so we may as well define an integer as the difference of two natural numbers. Addition is then defined to be compatible with subtraction: Addition of rational numbers can be computed using the least common denominator, but a conceptually simpler definition involves only integer addition and multiplication: The commutativity and associativity of rational addition is an easy consequence of the laws of integer arithmetic. For a more rigorous and general discussion, see field of fractions. A common construction of the set of real numbers is the Dedekind completion of the set of rational numbers. A real number is defined to be a Dedekind cut of rationals: a non-empty set of rationals that is closed downward and has no greatest element. The sum of real numbers a and b is defined element by element: This definition was first published, in a slightly modified form, by Richard Dedekind in 1872. The commutativity and associativity of real addition are immediate; defining the real number 0 to be the set of negative rationals, it is easily seen to be the additive identity. Probably the trickiest part of this construction pertaining to addition is the definition of additive inverses. Unfortunately, dealing with multiplication of Dedekind cuts is a case-by-case nightmare similar to the addition of signed integers. Another approach is the metric completion of the rational numbers. A real number is essentially defined to be the a limit of a Cauchy sequence of rationals, lim a[n]. Addition is defined term by term: This definition was first published by Georg Cantor, also in 1872, although his formalism was slightly different. One must prove that this operation is well-defined, dealing with co-Cauchy sequences. Once that task is done, all the properties of real addition follow immediately from the properties of rational numbers. Furthermore, the other arithmetic operations, including multiplication, have straightforward, analogous definitions. There are many binary operations that can be viewed as generalizations of the addition operation on the real numbers. The field of abstract algebra is centrally concerned with such generalized operations, and they also appear in set theory and category theory. In linear algebra, a vector space is an algebraic structure that allows for adding any two vectors and for scaling vectors. A familiar vector space is the set of all ordered pairs of real numbers; the ordered pair (a,b) is interpreted as a vector from the origin in the Euclidean plane to the point (a,b) in the plane. The sum of two vectors is obtained by adding their individual coordinates: This addition operation is central to classical mechanics, in which vectors are interpreted as forces. In modular arithmetic, the set of integers modulo 12 has twelve elements; it inherits an addition operation from the integers that is central to musical set theory. The set of integers modulo 2 has just two elements; the addition operation it inherits is known in Boolean logic as the "exclusive or" function. In geometry, the sum of two angle measures is often taken to be their sum as real numbers modulo 2π. This amounts to an addition operation on the circle, which in turn generalizes to addition operations on many-dimensional tori. The general theory of abstract algebra allows an "addition" operation to be any associative and commutative operation on a set. Basic algebraic structures with such an addition operation include commutative monoids and abelian groups. A far-reaching generalization of addition of natural numbers is the addition of ordinal numbers and cardinal numbers in set theory. These give two different generalizations of addition of natural numbers to the transfinite. Unlike most addition operations, addition of ordinal numbers is not commutative. Addition of cardinal numbers, however, is a commutative operation closely related to the disjoint union operation. In category theory, disjoint union is seen as a particular case of the coproduct operation, and general coproducts are perhaps the most abstract of all the generalizations of addition. Some coproducts, such as Direct sum and Wedge sum, are named to evoke their connection with addition. Subtraction can be thought of as a kind of addition—that is, the addition of an additive inverse. Subtraction is itself a sort of inverse to addition, in that adding x and subtracting x are inverse functions. Given a set with an addition operation, one cannot always define a corresponding subtraction operation on that set; the set of natural numbers is a simple example. On the other hand, a subtraction operation uniquely determines an addition operation, an additive inverse operation, and an additive identity; for this reason, an additive group can be described as a set that is closed under subtraction. Multiplication can be thought of as repeated addition. If a single term x appears in a sum n times, then the sum is the product of n and x. If n is not a natural number, the product may still make sense; for example, multiplication by −1 yields the additive inverse of a number. In the real and complex numbers, addition and multiplication can be interchanged by the exponential function: This identity allows multiplication to be carried out by consulting a table of logarithms and computing addition by hand; it also enables multiplication on a slide rule. The formula is still a good first-order approximation in the broad context of Lie groups, where it relates multiplication of infinitesimal group elements with addition of vectors in the associated Lie algebra. There are even more generalizations of multiplication than addition. In general, multiplication operations always distribute over addition; this requirement is formalized in the definition of a ring. In some contexts, such as the integers, distributivity over addition and the existence of a multiplicative identity is enough to uniquely determine the multiplication operation. The distributive property also provides information about addition; by expanding the product (1 + 1)(a + b) in both ways, one concludes that addition is forced to be commutative. For this reason, ring addition is commutative in general. Division is an arithmetic operation remotely related to addition. Since a/b = a(b−1), division is right distributive over addition: (a + b) / c = a / c + b / c. However, division is not left distributive over addition; 1/ (2 + 2) is not the same as 1/2 + 1/2. The maximum operation "max (a, b)" is a binary operation similar to addition. In fact, if two nonnegative numbers a and b are of different orders of magnitude, then their sum is approximately equal to their maximum. This approximation is extremely useful in the applications of mathematics, for example in truncating Taylor series. However, it presents a perpetual difficulty in numerical analysis, essentially since "max" is not invertible. If b is much greater than a, then a straightforward calculation of (a + b) − b can accumulate an unacceptable round-off error, perhaps even returning zero. See also Loss of significance. The approximation becomes exact in a kind of infinite limit; if either a or b is an infinite cardinal number, their cardinal sum is exactly equal to the greater of the two. Accordingly, there is no subtraction operation for infinite cardinals. Maximization is commutative and associative, like addition. Furthermore, since addition preserves the ordering of real numbers, addition distributes over "max" in the same way that multiplication distributes over addition: For these reasons, in tropical geometry one replaces multiplication with addition and addition with maximization. In this context, addition is called "tropical multiplication", maximization is called "tropical addition", and the tropical "additive identity" is negative infinity. Some authors prefer to replace addition with minimization; then the additive identity is positive infinity. Tying these observations together, tropical addition is approximately related to regular addition through the logarithm: which becomes more accurate as the base of the logarithm increases. The approximation can be made exact by extracting a constant h, named by analogy with Planck's constant from quantum mechanics, and taking the "classical limit" as h tends to zero: In this sense, the maximum operation is a dequantized version of addition. Incrementation, also known as the successor operation, is the addition of 1 to a number. Summation describes the addition of arbitrarily many numbers, usually more than just two. It includes the idea of the sum of a single number, which is itself, and the empty sum, which is zero. An infinite summation is a delicate procedure known as a series. Counting a finite set is equivalent to summing 1 over the set. Integration is a kind of "summation" over a continuum, or more precisely and generally, over a differentiable manifold. Integration over a zero-dimensional manifold reduces to summation. Linear combinations combine multiplication and summation; they are sums in which each term has a multiplier, usually a real or complex number. Linear combinations are especially useful in contexts where straightforward addition would violate some normalization rule, such as mixing of strategies in game theory or superposition of states in quantum mechanics. Convolution is used to add two independent random variables defined by distribution functions. Its usual definition combines integration, subtraction, and multiplication. In general, convolution is useful as a kind of domain-side addition; by contrast, vector addition is a kind of range-side addition. Mathematical analysis is a branch of mathematics that includes the theories of differentiation, integration, measure, limits, infinite series, and analytic functions. These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished from geometry. However, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space). Early results in analysis were implicitly present in the early days of ancient Greek mathematics. For instance, an infinite geometric sum is implicit in Zeno's paradox of the dichotomy. Later, Greek mathematicians such as Eudoxus and Archimedes made more explicit, but informal, use of the concepts of limits and convergence when they used the method of exhaustion to compute the area and volume of regions and solids. In India, the 12th century mathematician Bhāskara II gave examples of the derivative and used what is now known as Rolle's theorem. Elementary arithmetic is the simplified portion of arithmetic which includes the operations of addition, subtraction, multiplication, and division. Elementary arithmetic starts with the natural numbers and the written symbols (digits) which represent them. The process for combining a pair of these numbers with the four basic operations traditionally relies on memorized results for small values of numbers, including the contents of a multiplication table to assist with multiplication and division. Quote notation is a number system for representing rational numbers which was designed to be attractive for use in computer architecture. In a typical computer architecture, the representation and manipulation of rational numbers is a complex topic. In Quote notation, arithmetic operations take particularly simple, consistent forms, and can produce exact answers with no roundoff error. Quote notation’s arithmetic algorithms work with a typical right-to-left direction, in which the addition, subtraction, and multiplication algorithms have the same complexity for natural numbers, and division is easier than a typical division algorithm. In mathematics, a continued fraction is an expression obtained through an iterative process of representing a number as the sum of its integer part and the reciprocal of another number, then writing this other number as the sum of its integer part and another reciprocal, and so on. In a finite continued fraction (or terminated continued fraction), the iteration/recursion is terminated after finitely many steps by using an integer in lieu of another continued fraction. In contrast, an infinite continued fraction is an infinite expression. In either case, all integers in the sequence, other than the first, must be positive. The integers a[i] are called the coefficients or terms of the continued fraction. Continued fractions have a number of remarkable properties related to the Euclidean algorithm for integers or real numbers. Every rational number pq has two closely related expressions as a finite continued fraction, whose coefficients a[i] can be determined by applying the Euclidean algorithm to (p,q). The numerical value of an infinite continued fraction will be irrational; it is defined from its infinite sequence of integers as the limit of a sequence of values for finite continued fractions. Each finite continued fraction of the sequence is obtained by using a finite prefix of the infinite continued fraction's defining sequence of integers. Moreover, every irrational number α is the value of a unique infinite continued fraction, whose coefficients can be found using the non-terminating version of the Euclidean algorithm applied to the incommensurable values α and 1. This way of expressing real numbers (rational and irrational) is called their continued fraction representation. Related Websites:
{"url":"http://answerparty.com/question/answer/how-do-you-subtract-a-fraction-problem-give-example","timestamp":"2014-04-18T04:05:37Z","content_type":null,"content_length":"152301","record_id":"<urn:uuid:c557bfa0-040d-4c82-a7a3-b568ccb37d95>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Computer proofs in group theory , 1991 "... We have formally specified a substantial subset of the MC68020, a widely used microprocessor built by Motorola, within the mathematical logic of the automated reasoning system Nqthm, i.e., the Boyer-Moore Theorem Prover [4]. Using this MC68020 specification, we have mechanically checked the correctn ..." Cited by 32 (2 self) Add to MetaCart We have formally specified a substantial subset of the MC68020, a widely used microprocessor built by Motorola, within the mathematical logic of the automated reasoning system Nqthm, i.e., the Boyer-Moore Theorem Prover [4]. Using this MC68020 specification, we have mechanically checked the correctness of MC68020 machine code programs for Euclid's GCD, Hoare's Quick Sort, binary search, and other well-known algorithms. The machine code for these examples was generated using the Gnu C and the Verdix Ada compilers. We have developed an extensive library of proven lemmas to facilitate automated reasoning about machine code programs. We describe a two stage methodology we use to do our machine code proofs. , 1990 "... We briefly review a mechanical theorem-prover for a logic of recursive functions over finitely generated objects including the integers, ordered pairs, and symbols. The prover, known both as NQTHM and as the Boyer-Moore prover, contains a mechanized principle of induction and implementations of line ..." Cited by 24 (0 self) Add to MetaCart We briefly review a mechanical theorem-prover for a logic of recursive functions over finitely generated objects including the integers, ordered pairs, and symbols. The prover, known both as NQTHM and as the Boyer-Moore prover, contains a mechanized principle of induction and implementations of linear resolution, rewriting, and arithmetic decision procedures. We describe some applications of the prover, including a proof of the correct implementation of a higher level language on a microprocessor defined at the gate level. We also describe the ongoing project of recoding the entire prover as an applicative function within its own logic. - Journal of Automated Reasoning , 1996 "... Abstract. Fairly deep results of Zermelo-Frænkel (ZF) set theory have been mechanized using the proof assistant Isabelle. The results concern cardinal arithmetic and the Axiom of Choice (AC). A key result about cardinal multiplication is κ ⊗ κ = κ, where κ is any infinite cardinal. Proving this resu ..." Cited by 16 (9 self) Add to MetaCart Abstract. Fairly deep results of Zermelo-Frænkel (ZF) set theory have been mechanized using the proof assistant Isabelle. The results concern cardinal arithmetic and the Axiom of Choice (AC). A key result about cardinal multiplication is κ ⊗ κ = κ, where κ is any infinite cardinal. Proving this result required developing theories of orders, order-isomorphisms, order types, ordinal arithmetic, cardinals, etc.; this covers most of Kunen, Set Theory, Chapter I. Furthermore, we have proved the equivalence of 7 formulations of the Well-ordering Theorem and 20 formulations of AC; this covers the first two chapters of Rubin and Rubin, Equivalents of the Axiom of Choice, and involves highly technical material. The definitions used in the proofs are , 1999 "... The concept of locales for Isabelle enables local definition and assumption for interactive mechanical proofs. Furthermore, dependent types are constructed in Isabelle/HOL for first class representation of structure. These two concepts are introduced briefly. Although each of them has proved use ..." Cited by 13 (2 self) Add to MetaCart The concept of locales for Isabelle enables local definition and assumption for interactive mechanical proofs. Furthermore, dependent types are constructed in Isabelle/HOL for first class representation of structure. These two concepts are introduced briefly. Although each of them has proved useful in itself, their real power lies in combination. This paper illustrates by examples from abstract algebra how this combination works and argues that it enables modular reasoning. , 1995 "... A growing corpus of mathematics has been checked by machine. Researchers have constructed computer proofs of results in logic [23], number theory [22], group theory [25],-calculus [9], etc. An especially wide variety of results have been mechanised using the Mizar Proof Checker and published in the ..." Add to MetaCart A growing corpus of mathematics has been checked by machine. Researchers have constructed computer proofs of results in logic [23], number theory [22], group theory [25],-calculus [9], etc. An especially wide variety of results have been mechanised using the Mizar Proof Checker and published in the Mizar journal [6]. However,
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1656749","timestamp":"2014-04-20T13:40:42Z","content_type":null,"content_length":"22201","record_id":"<urn:uuid:c9d9140b-1d50-4a21-99d2-067f116eb682>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
Edinburgh Psychology R-users Structural Equation Modeling The SEM package To do SEM in R, you need to load either OpenMx, or John Fox's sem package. The former is extremely powerful and flexible, the latter allows a range of fairly simple SEM designs to be specified with an easy to learn syntax. Judea Pearl has made big changes to people's thinking about causality over the last decade, placing thinking in causal terms on a firm mathematical footing. Directed acyclic graphs are key to this change, and SEM uses DAGs. Because of this, the common stand off between economics and statistics, exemplified in comments such as this A cynical view of SEMs is that their popularity in the social sciences re flects the legitimacy that the models appear to lend to causal interpretation of observational data, when in fact such interpretation is no less problematic than for other kinds of regression models applied to observational data. John Fox on SEM (from a useful Appendix) is undergoing a re-think. A key change is the recognition that a correlation is not merely "not causation", but rather reflects an unresolved causal Hypothesis testing We can choose between models. Thus without knowing where the truth is, we can move towards it by moving to better fitting models, according to their fit. Fit statistics "There is a veritable cottage industry in ad-hoc fit indices and their evaluation," says John Fox. From R's sem package you get: • Model $\chi^2$, with a test of the likehood of this given $H_0$. Represents a comparison of the model implied covariance matrix with the original sample covariance matrix. You want a low $\chi^2$ , high p. For sufficient sample sizes, it's argued that p will always be less than 0.05 despite being a good fit. • $\chi^2$ (null model). This is the same as above, except the implied matrix is just the diagonal of the sample matrix, with other cells set to zero. • Goodness-of-fit index (GFI). See AGFI. • Adjusted goodness-of-fit index (AGFI; "it is probably fair to say that the GFI and AGFI are of little pratical value," says John Fox's Appendix) • RMSEA index, with 90% CIs ("perhaps more attractive than the others," says John Fox.) It compares the model to a saturated population model; $\le 0.05$ is good • Bentler-Bonnett NFI • Tucker-Lewis NNFI • Bentler CFI • SRMR • Bayesian Information Criterion (BIC) ("In contrast with ad-hoc fi t indices […] has a sound statistical basis," says Fox) Non-gaussian variables. You can fit models of ordinal variables using polychoric correlations and bootstrapping to come up with respectable estimates of standard error. See (Fox 2006, p. 481). Useful literature page revision: 30, last edited: 12 Mar 2012 22:15
{"url":"http://psy-ed.wikidot.com/sem","timestamp":"2014-04-20T03:10:48Z","content_type":null,"content_length":"21814","record_id":"<urn:uuid:0ed88b65-e33d-4714-9cd7-6535cbf65fcb>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
Convert kilofarad to nano farad - Conversion of Measurement Units ›› Convert kilofarad to nanofarad [SI standard] Did you mean to convert kilofarad to nanofarad [international] nanofarad [SI standard] ›› More information from the unit converter How many kilofarad in 1 nano farad? The answer is 1.0E-12. We assume you are converting between kilofarad and nanofarad [SI standard]. You can view more details on each measurement unit: kilofarad or nano farad The SI derived unit for capacitance is the farad. 1 farad is equal to 0.001 kilofarad, or 1000000000 nano farad. Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between kilofarads and nanofarads. Type in your own numbers in the form to convert the units! ›› Definition: Kilofarad The SI prefix "kilo" represents a factor of 10^3, or in exponential notation, 1E3. So 1 kilofarad = 10^3 farads. ›› Metric conversions and more ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more! This page was loaded in 0.0039 seconds.
{"url":"http://www.convertunits.com/from/kilofarad/to/nano+farad","timestamp":"2014-04-18T18:12:45Z","content_type":null,"content_length":"20490","record_id":"<urn:uuid:cec8d6a5-cc7f-4894-bd20-9c4b288420dc>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Integration by parts query. Getting 0 as answer, think its wrong, help!! May 3rd 2012, 01:37 PM #1 Feb 2011 Integration by parts query. Getting 0 as answer, think its wrong, help!! I'm asked to integrate e^2xcosxdx. I have used intefration by parts letting u=e^2x and dv=cosxdx. This results in e^2xsinx-the integral of 2sinxe^2x. I then integrated by parts again and the overall result was if I = integral of e^2xcosxdx then I= e^2xsinxdx-e^2xsinxdx + 0.5( I) 0.5I = 0 Therefore I=0 Is this wrong? Re: Integration by parts query. Getting 0 as answer, think its wrong, help!! I'm asked to integrate e^2xcosxdx. I have used intefration by parts letting u=e^2x and dv=cosxdx. This results in e^2xsinx-the integral of 2sinxe^2x. I then integrated by parts again and the overall result was if -I = integral of e^2xcosxdx then I= e^2xsinxdx-e^2xsinxdx + 0.5( I) 0.5I = 0Therefore I=0 Is this wrong? You are making a sign error. Add that last integral to both sides and then divide by 2. Re: Integration by parts query. Getting 0 as answer, think its wrong, help!! Still not sure where I'm going wrong. Could you explain further? Thanks Re: Integration by parts query. Getting 0 as answer, think its wrong, help!! You should end up with the same integration but with a minus sign on the right side. You move that integration to the left hand side and then divide by two. So you should end up with something like: I= .......... -I (move right hand integration to left hand side) 2I = .............. (divide by two) I = 1/2(..........) (Final answer) Sorry i didn't do the integration. In class right now =p Re: Integration by parts query. Getting 0 as answer, think its wrong, help!! Re: Integration by parts query. Getting 0 as answer, think its wrong, help!! $\int e^{2x}\cos{x} \, dx$ $u = \cos{x} , du = -\sin{x} \, dx$ $dv = e^{2x} \, dx , v = \frac{1}{2}e^{2x}$ $\int e^{2x}\cos{x} \, dx = \frac{1}{2}e^{2x}\cos{x} + \frac{1}{2}\int e^{2x} \sin{x} \, dx$ $p = \sin{x} , dp = \cos{x} \, dx$ $dq = e^{2x} \, dx , q = \frac{1}{2}e^{2x}$ $\int e^{2x}\cos{x} \, dx = \frac{1}{2}e^{2x}\cos{x} + \frac{1}{2}\left[\frac{1}{2}e^{2x}\sin{x} - \frac{1}{2}\int e^{2x}\cos{x} \, dx\right]$ $\int e^{2x}\cos{x} \, dx = \frac{1}{2}e^{2x}\cos{x} + \frac{1}{4}e^{2x}\sin{x} - \frac{1}{4}\int e^{2x}\cos{x} \, dx$ $\frac{5}{4} \int e^{2x}\cos{x} \, dx = \frac{1}{2}e^{2x}\cos{x} + \frac{1}{4}e^{2x}\sin{x} + C$ $\int e^{2x}\cos{x} \, dx = \frac{2}{5}e^{2x}\cos{x} + \frac{1}{5}e^{2x}\sin{x} + C$ Re: Integration by parts query. Getting 0 as answer, think its wrong, help!! Thanks a lot, got it wrong way around! May 3rd 2012, 01:44 PM #2 May 3rd 2012, 02:06 PM #3 Feb 2011 May 3rd 2012, 02:13 PM #4 May 2012 May 3rd 2012, 02:16 PM #5 May 2012 May 3rd 2012, 02:34 PM #6 May 3rd 2012, 02:45 PM #7 Feb 2011
{"url":"http://mathhelpforum.com/calculus/198312-integration-parts-query-getting-0-answer-think-its-wrong-help.html","timestamp":"2014-04-16T05:28:56Z","content_type":null,"content_length":"52068","record_id":"<urn:uuid:a206ac5e-fdcb-461a-be15-405742962109>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
Augmenting Genetic Algorithms with Memory to Solve Traveling Salesman Problems, Sushil J. Louis, Gong Li, Dept. of Computer Science University of Nevada, (1997). . Eshelman, L.J., "The CHC Adaptive Search Algorithm: How to Have Safe Search When Engaging in Nontraditional Genetic Recombination", In: Foundations of Genetic Algorithms, San Mateo, CA: Morgan Kaufmann Publishers, G.J.E. Rawlins, ed., pp. 265-283, (1991). . Fogel, D.B., "An Evolutionary Approach to the Traveling Salesman Problem", Biological Cybernetics, 60 (2) , pp. 139-144, (1988). . Grefenstette, J., et al., "Genetic Algorithms for the Traveling Salesman Problem", Proceedings of an International Conference on Genetic Algorithms and their Applications, pp. 160-168, (Jul. 1985). . Kalantari, B., et al., "An algorithm for the traveling salesman problem with pickup and delivery customers", European Journal of Operational Research, 22 (3), pp. 377-386, (1985). . Mitchell, M., "Genetic Algorithms: An Overview", In: Introduction to Genetic Algorithms, Chapter 1, Cambridge, Mass: The MIT Press, M. Mitchell, ed., pp. 1-34, (1996). . Yip, P.P., et al., "A New Approach to the Traveling Salesman Problem", Proceedings of the International Joint Conference on Neural Networks, vol. 3 of 2, Nagoya Congress Center, Japan, pp. 1569-1572, (Oct. 1993)..
{"url":"http://patents.com/us-6904421.html","timestamp":"2014-04-19T17:41:31Z","content_type":null,"content_length":"78554","record_id":"<urn:uuid:71dcf5b8-e2aa-4ed8-82a0-69e31fadb2cc>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2011/570 Degree of regularity for HFE-Jintai Ding and Thorsten KleinjungAbstract: In this paper, we prove a closed formula for the degree of regularity of the family of HFE- (HFE Minus) multivariate public key cryptosystems over a finite field of size $q$. The degree of regularity of the polynomial system derived from an HFE- system is less than or equal to \begin{eqnarray*} \frac{(q-1)(\lfloor \log_q(D-1)\rfloor +a)}2 +2 & & \text{if $q$ is even and $r+a$ is odd,} \\ \frac{(q-1)(\lfloor \log_q(D-1)\rfloor+a+1)}2 +2 & & \text{otherwise.} \end{eqnarray*} Here $q$ is the base field size, $D$ the degree of the HFE polynomial, $r=\lfloor \log_q(D-1)\rfloor +1$ and $a$ is the number of removed equations (Minus number). This allows us to present an estimate of the complexity of breaking the HFE Challenge 2: \vskip .1in \begin{itemize} \item the complexity to break the HFE Challenge 2 directly using algebraic solvers is about $2^{96}$. \end{itemize} Category / Keywords: public-key cryptography / multivariate, degree of regularityDate: received 21 Oct 2011Contact author: jintai ding at gmail comAvailable format(s): PDF | BibTeX Citation Version: 20111025:170048 (All versions of this report) Discussion forum: Show discussion | Start new discussion[ Cryptology ePrint archive ]
{"url":"http://eprint.iacr.org/2011/570","timestamp":"2014-04-17T06:47:08Z","content_type":null,"content_length":"2609","record_id":"<urn:uuid:519a2a30-773a-4deb-99ef-78453bbdc2f7>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
Hillsborough, CA Science Tutor Find a Hillsborough, CA Science Tutor I am working on my PhD in Classics from the University of Pittsburgh, while living in the Bay Area. I have received my Master's in Classics from UPitt and graduated cum laude from Pomona College with a double major in Classics and Geology. I have taught college-level Latin, semesters 1-3 as a TA and TF in Pittsburgh, as well as courses in classical mythology and Roman civilization. 5 Subjects: including physical science, geology, Latin, Greek ...As an undergrad at Harvey Mudd, I helped design and teach a class on the software and hardware co-design of a GPS system, which was both a challenging and rewarding experience. I offer tutoring for all levels of math and science as well as test preparation. I will also proofread and help with technical writing, as I believe good communication skills are very important. 27 Subjects: including physics, elementary math, differential equations, linear algebra ...I also trained my students in composing well-written philosophy papers. I currently teach Executive Functioning (organizational and study skills) through a tutoring agency that I work with. Besides my experience directly tutoring it, I have received approximately 10 hours of direct training in this area from a tutor/mentor. 29 Subjects: including ACT Science, philosophy, reading, English ...I have a passion for teaching English and Spanish as a second language and for sharing my knowledge about culture. About me: -M.A. TESOL (Teaching English to Speakers of Other Languages), B.A. in Cultural Anthropology and Latin American Studies. -3 Years of experience teaching Academic English to international students. -7 years of experience tutoring Spanish to all levels. 13 Subjects: including anthropology, Spanish, English, reading ...I specialize in tutoring high school mathematics, such as geometry, algebra, precalculus, and calculus, as well as AP physics. In addition, I have significant experience tutoring students in lower division college mathematics courses such as calculus, multivariable calculus, linear algebra and d... 25 Subjects: including physics, calculus, physical science, astronomy Related Hillsborough, CA Tutors Hillsborough, CA Accounting Tutors Hillsborough, CA ACT Tutors Hillsborough, CA Algebra Tutors Hillsborough, CA Algebra 2 Tutors Hillsborough, CA Calculus Tutors Hillsborough, CA Geometry Tutors Hillsborough, CA Math Tutors Hillsborough, CA Prealgebra Tutors Hillsborough, CA Precalculus Tutors Hillsborough, CA SAT Tutors Hillsborough, CA SAT Math Tutors Hillsborough, CA Science Tutors Hillsborough, CA Statistics Tutors Hillsborough, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/Hillsborough_CA_Science_tutors.php","timestamp":"2014-04-19T17:45:16Z","content_type":null,"content_length":"24536","record_id":"<urn:uuid:f53c2609-e092-4dc9-a1df-cd8d7b67ea90>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
The n-Category Café Last Person Standing Posted by David Corfield Tim Gowers is engaged in a new venture in open source mathematics. As one might expect from a leading representative of the ‘problem-solving’ culture, Gowers has proposed a blog-based group problem solving challenge. He motivates his choice of problem thus: Does the problem split naturally into subtasks? That is, is it parallelizable? I’m actually not completely sure that that’s what I’m aiming for. A massively parallelizable project would be something more like the classification of finite simple groups, where one or two people directed the project and parcelled out lots of different tasks to lots of different people, who go off and work individually. But I’m interested in the question of whether it is possible for lots of people to solve one single problem rather than lots of people to solve one problem each. Coincidently, Alexandre Borovik, my colleague on the A Dialogue on Infinity project, came to Kent on Wednesday and spoke about the classification of finite simple groups in his talk ‘Philosophy of Mathematics as Seen by a Mathematician’. Alexandre expressed his fears that parts of mathematics may be becoming to complicated for us. He considered the classification of finite simple groups, whose proof is spread over 15000 pages. There is a project afoot to reduce this to a ‘mere’ 12 volumes of around 250 pages each, so 3000 pages. But these will be extremely dense pages, not at all the kind of thing people will find it easy to learn from. So what happens when the generation of mathematicians involved in the classification project retires? Apparently, Inna Capdeboscq is the youngest person out of the few people who understand things thoroughly enough to repair any discovered breaches in the proof. Alexandre estimated that in twenty years she will be the last such non-retired mathematician in the world. He doubts any youngsters will take on the baton. Now is this a (belated) bout of turn of the century pessimism, or is there some intrinsic complexity in the classification which cannot be compressed? Do we have examples of long proofs from earlier times which have been enormously simplified, not just through redefinition? On a positive note, Alexandre mentioned something we’ve discussed before at the Café, that a radical rethink of this area may be possible. Recall the suggestion that some of the sporadic simple groups are flukes, and should rather be seen as belonging to a larger family, some of whose members ‘happen’ to be groups. Monstrous moonshine is associated with many of the sporadics, and there is evidence for the participation of some of those thought to be outside its scope – the pariahs. All this ties in with a fantastic amount of mathematics, so just possibly new ideas will emerge to allow for the classification of the larger family. However, I had the impression from Alexandre that none of this will help enormously simplify the original classification. So I wonder what would be the effect on future mathematicians using the result if there was nobody left alive who understood it fully. Would it be used with misgiving? Posted at February 2, 2009 9:46 AM UTC Re: Last Person Standing I was extremely intrigued by Freek Wiedijk’s article on the computerization of formal proof-checking in the December 2008 issue of the Notices, in which he suggests that In a few decades it will no longer take one week to formalize a page from an undergraduate textbook. Then that time will have dropped to a few hours. Also then the formalization will be quite close to what one finds in such a textbook. When this happens we will see a quantum leap, and suddenly all mathematicians will start using formalization for their proofs. When the part of refereeing a mathematical article that consists of checking its correctness takes more time than formalizing the contents of the paper would take, referees will insist on getting a formalized version before they want to look at a paper. Perhaps this may provide an answer? Certainly in the particular case of finite simple groups, the timing seems unlikely; it would probably require some youngsters to take on the massive project of understanding and formalizing the proof even once formalization becomes tractable. But at least in principle, having a long proof completely formalized could enable us to continue using it with confidence even after no living human fully understands it any more. Posted by: Mike Shulman on February 2, 2009 6:25 PM | Permalink | Reply to this Re: Last Person Standing … even after no living human fully understands it any more. This leads into Doron Zeilberger's opinion: that any mathematics simple enough to be understood by mere humans will be an utterly trivial part of the mathematics understood by computers in less than a century. Although I don't know Zeilberger's opinion of the classification of finite simple groups, I suppose he might consider it the far edge of what humans can grasp and be only too glad that we have already realised that it's not worth our time to keep it fully understand by humans. Posted by: Toby Bartels on February 6, 2009 9:49 PM | Permalink | Reply to this Re: Last Person Standing My worry with the classification is not so much the reliability of the proof, since any errors are like not to mess up anything too significant. (Which is to say, if we’re missing a simple group we’re probably only missing one, and it probably has all the same properties as the others. Fixing any results that depend on classification won’t require understanding the proof so much as understanding the new example.) What worries me is more sociological. How did this project virtually kill off finite group theory as a topic for young researchers, and how can other fields avoid this fate? A research program that 60 years later finds itself with no young researchers who understand it is very sad. Mathematicians who did so much great work deserve the immortality of having their work live on, and in this case it looks like they may not. Posted by: Noah Snyder on February 2, 2009 7:02 PM | Permalink | Reply to this Re: Last Person Standing I think a corollary to the sociological problem that Noah mentioned is the potential loss of interesting ideas and thought processes that are captured in this body of work. I’m speculating wildly here, but some of these may be useful in broader stretches of mathematics, and without experts to navigate the literature, they become much less available. It reminds me of a theory I had heard about the lack of mathematical progress in the Roman empire - many people had access to the literature, but there was no research community to help explicate it. Posted by: Scott Carnahan on February 2, 2009 10:55 PM | Permalink | Reply to this Re: Last Person Standing While we’re talking about finite groups, anyone know a readable exposition of the odd order theorem? I tried looking at the original paper once and it was very intimidating. Posted by: Noah Snyder on February 2, 2009 7:06 PM | Permalink | Reply to this Re: Last Person Standing Some Wiki articles are well-informed. This article traces group theory papers from 17 pages, to 255 pages, to over 1000 pages. —- “The simplified proof has been published in two books: (Bender and Glauberman 1995) and (Peterfalvi 2000). This simplified proof is still very hard, and is about the same length as the original proof (but is written in a more leisurely style). (Gonthier et al. 2006) have begun a long-term project to produce a computer verified formal proof of the theorem… It takes a professional group theorist about a year of hard work to understand the proof Posted by: Stephen Harris on February 2, 2009 9:14 PM | Permalink | Reply to this Re: Last Person Standing Are there any signs of conceptual connections to other branches? Does the local analysis of the group theorists have anything do do with other forms of localization in mathematics. Is there anything natural about quasithin-ness? It is always possible that Atiyah was right: So I don’t think it makes much difference to mathematics to know that there are different kinds of simple groups or not. It is a nice intellectual endpoint, but I don’t think it has any fundamental importance. Though he did later say: FINITE GROUPS. This brings us to finite groups, and that reminds me: the classification of finite simple groups is something where I have to make an admission. Some years ago I was interviewed, when the finite simple group story was just about finished, and I was asked what I thought about it. I was rash enough to say I did not think it was so important. My reason was that the classification of finite simple groups told us that most simple groups were the ones we knew, and there was a list of a few exceptions. In some sense that closed the field, it did not open things up. When things get closed down instead of getting opened up, I do not get so excited, but of course a lot of my friends who work in this area were very, very cross. I had to wear a sort of bulletproof vest after that! There is one saving grace. I did actually make the point that in the list of the so-called “sporadic groups”, the biggest was given the name of the “Monster”. I think the discovery of this Monster alone is the most exciting output of the classification. It turns out that the Monster is an extremely interesting animal and it is still being understood now. It has unexpected connections with large parts of other parts of mathematics, with elliptic modular functions, and even with theoretical physics and quantum field theory. This was an interesting by-product of the classification. Classifications by themselves, as I say, close the door; but the Monster opened up a door. Posted by: David Corfield on February 3, 2009 9:34 AM | Permalink | Reply to this Re: Last Person Standing If young mathematicians aren’t learning the proof techniques behind the classification of finite simple groups, maybe it’s not because these techniques are too hard. Maybe it’s because they don’t see interesting new results that can be proved using these techniques. Wiles’ proof of Fermat’s Last Theorem was hard, and so was Perelman’s proof of the Poincaré conjecture — but those didn’t scare off the youngsters. In the first case, there were bigger problems sitting nearby, still left to tackle: for starters, the Taniyama–Shimura–Weil conjecture, now called the Modularity Theorem because it’s been proved by a group of mathematicians including Wiles’ student Richard Taylor, who helped fill a hole in Wiles’ proof. In the second case, Perelman didn’t fill in the details of an important step, his ‘Theorem 7.4’. Three groups jumped in to give different proofs of this, completing his proof not just of the Poincaré conjecture but of Thurston’s Geometrization Conjecture. I don’t understand the remaining open problems to which the ideas developed by Wiles and Perelman apply. But I’m pretty sure they exist! For example, see this Ricci flow conference held in Paris last summer, or this conference on workshop on modularity held at MSRI in the fall of 2006. So what about finite simple groups? As David notes, there are big open problems sitting next to the classification of finite simple groups: we need to more deeply understand Monstrous Moonshine… and Moonshine Beyond the Monster. Let’s hope that these problems eventually force people to revisit the classification theorem, and either find a simpler proof, or find ways to use the existing proof techniques to do new and interesting things. Posted by: John Baez on February 4, 2009 2:38 AM | Permalink | Reply to this Re: Last Person Standing Is there evidence that the sporadics and non-sporadics are very different beasts? Moonshine would suggest so, unless there’s an equivalent of moonshine for non-sporadics, perhaps known but not thought as such. Then there’s the odd property of the large Schur multiplier of PSL(3, 4), and the thought this is an indication that something sporadic in nature happened to fall into a non-sporadic family. Hmm, so does PSL(3, 4) have moonshine? Posted by: David Corfield on February 4, 2009 9:49 AM | Permalink | Reply to this Aschbacher,arsmathematica, Corfield, Gelfand, Borcherds, Zeilberger, Peano, Chaitin, Mauchly, and hypertext, Re: Last Person Standing In Fall 2006 I discussed the following with MICHAEL ASCHBACHER (Shaler Arthur Hanisch Professor of Mathematics, Caltech; B.S., California Institute of Technology, 1966; Ph.D., University of Wisconsin, 1969): What Kind of Thing is a Sporadic Simple Group? September 24th, 2006 by Walt David Corfield discusses some speculation originally from Israel Gelfand: Sporadic simple groups are not groups, they are objects from a still unknown infinite family, some number of which happened to be groups, just by chance. (In David’s terminology, that means that sporadic finite simple groups are not a natural kind.) I used to believe this very same thing, so I find it interesting that others have speculated the same thing. A couple of years ago, though, I came across a remark by Michael Aschbacher that made me rethink my view: the classification of finite simple groups is primarily an asymptotic result. Every sufficiently large finite simple group is either cyclic, alternating, or a group of Lie type. Results that are true only for large enough parameter values are common enough that the existence of small-value counterexamples does not require special explanation. For example, the classification of simple modular Lie algebras looks completely different over small characteristics than it does over large characteristics. The best known results for number theoretic results such as Waring’s problem and Goldbach’s conjecture are asymptotic. Small numbers are just bad news. Later, richard borcherds Says: May 30th, 2007 at 1:32 pm Computers might be able to do real math eventually, but they still have a very long way to go. They are really good at certain restricted problems, such as running algorithms to evaluate special classes of sums and integrals (as in Zeilberger’s work) or checking lots of cases (as in the 4 color theorem or the Kepler conjecture) or even searching for proofs in very restricted first order theories, but none of these problems come anywhere near finding serious mathematical proofs in interesting theories such as Peano arithmetic. Rather than find proofs by themselves, computers might be quite good at finding formal proofs with human assistance, with a human guiding the direction of the proof (which computers are bad at), and the computer filling in tiresome routine details (which humans are bad at). This would be useful for something like the classification of finite simple groups, where the proof is so long that humans cannot reliably check it. In response to that, I spoke again with Prof. Aschbacher. Jonathan Vos Post Says: May 30th, 2007 at 2:43 pm Yes. I have discussed this recently with Michael Aschbacher, at Caltech where I got my Math degree, he being author of “The Status of the Classification of the Finite Simple Groups”, Notices of the American Mathematical Society, August 2004. They’ve apparently (I have to take their word for it) filled the gaps of the proof initiated so long ago, and when John H. Conway was on the team (I’d spoken to him then). Coincidently, I was at a Math tea at Caltech yesterday, joking about the 26 sporadic groups being a “kabbalistic coincidence” — or perhaps examples of some incompletely glimpsed set of more complicated mathematical objects which are forced to be Groups for reasons not yet clear. Some people deny that there are “coincidences” in Mathematics. Gregory Chaitin insists that Mathematics is filled with coincidences, so that most truths are true for no reason. That sounds like the beginning of a human-guided deep theorem-proving project to me. Humans supply gestalt intuition that we don’t know how to axiomatize or algorithmize. Humans did not stop playing Chess when Deep Blue became able to beat a world champion. The computer is crunching fast; the human looks deep. The human has the edge in Go, which takes deeper search. So, as I say, “yes.” I agree with you that we should each (human and machine) be doing what we’re best at, together. After all, that’s what the right-brain / left-brain hemisphere architecture does. When John Mauchley (and J. Presper Eckert) built the BINAC under top security for the USAF, delivered 1949, it was the first dual processor. Mauchley told me that the brain hemisphere structure had evolved, and was probably good for more than we knew. He and I were introduced by Ted Nelson, father of Hypertext, in 1973, while I was developing the first hypertext for PC’s (before Apple, IBM, and Tandy made PCs). We demoed our system at the world’s personal computer conference in Philadelphia, 1976. So the human-computer teamwork is something I’ve been working in for 40 years. Do you suspect that a human/computer partnership (Including you, of course) will get to the bottom of quantum field theory? Morally, all the towers of concepts in this thread are related, and experts have opined on how some combinations of future people and future computers may accomplish the true purpose of Mathematics: Until then, we are a school of multitentacled invertebrates in the ocean of theorems, blinded by a black cloud of our own ink, speculating on the ocean in which we are immersed. Posted by: Jonathan Vos Post on February 7, 2009 12:18 PM | Permalink | Reply to this I’m confused Jonathan: I found your comment confusing. In Fall 2006 I discussed the following with MICHAEL ASCHBACHER (…): This is followed by a quotation from post by Walt, the author of Ars Mathematica. (I’m not sure why you link to the blog but not to the particular blog post.) So I reckon this is the thing you discussed with Aschbacher. What did he say? Later in your comment, you say In response to that, I spoke again with Prof. Aschbacher. Jonathan Vos Post Says: May 30th, 2007 at 2:43 pm Yes. I have discussed this recently with Michael Aschbacher, at Caltech where I got my Math degree, he being author of “The Status of the Classification of the Finite Simple Groups”, Notices of the American Mathematical Society, August 2004. They’ve apparently (I have to take their word for it) filled the gaps of the proof initiated so long ago, and when John H. Conway was on the team (I’d spoken to him then). This looks like it might have been taken from an email to someone other than Michael Aschbacher, or a comment on another blog about classification of finite simple groups – it’s hard to tell. In any case, I don’t see anything from Aschbacher himself, or from Conway for that matter; it would have been interesting to hear what these experts might have to say on the topic being discussed in Last Person Standing. What I do see is (1) a kabbalistic joke, (2) a general thought of Chaitin’s, (3) Vos Post responding “yes” to something, I can’t tell precisely to what or to whom, (4) a remark by Mauchley from a private conversation, and, finally, (5) the question “Do you suspect that a human/computer partnership (Including you, of course) will get to the bottom of quantum field theory?” apparently addressed to someone, perhaps a blog author or email recipient, but I don’t know who it’s supposed to be. I don’t mind the philosophical ruminations as such, but what did Aschbacher actually say on either of the two occasions you mentioned? Posted by: Todd Trimble on February 7, 2009 2:01 PM | Permalink | Reply to this I was half asleep, sorry; Re: I’m confused Todd Trimble: I guess that I proved the lemma that when your dog wakes you up at 4:00 a.m., after 4 hours sleep, you should not try cutting and pasting from old emails cut and pasted from blogs. Aschbacher never emailed me. I told him of David Corfield’s remarks on Israel Gelfand, face-to-face. Or maybe gave him a printout. Aschbacher agreed with Gelfand’s speculation as posible, and added that there may be other kinds og Simple Groups that we’re just not smart of enough to have conceived. Then there was a blog comment to Borcherds, which I omitted, and his emailed reply. I’d sent him: May 4th, 2007 I am Richard Borcherds, a mathematician, currently trying to figure out what quantum field theory is. Laurens Gunnarsen Says: May 29th, 2007 at 5:36 pm May I ask what you think of so-called “experimental mathematics?” In particular, do you agree with Doron Zeilberger (c.f. the current issue of the MAA journal, FOCUS) that we should expect and welcome the advent of software that will relieve us of the “burden” of proving things? Oh, and once we’re relieved of this “burden,” will we really have very much left to do? # Jonathan Vos Post Says: May 30th, 2007 at 2:53 am Doron Zeilberger has provided work which is delightful and deep. But my background [takes off math hat briefly] as a Science Fiction author makes me see the human-machine future in a more nuanced way. I prefer the definition and examples of Experimental Mathematics by Jonathan Borwein et al. What is see is lovely, creative, and not sausage-making. It is, analogously to the space program, or a symphony orchestra, good teamwork between humans and machines. See the definitional material, examples, and editorial board of: I do not see computers through a Terminator lens. I prefer Utopia to Dystopia. Software, I hope, will not leave us “With Folded Hands.” [a reference to the famous Jack Williamson story of human incentives destroyed by over-helpful robots] Then the Gregory Chaitin remark came from a long conversation we’d had at the International Conference on Complex Systems, which was more about Leibnitz’s way of telling if one is in a “lawful universe” by counting the number of natural laws. Again, morally, everything in this n-Category Thread and everything that I mentioned are braided together. But I did a bad job of indicating the connections. I saw Aschbacher again last week, but we mostly talked about 100 people having just been laid off at Caltech, whether there would be a genuine hiring freeze for faculty and post-docs, and how little fiction authors are usually paid. And the ex-Combinatorist in Obama’s administration. Posted by: Jonathan Vos Post on February 7, 2009 4:48 PM | Permalink | Reply to this Re: Last Person Standing Aschbacher’s comment reminds me of a question I’ve been wondering about for a few years, ever since I started teaching group theory: How much shorter is the proof that there are only finitely many sporadic groups than the proof of the full classification theorem? Posted by: James on February 8, 2009 7:39 AM | Permalink | Reply to this Re: Last Person Standing I asked a group theorist. His reply: “Not shorter at all. We would like to be able to say much shorter but there is no way with present methods.” Posted by: James on February 10, 2009 9:36 PM | Permalink | Reply to this Re: Aschbacher,arsmathematica, Corfield, Gelfand, Borcherds, Zeilberger, Peano, Chaitin, Mauchly, and hypertext, Re: Last Person Standing Regarding coincidence in mathematics, I believe in that. Zeilberger, again, has a good opinion addressing it. I would define a ‘coincidence’ as a situation that has no simpler explanation than the bare facts (although our intuition may look for one). Thus coincidences can occur in objective circumstances like mathematics. (Ironically, I came to this definition reading the rantings of a paranoid schizophrenic explaining why nothing is a coincidence.) Posted by: Toby Bartels on February 12, 2009 12:01 AM | Permalink | Reply to this contingent beauty I’d like to hear Zeilberger distinguish between ‘contingent beauty’ and ‘contingent ugliness’. I assume he must have the latter concept. Surely there are ‘brute facts’ which are not pretty, do not fit into any pattern, and do not have an interesting explanation. Mind you his threshold is quite low. Is it really a beautiful fact that in e decimal digits 3-6 are repeated in digits 7-10? e = 2.718281828459… If we had evolved with 8 fingers, there’d be very little chance we’d count it as beautiful. Posted by: David Corfield on February 13, 2009 8:39 AM | Permalink | Reply to this Re: contingent beauty I’m not sure there’s a contingent ugliness implied by a contingent beauty, but rather the concepts are ‘connected beauty’ and ‘contingent beauty’? Things that don’t exhibit any quality which we ascribe as beautiful don’t seem to be divided into those things that have incredibly horribly convoluted connections and those that don’t have any connections of any kind at all (which is probably impossible anyway). Inicidentally, there’s another way of looking at the coincidences situation: for situations that are deeply connected one could “search intelligently” by modifying bits of the reasoning to expand the set of things that are connected and hence discover new relationships (although obviously they can also be discovered by chance observation and later analysed). If there are lots of beautiful (and potentially useful) relationships that are purely coincidental then the only way to discover them is by someone for some reason generating enough of both of the things that someone sees they look to be “beautifully related”. For some reason I find that thought mildly depressing. Posted by: bane on February 13, 2009 1:55 PM | Permalink | Reply to this Re: contingent beauty If not a ‘contingent ugliness’ there must be a ‘contingent non-beauty’, otherwise what is the ‘beauty’ bit doing? Why not just say ‘contingent’ contrasted with ‘connected’? (I still like my Zeilberger almost seems to find the contingent more beautiful than the related. But surely we have to stop somewhere. That the seventeenth and eighteenth decimal digits of $e$ and $\pi$ are identical is surely not beautiful, though this two digit matching happens earlier than expected. So what makes for the beautiful aspect of contingent beauty? Posted by: David Corfield on February 16, 2009 11:43 AM | Permalink | Reply to this Re: contingent beauty I expect Zeilberger used the $\dots18281828\dots$ example because most people would agree that that at least is just a coincidence. Then he can say that Ramanujan's theorem mentioned next to it may be similarly just a coincidence. But yeah, this does beg the question of what makes a coincidence (just or otherwise) beautiful. Perhaps beauty is whatever our intuition expects an explanation for (rather subjective, as beauty is often thought to be), so contingent beauty is that which we expect to have an explanation but which is really just a coincidence. Posted by: Toby Bartels on February 16, 2009 11:25 PM | Permalink | Reply to this Einstein’s theological take on this; Re: contingent beauty Am I the only one here who watched “House” tonight and heard the quote (which I’ve verified)?: “Coincidence is God’s way of remaining anonymous.” – Albert Einstein [The World As I See It]. That leads to the (to me) annoying part near the end of the novel (not the film) “Contact” by Carl Sagan, where the graphic of an enormous circle is found hidden in the digits of pi, and this is cited as evidence of divinity? Annoyed me because Sagan was playing games with us, after a previously reasonable debate between Science and Faith. The film had parts which annoyed me too, and my wife, but were the favorite parts of an Irish Catholic friend of mine. Well, these discussions on beauty, creation, contingency, and Math are not likely to reach a perfect consensus. But fun! Posted by: Jonathan Vos Post on February 17, 2009 6:23 AM | Permalink | Reply to this Re: contingent beauty …so contingent beauty is that which we expect to have an explanation but which is really just a coincidence. I think you’re on to something with that. Maybe if I thought $e$’s digit repetition could have an explanation I would find it prettier. Changing ideas of what’s plausibly explicable should affect aesthetics then. On the other hand, there’s a similar case where we do have something of an explanation – the repetition of the first two pairs of digits in $\sqrt{2} = 1.41421356...,$ which could be put down to $\sqrt{2} = 7/5 \times (1 + 1/49)^{1/2}.$ Not a terribly beautiful explanation I admit, twice 49 being close enough to 100. André Joyal’s example of a miracle, an extreme case of an unexplained event which one would expect to be explicable, was that the algebraic closure of $\mathbb{R}$ is an extension of such small Posted by: David Corfield on February 17, 2009 9:36 AM | Permalink | Reply to this A prime is a prime is a prime; Re: contingent beauty I may be mangling Prof. Gregory Benford’s aphorism through faulty memory, but a near paraphrase is: “Don’t rely on rules of thumb, when other intelligent beings may have a different number of thumbs.” There are never-ending argument on the Web between people who think they’ve found deep truths in something that others dismiss as mere artifacts of writing numbers base 10. Things true in ANY base are more likely to matter. Once, after some slight of hand I showed my smarter-than-me Physics professor wife, she asked “is that true if the primes are in some other base?” Then we both fell silent, wondering who would be the first to say (it doesn’t matter; a prime’s a prime.” But why limit ourselves to conventional bases? Knuth promotes base -3. Factorial bases are nice. Natural log is more natural than log base 10, isn’t it? I recently added something to the OEIS: A155967 Binary transpose primes. Integers of k^2 bits which, when written row by row as a square matrix and then read column by column, are primes once transformed. I could not tell if I’d found something in a “sweet spot” at the shallow end of the pool: elementary, original, nontrivial, or whether nobody cares because binary as contingent. But then R. J. Mathar wrote a nice little MAPLE program and extended my list of examples, and so I wonder. This could have been dreamed up any time in the past couple of centuries. Have I found an interesting transformation, or tripped on a random lump of rock? Yes, Zeilberger’s essay is a gem. But the epistemolocial and ontological conundra of Mathematics are arguable enough already, so that when Aesthetics is added to the mix, anything can happen! Posted by: Jonathan Vos Post on February 14, 2009 3:44 AM | Permalink | Reply to this Re: Aschbacher,arsmathematica, Corfield, Gelfand, Borcherds, Zeilberger, Peano, Chaitin, Mauchly, and hypertext, Re: Last Person Standing Toby wrote: I would define a ‘coincidence’ as a situation that has no simpler explanation than the bare facts (although our intuition may look for one). That sounds related to the Kolmogorov-Chaitin definition of a bit string of length $L$ as ‘algorithmically random’ if it cannot be printed by a program written in binary code with length less than $L - k$ for some constant $k$. Everyone points out that the constant $k$ depends on the programming language — but it also depends on how much of a paranoid you are: the more paranoid, the smaller your $k$. If your $k$ is negative, you’re really in trouble, because you think nothing is random: you’ll even be satisfied with an explanation more complicated than the facts to be explained. There’s some danger in having too large a value of $k$, too, but people don’t talk about that as much. Posted by: John Baez on February 15, 2009 1:42 AM | Permalink | Reply to this The APALing Woody Allen; Re: Aschbacher,arsmathematica, Corfield, Gelfand, Borcherds, Zeilberger, Peano, Chaitin, Mauchly, and hypertext, Re: Last Person Standing The question is not “Are you paranoid?” but “Are you paranoid ENOUGH?” I agree that Toby’s definition “sounds related to the Kolmogorov-Chaitin definition of a bit string of length L as ‘algorithmically random’ if it cannot be printed by a program written in binary code with length less than L−k for some constant k.” This opens the door to epistemological (“what do we know, how do we know it, and do we know that we know it) and ontological (does this pattern exist in the real world, or just in my mind) applications of the cluster of theories in “Advances in Minimum Description Length: Theory and Applications” by Peter D. Grünwald, In Jae Myung, Mark A. Pitt, 2005, 444 pages. “The book concludes with examples of how to apply MDL in research settings that range from bioinformatics and machine learning to psychology.” I think that the mathematical philosophers in the n-Catgory Cafe are going a levl deeper into foundational abstraction than this book. And, if we think about coincidences between coincidences, an infinite number of levels in the limit. Woody Allen joked that the best treatment for the paranoid is to hire people to follow him around, because now he is by definition non-delusional, and thus cured. Papers such as “Paranoia and Narcissism in Psychoanalytic Theory: Contributions of Self Psychology to the Theory and Therapy of the Paranoid Disorders” by Thomas A. Aronson, M.D. support the conjecture that a certain minimum degree of paranoia is essential to the development of the child’s identity, in partitioning the self from the parents, perhaps beginning when the child realizes that the parents sometimes lie. The child develops a “theory of mind.” My work for at least a half-dozen years with Professor Phil Fellman on Mathematical Disinformation Theory probes this reality of multiple agents with complex motives who are not just unreliable but actively machiavellianly giving the most misleading signals possible. I think that Fitch’s Paradox of Knowability can be resolved by the next step beyond Arbitrary Public Announcement Logic (APAL) Posted by: Jonathan Vos Post on February 15, 2009 6:09 PM | Permalink | Reply to this fixed URL; Re: The APALing Woody Allen; Re: Aschbacher,arsmathematica, Corfield, Gelfand, Borcherds, Zeilberger, Peano, Chaitin, Mauchly, and hypertext, Re: Last Person Standing [sorry, I screwed up the last sentence and its link]: I think that Fitch’s Paradox of Knowability can be resolved by the next step beyond Arbitrary Public Announcement Logic (APAL) a dynamic logic that extends epistemic logic – if we add paranoia by agents as to levels of distrust of what’s said by other agents. Posted by: Jonathan Vos Post on February 15, 2009 7:20 PM | Permalink | Reply to this 3rd try to get URL right;Re: fixed URL; Re: The APALing Woody Allen; Re: Aschbacher,arsmathematica, Corfield, Gelfand, Borcherds, Zeilberger, Peano, Chaitin, Mauchly, and hypertext, Re: Last Person 3rd time’s the charm? Undecidability for Arbitrary Public Announcement Logic by Tim French 1 Hans van Ditmarsch 2 1School of Computer Science and Software Engineering, University of Western Australia 2Computer Science, University of Otago, and IRIT, France Arbitrary Public Announcement Logic (APAL) is a dynamic logic that extends epistemic logic with a public announcement operator to represent the update corresponding to a public announcement; an arbitrary announcement operator that quantifies over APAL was introduced by Balbiani, Baltag, van Ditmarsch, Herzig, Hoshi and de Lima in 2007 (TARK) as an extension of public announcement logic. An journal version (‘Knowable’ as ‘known after an announcement’) is forthcoming [JVP: Out as hardcopy now] in the Review of Symbolic Logic…. The link I gave is an extension that differs from mine; which (the one linked to, in 36 powerpoint pages) summarizes and concludes: 1 action model execution is a refinement 2 decidable (and for extensions too) 3 expressivity known (via encoding to bisimulation quantified logics, roughly comparable with mu-calculus) 4 complexity open 5 axiomatization open and hard (in quantifiying over a more general set of announcements we sacrifice the witnessing formulas that were used in the APAL axiomatization) Posted by: Jonathan Vos Post on February 15, 2009 7:46 PM | Permalink | Reply to this Re: Last Person Standing Do we have examples of long proofs from earlier times which have been enormously simplified, not just through redefinition? What about Gödel’s theorems? I don’t know what form his original proof of the Completeness theorem took, but surely Henkin’s version of the proof is at least conceptually much simpler, and gives the Löwenheim-Skolem theorem as a trivial consequence. And for the Incompleteness theorem, haven’t the results of Turing and others resulted in a serious simplification of the proof, even if not a And haven’t there also been serious improvements on the Erdös/Selberg elementary proofs of the prime number theorem in the past 50 years? Posted by: Kenny Easwaran on February 10, 2009 9:29 PM | Permalink | Reply to this Re: Last Person Standing From an interview with Atiyah: Has the time passed when deep and important theorems in mathematics can be given short proofs? In the past, there are many such examples, e.g., Abel’s one-page proof of the addition theorem of algebraic differentials or Goursat’s proof of Cauchy’s integral theorem. ATIYAH I do not think that at all! Of course, that depends on what foundations you are allowed to start from. If we have to start from the axioms of mathematics, then every proof will be very long. The common framework at any given time is constantly advancing; we are already at a high platform. If we are allowed to start within that framework, then at every stage there are short One example from my own life is this famous problem about vector fields on spheres solved by Frank Adams where the proof took many hundreds of pages. One day I discovered how to write a proof on a postcard. I sent it over to Frank Adams and we wrote a little paper which then would fit on a bigger postcard. But of course that used some K-theory; not that complicated in itself. You are always building on a higher platform; you have always got more tools at your disposal that are part of the lingua franca which you can use. In the old days you had a smaller base: If you make a simple proof nowadays, then you are allowed to assume that people know what group theory is, you are allowed to talk about Hilbert space. Hilbert space took a long time to develop, so we have got a much bigger vocabulary, and with that we can write more poetry. Posted by: David Corfield on February 16, 2009 4:44 PM | Permalink | Reply to this Tao on Lax as Miraculous; Re: Last Person Standing In Bulletin of the AMS, Vol.46, No.1, Jan 2009, p.10, of Terry Taos’s wonderful survey “Why Are Solitons Stable?” he says of the inverse scattering approach: “This is a vast subject that can be viewed from many different algebraic and geometric perspectives; we shall content ourselves with describing the approach based on Lax pairs, which has the advantage of simplicity, provided that one is willing to accept a rather miraculous algebraic identity….” So, beauty from something that looks at first like a weird coincidence, which on further analysis is so deep that it appears a miracle, even to a genius such as Tao! Surely this matters very much, both in the Physics and the Mathematics persepctives. Posted by: Jonathan Vos Post on February 19, 2009 5:40 PM | Permalink | Reply to this
{"url":"http://golem.ph.utexas.edu/category/2009/02/last_person_standing.html","timestamp":"2014-04-16T07:57:02Z","content_type":null,"content_length":"78232","record_id":"<urn:uuid:b372713d-dc2f-444c-9820-44e56d7a6da9>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: [HM] RE: HISTORIA MATEMATICA V6 #190 Replies: 0 [HM] RE: HISTORIA MATEMATICA V6 #190 Posted: Mar 6, 2006 11:04 AM Dear James Landau: no, of course Bourbaki was not the first to notice the need. Probably the first relevant instance was in Dedekind, who was of the opinion that "nothing is more dangerous in mathematics, than to accept existences without sufficient proof" (letter to Lipschitz, 1876; see also his famous letter to Keferstein, 1890). He realized the need for a proposition securing the existence of an infinite set (proposition no. 66 in his booklet -- not no. 6, is this a mistake in Suppes?), the big difference with us is that he thought he could prove it by purely logical means. Dedekind seems to have been motivated by Bolzano's "Paradoxien des Unendlichen" (1851) in giving his proof, and probably also in realizing the need for it. Hilbert, Russell, and many others seem to have accepted Dedekind's proof. But after the publication of the Russell paradox in 1903, widely publicized in the books of Russell and Frege, criticism of Dedekind began to appear. One of the first instances is in Hilbert's paper 'On the foundations of logic and arithmetic', published in 1905 (included in van Heijenoort, "From Frege to Gödel", 1967). Subsequently, the proposition was transformed into an axiom by Hilbert's colleague Ernst Zermelo. In Zermelo's axiomatization paper of 1908 (also in van Heijenoort), he calls it "Dedekind's axiom" as a way of acknowledging the importance of that precedent. The axiom systems for set theory became popular during the 1920s and 1930s. Also within other systems usual at the time (type theory), people adopted explicitly an axiom of infinity -- see e.g. the famous papers on mathematical logic by Gödel (1931) and Tarski (1933 and 1935). There were also interesting philosophical discussions about this topic, e.g. by Ramsey. By the time of Bourbaki's presentation, it was only too well known that one needed this axiom. Best wishes to all, and peace on earth, Jose Ferreiros Several weeks ago Alexander Zenkin (alexzen@com2com.ru), in a post to HM which unfortunately I did not keep, said that Bourbaki stated the need for Axiom of Infinty: There exists an infinite set Was this argument first made by Bourbaki? In Patrick Suppes _Axiomatic Set Theory_ 2nd (?) edition New York: Dover The attempt to prove the existence of an infinite set of objects has a rather bizarre and sometimes tortured history. Proposition No. 6 of Dedekind's famous Was sind und was sollen die Zahlen?, first published in 1888, asserts that there is an infinite system. (Dedekind's systems correspond to our sets.) [footnote] A similar argument is to be found in Bolzano [Paradoxien des Unendlichen, Liepsiz 1851] section 13 <end quote>
{"url":"http://mathforum.org/kb/thread.jspa?threadID=1346745","timestamp":"2014-04-19T05:03:31Z","content_type":null,"content_length":"16328","record_id":"<urn:uuid:4a404181-c96d-420d-8202-46a4824fdee5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
how to calculate knee point voltage of current transformer? Question how to calculate knee point voltage of current transformer? Question Submitted By :: Dilip I also faced this Question!! Rank Answer Posted By Re: how to calculate knee point voltage of current transformer? Answer 10 % increase in voltage gives you 50 % increase in 0 Rajendra Soni # 1 excitation current is called knee point voltage. To measure this first demagnetise the CT and apply voltage gradually from secondary keeping primary winding open circuited. while doing this above phenomeneo will be obsesrved. Is This Answer Correct ? 126 Yes 21 No Re: how to calculate knee point voltage of current transformer? Answer to calculate knee point voltage of current transformer by 0 Saleh Abdulsalam Trabelsi # 2 formal Vknee point = Isecandary(Rct+ Rcable+ Rburden) Rct= resistenc of current transformer Rcable =resistenc of cable =2Rcable Rburden =burden of relay IF Vknee point< manufacture voltage of current transformer current transformer is good IF Vknee point> manufacture voltage of current transformer current transformer must be change Is This Answer Correct ? 64 Yes 37 No Re: how to calculate knee point voltage of current transformer? Answer 10 % increase in voltage gives you nearly 50 % increase in 0 Balamurugan R # 3 excitation current is called knee point voltage or breakdown voltage. To measure this first demagnetise the CT and apply voltage gradually(like 10% or 20% step) from secondary keeping primary should be open circuited. Is This Answer Correct ? 40 Yes 10 No Re: how to calculate knee point voltage of current transformer? Answer For calculating knee point voltage there are two methods, 0 S.arunkumar # 4 they are 1.Low method 2.High method 1.low impedance method 2.High impedance method Then, Rs=If/Is(Rct+Rl) Is,In-Rated Sec. Current of CT Rct-CT resistance Rl-Max. one way lead reistance from CT to Relay Is This Answer Correct ? 9 Yes 21 No Re: how to calculate knee point voltage of current transformer? Answer The knee-point voltage will depend on the size and class of 0 Pankaj # 5 the CT, and it can be very high on some CTs (several hundreds of volts) Be very cautious, you will be working on live equipment during the test. Now you can just plot the current and voltage values on a graph. This is the mag-curve of the CT. IEC classify the knee point voltage as the point where a 10% increase in voltage results in a 50% increase in current. Is This Answer Correct ? 11 Yes 2 No Re: how to calculate knee point voltage of current transformer? Answer I am not familiar with the term saturation test but it seems 0 Pankaj Joshi # 6 from the above posts like an exciting current test to obtain the knee-point of the CT. With this test you can also obtain the exciting curve (mag-curve) of the CT. You'll need a variable voltage, (easiest obtain from a variac - must be able to obtain the suitable voltage) a voltmeter, an ammeter, a pencil and paper.You'll have to measure the voltage and the current while you do the test. Apply the voltage to the secondary winding with the primary and other windings being open-circuited. Increase the voltage until you reach a point where a small increase in voltage results in a big increase in current. Now slowly decrease the voltage to a few measuring points, while you measure the current. (write it down) Something like: 205V - 1.1A 200V - 0.6A 190V - 0.238A 180V - 0.149A 170V - 0.114A 150V - 0.08A 125V - 0.061A 100V - 0.049A 50V - 0.028V 25V - 0.017A 0V - 0A You have to decrease your voltage slowly to zero volt to demagnetize the CT-core. BEWARE The knee-point voltage will depend on the size and class of the CT, and it can be very high on some CTs (several hundreds of volts) Be very cautious, you will be working on live equipment during the test. Now you can just plot the current and voltage values on a graph. This is the mag-curve of the CT. IEC classify the knee point voltage as the point where a 10% increase in voltage results in a 50% increase in current. Is This Answer Correct ? 30 Yes 9 No Re: how to calculate knee point voltage of current transformer? Answer The knee-point voltage will depend on the size and class of 0 Narendra Solanki # 7 the CT, and it can be very high on some CTs Be very cautious, you will be working on live equipment during the test. Now you can just plot the current and voltage values on a graph. This is the mag-curve of the CT. IEC classify the knee point voltage as the point where a 10% increase in voltage results in a 50% increase in current. Is This Answer Correct ? 5 Yes 5 No Re: how to calculate knee point voltage of current transformer? Answer Is the answer given below is correct? 0 Sunny # 8 For calculating knee point voltage there are two methods, they are 1.Low method 2.High method 1.low impedance method 2.High impedance method Then, Rs=If/Is(Rct+Rl) Is,In-Rated Sec. Current of CT Rct-CT resistance Rl-Max. one way lead reistance from CT to Relay Is This Answer Correct ? 7 Yes 4 No Re: how to calculate knee point voltage of current transformer? Answer knee point Voltage = Burden x aLf 0 Gusu # 9 ------------- E.g:10VA 5P20 CT. To Get the Knee Point Voltage => 10VA x 20 --------- = 10 x 20 5P ------- = 40V Is This Answer Correct ? 21 Yes 44 No Re: how to calculate knee point voltage of current transformer? Answer Calculation: 0 Arvind Negi # 10 1) Fault Current at Primary Side of CT (kA) = MVA/*100(% Z*1.732*KLV) 7.55 2) Fault Current at Secondary Side of CT, If (A)= 3) Cable Lead Resistance, RL (Ohm)= 1.02 4) Knee Point Voltage Vk (Volt) = 2If(RCT+2RL) REMARKS : - Knee Point Voltage of CT provided by CGL for REF CT = 115V and for Differental CT = 200V are in order. Is This Answer Correct ? 6 Yes 4 No 12 >>
{"url":"http://www.allinterview.com/showanswers/56349.html","timestamp":"2014-04-19T14:46:51Z","content_type":null,"content_length":"52542","record_id":"<urn:uuid:9a6b4291-b52c-4537-9fe8-31ab344773eb>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
Fast Monte-Carlo Algorithms for Matrix Multiplication Geometric Network Analysis Tools Michael W. Mahoney Stanford University MMDS, June 2010 ( For more info, see: http:// cs.stanford.edu/people/mmahoney/ or Google on “Michael Mahoney”) Networks and networked data Lots of “networked” data!! Interaction graph model of • technological networks networks: – AS, power-grid, road networks • Nodes represent “entities” • biological networks • Edges represent “interaction” – food-web, protein networks between pairs of entities • social networks – collaboration networks, friendships • information networks – co-citation, blog cross-postings, advertiser-bidded phrase graphs... • language networks – semantic networks... • ... Micro-markets in sponsored search “keyword-advertiser graph” Goal: Find isolated markets/clusters with sufficient money/clicks with sufficient coherence. Ques: Is this even possible? What is the CTR/ROI of “sports 1.4 Million Advertisers 10 million keywords Question: Is this visualization evidence for the schematic on the left? What do these networks “look” like? Popular approaches to large network data Heavy-tails and power laws (at large size-scales): • extreme heterogeneity in local environments, e.g., as captured by degree distribution, and relatively unstructured otherwise • basis for preferential attachment models, optimization-based models, power-law random graphs, etc. Local clustering/structure (at small size-scales): • local environments of nodes have structure, e.g., captures with clustering coefficient, that is meaningfully “geometric” • basis for small world models that start with global “geometry” and add random edges to get small diameter and preserve local “geometry” Popular approaches to data more generally Use geometric data analysis tools: • Low-rank methods - very popular and flexible • “Kernel” and “manifold” methods - use other distances, e.g., diffusions or nearest neighbors, to find “curved” low- dimensional spaces These geometric data analysis tools: • View data as a point cloud in Rn, i.e., each of the m data points is a vector in Rn • Based on SVD*, a basic vector space structural result • Geometry gives a lot -- scalability, robustness, capacity control, basis for inference, etc. *perhaps in an implicitly-defined infinite-dimensional non-linearly transformed feature space Can these approaches be combined? These approaches are very different: • network is a single data point---not a collection of feature vectors drawn from a distribution, and not really a matrix • can’t easily let m or n (number of data points or features) go to infinity---so nearly every such theorem fails to apply Can associate matrix with a graph, vice versa, but: • often do more damage than good • questions asked tend to be very different • graphs are really combinatorial things* *But, graph geodesic distance is a metric, and metric embeddings give fast approximation algorithms in worst-case CS analysis! • Large networks and different perspectives on data • Approximation algorithms as “experimental probes” • Graph partitioning: good test case for different approaches to data • Geometric/statistical properties implicit in worst-case algorithms • An example of the theory • Local spectral graph partitioning as an optimization problem • Exploring data graphs locally: practice follows theory closely • An example of the practice • Local and global clustering structure in very large networks • Strong theory allows us to make very strong applied claims Graph partitioning A family of combinatorial optimization problems - want to partition a graph’s nodes into two sets s.t.: • Not much edge weight across the cut (cut quality) • Both sides contain a lot of nodes Several standard formulations: • Graph bisection (minimum cut with 50-50 balance) • -balanced bisection (minimum cut with 70-30 balance) • cutsize/min{|A|,|B|}, or cutsize/(|A||B|) (expansion) • cutsize/min{Vol(A),Vol(B)}, or cutsize/(Vol(A)Vol(B)) (conductance or N-Cuts) All of these formalizations are NP-hard! Later: size-resolved conductance: algs can have non-obvious size-dependent behavior! Why graph partitioning? Graph partitioning algorithms: • capture a qualitative notion of connectedness • well-studied problem, both in theory and practice • many machine learning and data analysis applications • good “hydrogen atom” to work through the method (since spectral and max flow methods embed in very different places) We really don’t care about exact solution to intractable problem: • output of approximation algs is not something we “settle for” • randomized/approximation algorithms give “better” answers than exact solution Exptl Tools: Probing Large Networks with Approximation Algorithms Idea: Use approximation algorithms for NP-hard graph partitioning problems as experimental probes of network structure. Spectral - (quadratic approx) - confuses “long paths” with “deep cuts” Multi-commodity flow - (log(n) approx) - difficulty with expanders SDP - (sqrt(log(n)) approx) - best in theory Metis - (multi-resolution for mesh-like graphs) - common in practice X+MQI - post-processing step on, e.g., Spectral of Metis Metis+MQI - best conductance (empirically) Local Spectral - connected and tighter sets (empirically, regularized communities!) We are not interested in partitions per se, but in probing network structure. Analogy: What does a protein look like? Three possible representations (all-atom; backbone; and solvent-accessible surface) of the three-dimensional structure of the protein triose phosphate Experimental Procedure: • Generate a bunch of output data by using the unseen object to filter a known input • Reconstruct the unseen object given the output signal and what we know about the artifactual properties of the input signal. • Large networks and different perspectives on data • Approximation algorithms as “experimental probes” • Graph partitioning: good test case for different approaches to data • Geometric/statistical properties implicit in worst-case algorithms • An example of the theory • Local spectral graph partitioning as an optimization problem • Exploring data graphs locally: practice follows theory closely • An example of the practice • Local and global clustering structure in very large networks • Strong theory allows us to make very strong applied claims Recall spectral graph partitioning • Relaxation of: The basic optimization • Solvable via the eigenvalue • Sweep cut of second eigenvector Local spectral partitioning ansatz Mahoney, Orecchia, and Vishnoi (2010) Primal program: Dual program: • Find a cut well-correlated with the Interpretation: seed vector s - geometric notion of • Embedding a combination of scaled correlation between cuts! complete graph Kn and complete • If s is a single node, this relaxes: graphs T and T (KT and KT) - where the latter encourage cuts near (T,T). Main results (1 of 2) Mahoney, Orecchia, and Vishnoi (2010) Theorem: If x* is an optimal solution to LocalSpectral, it is a GPPR* vector for parameter , and it can be computed as the solution to a set of linear equations. (1) Relax non-convex problem to convex SDP (2) Strong duality holds for this SDP (3) Solution to SDP is rank one (from comp. slack.) (4) Rank one solution is GPPR vector. **GPPR vectors generalize Personalized PageRank, e.g., with negative teleportation - think of it as a more flexible regularization tool to use to “probe” networks. Main results (2 of 2) Mahoney, Orecchia, and Vishnoi (2010) Theorem: If x* is optimal solution to LocalSpect(G,s,), one can find a cut of conductance 8(G,s,) in time O(n lg n) with sweep cut of x*. Upper bound, as usual from sweep cut & Cheeger. Theorem: Let s be seed vector and correlation parameter. For all sets of nodes T s.t. ’ :=<s,sT>D2 , we have: (T) (G,s,) if ’, and (T) (’/)(G,s,) if ’ . Lower bound: Spectral version of flow- improvement algs. Other “Local” Spectral and Flow and “Improvement” Methods Local spectral methods - provably-good local version of global spectral ST04: truncated”local” random walks to compute locally-biased cut ACL06/Chung08 : locally-biased PageRank vector/heat-kernel vector Flow improvement methods - Given a graph G and a partition, find a “nearby” cut that is of similar quality: GGT89: find min conductance subset of a “small” partition LR04,AL08: find “good” “nearby” cuts using flow-based methods Optimization ansatz ties these two together (but is not strongly local in the sense that computations depend on the size of the output). Illustration on small graphs • Similar results if we do local random walks, truncated PageRank, and heat kernel diffusions. • Often, it finds “worse” quality but “nicer” partitions than flow-improve methods. (Tradeoff we’ll see later.) Illustration with general seeds • Seed vector doesn’t need to correspond to cuts. • It could be any vector on the nodes, e.g., can find a cut “near” low- degree vertices with si = -(di-dav), i[n]. • Large networks and different perspectives on data • Approximation algorithms as “experimental probes” • Graph partitioning: good test case for different approaches to data • Geometric/statistical properties implicit in worst-case algorithms • An example of the theory • Local spectral graph partitioning as an optimization problem • Exploring data graphs locally: practice follows theory closely • An example of the practice • Local and global clustering structure in very large networks • Strong theory allows us to make very strong applied claims Conductance, Communities, and NCPPs Let A be the adjacency matrix of G=(V,E). The conductance of a set S of nodes is: The Network Community Profile (NCP) Plot of the graph is: Since algorithms often have non-obvious size- dependent behavior. Just as conductance captures the “gestalt” notion of cluster/community quality, the NCP plot measures cluster/community quality as a function of size. NCP is intractable to compute --> use approximation algorithms! Widely-studied small social networks Zachary’s karate club Newman’s Network Science “Low-dimensional” graphs (and expanders) d-dimensional meshes RoadNet-CA NCPP for common generative models Preferential Attachment Copying Model RB Hierarchical Geometric PA Large Social and Information Networks Typical example of our findings Leskovec, Lang, Dasgupta, and Mahoney (WWW 2008 & arXiv 2008) General relativity collaboration network (4,158 nodes, 13,422 edges) Community score Community size 27 Large Social and Information Networks Leskovec, Lang, Dasgupta, and Mahoney (WWW 2008 & arXiv 2008 & WWW 2010) LiveJournal Epinions Focus on the red curves (local spectral algorithm) - blue (Metis+Flow), green (Bag of whiskers), and black (randomly rewired network) for consistency and cross-validation. Other clustering methods Leskovec, Lang, Dasgupta, and Mahoney (WWW 2008 & arXiv 2008 & WWW 2010) LRao conn Lrao disconn Lower and upper bounds Lower bounds on conductance can be computed from: Spectral embedding (independent of balance) SDP-based methods (for volume-balanced partitions) Algorithms find clusters close to theoretical lower bounds 12 clustering objective functions* Leskovec, Lang, Dasgupta, and Mahoney (WWW 2008 & arXiv 2008 & WWW 2010) Clustering objectives: S Modularity: m-E(m) (Volume minus correction) Modularity Ratio: m-E(m) Volume: u d(u)=2m+c Edges cut: c Multi-criterion: n: nodes in S Conductance: c/(2m+c) (SA to Volume) m: edges in S Expansion: c/n c: edges pointing Density: 1-m/n2 CutRatio: c/n(N-n) outside S Normalized Cut: c/(2m+c) + c/2(M-m)+c Max ODF: max frac. of edges of a node pointing outside S Average-ODF: avg. frac. of edges of a node pointing outside Flake-ODF: frac. of nodes with mode than ½ edges inside *Many of hese typically come with a weaker theoretical understanding than conductance, but are similar/different in known ways for practitioners. 31 Multi-criterion objectives Leskovec, Lang, Dasgupta, and Mahoney (WWW 2008 & arXiv 2008 & WWW 2010) Qualitatively similar to conductance Expansion, NCut, Cut- ratio and Avg-ODF are Max-ODF prefers smaller clusters Flake-ODF prefers larger clusters Internal density is bad Cut-ratio has high Single-criterion objectives All measures are monotonic (for rather trivial reasons) prefers large clusters Ignores small clusters Because it basically captures Volume! Regularized and non-regularized communities (1 of 2) Conductance of bounding cut Diameter of the cluster Local Spectral External/internal conductance Lower is good • Metis+MQI (red) gives sets with better conductance. • Local Spectral (blue) gives tighter and more well-rounded sets. • Regularization is implicit in the steps of approximation algorithm. Regularized and non-regularized communities (2 of 2) Two ca. 500 node communities from Local Spectral Algorithm: Two ca. 500 node communities from Metis+MQI: Small versus Large Networks Leskovec, et al. (arXiv 2009); Mahdian-Xu 2007 Small and large networks are very different: “low-dimensional” core-periphery (also, an expander) E.g., fit these networks to Stochastic Kronecker Graph with “base” K=[a b; b c]: K1 = Relationship b/w small-scale structure and large-scale structure in social/information networks is not reproduced (even qualitatively) by popular models • This relationship governs many things: diffusion of information; routing and decentralized search; dynamic properties; etc., etc., etc. • This relationship also governs (implicitly) the applicability of nearly every common data analysis tool in these applications • Local structures are locally “linear” or meaningfully-Euclidean -- do not propagate to more expander-like or hyperbolic global size-scales • Good large “communities” (as usually conceptualized i.t.o. inter- versus intra- connectivity) don’t really exist Approximation algorithms as “experimental probes”: • Geometric and statistical properties implicit in worst-case approximation algorithms - based on very strong theory • Graph partitioning is good “hydrogen atom” - for understanding algorithmic versus statistical perspectives more generally Applications to network data: • Local-to-global properties not even qualitatively correct in existing models, graphs used for validation, intuition, etc. • Informatics graphs are good “hydrogen atom” for development of geometric network analysis tools more generally
{"url":"http://www.docstoc.com/docs/132825909/Fast-Monte-Carlo-Algorithms-for-Matrix-Multiplication","timestamp":"2014-04-16T19:23:03Z","content_type":null,"content_length":"72403","record_id":"<urn:uuid:c578c38b-7bd0-4c7e-ac5c-16b860d6cfd8>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometry Function Derivative May 8th 2008, 05:57 AM #1 Trigonometry Function Derivative Q: $f(x) = \mathrm{arcsin} x, \ -1 \le x \le 1$. (a)Evaluate $f \left( - \frac{1}{2} \right)$. (b) Find an equation of the tangent to the curve with equation $y=f(x)$ at the point where $x = \frac{1}{\sqrt{2}}$. I've done part (a) but struggling with part (b). May I have some help please. The equation of the tangent to the curve at a point $(a,\,f(a))$ is $y=f(a)+(x-a)\cdot f'(a)$. In your case, it becomes $y=f\left(\frac{1}{\sqrt{2}}\right)+\left(x-\frac{1}{\sqrt{2}}\right)\cdot But...I still don't get it very well. I understand that the coordinates to consider is $\left( \frac{1}{\sqrt{2}}, \frac{\pi}{2} \right)$. Can someone work out the gradient? I think I might have got that wrong. Q: $f(x) = \mathrm{arcsin} x, \ -1 \le x \le 1$. (a)Evaluate $f \left( - \frac{1}{2} \right)$. (b) Find an equation of the tangent to the curve with equation $y=f(x)$ at the point where $x = \frac{1}{\sqrt{2}}$. I've done part (a) but struggling with part (b). May I have some help please. The equation of the tangent to a curve whose equation is f(x) is : Here, $a=\frac{1}{\sqrt{2}}$ $f(x)=\arcsin(x) \Longrightarrow f'(x)=\frac{1}{\sqrt{1-x^2}}$ --> tangent : $y=\frac{1}{\sqrt{1-(\frac{1}{\sqrt{2}})^2}} \cdot (x-\frac{1}{\sqrt{2}})+f(\frac{1}{\sqrt{2}})$ Another way of getting the gradient: Gradient of Function y = f'(x) is $\frac{dy}{dx}$ $y = arcsin(x)$ $x = sin(y)$ $\frac{dx}{dy} = cos(y)$ $\frac{dy}{dx} = sec(y)$ but y = arcsin(x) $\frac{dy}{dx} = sec(arcsin(x))$ at $x = \frac{1}{\sqrt2}$ $\frac{dy}{dx} = sec(\frac{\pi}{4})$ = $\sqrt{2}$ May 8th 2008, 06:04 AM #2 May 8th 2008, 06:08 AM #3 May 8th 2008, 06:10 AM #4 May 8th 2008, 06:20 AM #5 Super Member Jan 2008
{"url":"http://mathhelpforum.com/calculus/37626-trigonometry-function-derivative.html","timestamp":"2014-04-16T19:35:36Z","content_type":null,"content_length":"47560","record_id":"<urn:uuid:8f5ee989-ad24-4746-a47f-8d41e58c61fe>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
1. A Multiplexer Has Four Data Select Lines, A3-A0. ... | Chegg.com 1. A multiplexer has four data select lines, A3-A0. How many input signals are multiplexed? 2. What is the value of n for a generalized n-line demultiplexer? 3. What is VHDL assignment operation for Y4 in the 3-to-8 decoder circuit? A. Y<=Not (A and (NOT B) AND (NOT C)) B. Y<=NOT(A AND B AND (NOT C)) C. Y<= NOT ((NOT A) OR (NOT B) OR C) D. Y<=NOT ((NOT A AND (NOT B) AND C) Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/1-multiplexer-four-data-select-lines-a3-a0-many-input-signals-multiplexed-2-value-n-genera-q4054365","timestamp":"2014-04-19T11:05:09Z","content_type":null,"content_length":"21694","record_id":"<urn:uuid:e8bfe468-e7df-4b48-abc1-df24f2254974>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Stata 8 graph bugs? [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: Stata 8 graph bugs? From "Michael Blasnik" <michael.blasnik@verizon.net> To <statalist@hsphsun2.harvard.edu> Subject st: Stata 8 graph bugs? Date Tue, 11 Nov 2003 13:21:35 -0500 I've been using the Stata 8 graphics intensively for the past few weeks and am finding that I can create lots of really great graphics. However, I am also finding lots of little bugs and some big ones. I was wondering whether other users are experiencing the same things or whether I just can't figure out how to do things. I am using Stata 8.2 SE fully updated on Win XP Pro. 1) I can't get rid of boxes surrounding legends using either the lstyle(none), lwid(none), or lcol(none) suboptions within the region suboption of the legend option 2) I sometimes get grid lines ( from xlab(,grid)) that appear on top of the line connecting my first data series but under the line connecting the second data series on the same graph. It doesn't matter where I place the xlab option. 3) The gmin and gmax suboptions of xlabel and ylabel used with grid usually have no effect. Even in some cases where there is a lot of space between the maximum data label and the axis scale line the gmax option does not result in a grid line. I have had to use the xline( , lstyle(grid)) option to get the last grid line 4) xtitles seem to run into xlabels on most graphs I create, forcing me to almost always use the margin(t+1) suboption on the xtitle command 5) I get difficult to decipher error messages for some graphs within do files (sometimes class-related errors, sometimes unmatched braces errors that I can't find) . Typically, the error messages appear only the first time the do file is called and the graph appears to be drawn correctly. On subsequent calls of the same do file, the error messages usually disappear. In some do file where I loop through the same graph command for many subsets of the data, I see the error messages only for the first subset and then all of the other subsets run without problem. 6) I have experienced more Stata crashes in the past week (3 or 4) using the graph commands than I have experienced in the previous 10+ years of using Stata. In one case, the graphics color schemes from Stata apparently began affecting all other running applications and forced me to reboot (probably some MS problem is at the root of that). Are other users experiencing these problems? Is Stata aware of these problems? Working on solutions? Michael Blasnik * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2003-11/msg00285.html","timestamp":"2014-04-19T04:43:25Z","content_type":null,"content_length":"7460","record_id":"<urn:uuid:951a6605-c1e2-45dc-a606-0797e5c631a7>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
Most Important Mathematics in Advanced AI/Robotics - Artificial Intelligence What would constitute the very most important mathematics used in AI and Robotics? Not just game AI, all AI, I realize the obvious answer here is "everything" but I am looking for something more specific. Like, I can see probability being used in machine learning, linear algebra is a given. Anything else?
{"url":"http://www.gamedev.net/topic/637779-most-important-mathematics-in-advanced-airobotics/","timestamp":"2014-04-20T00:58:07Z","content_type":null,"content_length":"128334","record_id":"<urn:uuid:b2f17972-98dc-47a7-a49f-42f3ce37f26c>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
A Boggle-like Challenge - Page 2 - New Logic/Math PuzzlesA Boggle-like Challenge - Page 2 - New Logic/Math PuzzlesA Boggle-like Challenge - Page 2 - New Logic/Math Puzzles Welcome to BrainDen.com - Brain Teasers Forum Welcome to BrainDen.com - Brain Teasers Forum. Like most online communities you must register to post in our community, but don't worry this is a simple free process. To be a part of BrainDen Forums you may create a new account or sign in if you already have an account. As a member you could start new topics, reply to others, subscribe to topics/forums to get automatic updates, get your own profile and make new friends. Of course, you can also enjoy our collection of amazing optical illusions and cool math games. If you like our site, you may support us by simply clicking Google "+1" or Facebook "Like" buttons at the top. If you have a website, we would appreciate a little link to BrainDen. Thanks and enjoy the Den :-) 18 replies to this topic Posted 16 November 2012 - 11:59 AM Spoiler for Possible solution for the first part: Edited by brifri238, 16 November 2012 - 12:07 PM. Posted 16 November 2012 - 09:11 PM Curr3nt and brifri238 both found good adjacency hookups. I think phil1882 got into a bit of trouble trying to keep the dimension down to two or three. Also, curr3nt's solution makes for an easy second part solution. I had hoped that someone would go a bit further and hint at some algorithm for generating these things. I know of no way to generate a random one -- i.e. an algorithm which can choose one with probability 1/N where N is the total number of possible adjacencies. I don't even know how to find N. Thanks for working on it. Posted 17 November 2012 - 02:04 PM Spoiler for assuming curr3nts Posted 18 November 2012 - 07:41 AM Spoiler for assuming curr3nts N is not going to be that simple to calculate. Any given adjacency you find could have multiple possible cycles that could be found within it (So it may incorrectly be counted multiple times). Adjacencies don't need to be as neatly organized as curr3nt's answer was. Spoiler for Large range for N Spoiler for Anticlimactic way to generate with 1/N probability Posted 19 November 2012 - 07:58 PM Spoiler for Why wouldn't phil's approach for N work? Posted 19 November 2012 - 11:06 PM Spoiler for Why wouldn't phil's approach for N work? Spoiler for can you "cube" this? Posted 19 November 2012 - 11:39 PM Perhaps going down to a smaller alphabet will help shed light on this problem. I wrote a little program to generate all possible ways of making adjacencies on an alphabet of size 8 with each letter having 3 adjacencies. The number of these I got was 19,355. Posted 20 November 2012 - 01:47 AM Yay for integer sequences (found using your 19355 )... Spoiler for Not the answer, but some pretty numbers anyway Spoiler for Estimation of N based on list 3 + connected=cycle assumption Spoiler for Simple proof for connected=cycle in 2-regular graphs Time to look at 3-regular graphs... which aren't as simple and quite possibly don't have this property. Spoiler for Useful article... Posted 20 November 2012 - 04:21 PM Spoiler for can you "cube" this? Spoiler for You had me at... 0 user(s) are reading this topic 0 members, 0 guests, 0 anonymous users
{"url":"http://brainden.com/forum/index.php/topic/15434-a-boggle-like-challenge/page-2","timestamp":"2014-04-17T07:10:17Z","content_type":null,"content_length":"102400","record_id":"<urn:uuid:e6d6f010-cdd4-4061-9da3-261f856e65ce>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Portfolio Choices, Asset Prices, and Investment Advice This file is also available in Adobe Acrobat PDF format Chapter 1 1.1. The Subject of This Book THIS IS A BOOK about the effects of investors interacting in capital markets and the implications for those who advise individuals concerning savings and investment decisions. The subjects are often considered separately under titles such as portfolio choice and asset pricing. Portfolio choice refers to the ways in which investors do or should make decisions concerning savings and investments. Applications that are intended to describe what investors do are examples of positive economics. Far more common, however, are normative applications, designed to prescribe what investors should do. Asset pricing refers to the process by which the prices of financial assets are determined and the resulting relationships between expected returns and the risks associated with those returns in capital markets. Asset pricing theories or models are examples of positive or descriptive economics, since they attempt to describe relationships in the real world. In this book we take the view that these subjects cannot be adequately understood in isolation, for they are inextricably intertwined. As will be shown, asset prices are determined as part of the process through which investors make portfolio choices. Moreover, the appropriate portfolio choice for an individual depends crucially on available expected returns and risks associated with different investment strategies, and these depend on the manner in which asset prices are set. Our goal is to approach these issues more as one subject than as two. Accordingly, the book is intended for those who are interested in descriptions of the opportunities available in capital markets, those who make savings and investment decisions for themselves, and those who provide such services or advice to others. Academic researchers will find here a series of analyses of capital market conditions that go well beyond simple models that imply portfolio choices clearly inconsistent with observed behavior. A major focus throughout is on the effects on asset pricing when more realistic assumptions are made concerning investors’ situations and behavior. Investment advisors and investment managers will find a set of possible frameworks for making logical decisions, whether or not they believe that asset prices well reflect future prospects. It is crucial that investment professionals differentiate between investing and betting. We show that a well thought out model of asset pricing is an essential ingredient for sound investment practice. Without one, it is impossible to even know the extent and nature of bets incorporated in investment advice or management, let alone ensure that they are well founded. 1.2. Methods This book departs from much of the previous literature in the area in two important ways. First, the underlying view of the uncertain future is not based on the mean/variance approach advocated for portfolio choice by Markowitz (1952) and used as the basis for the original Capital Asset Pricing Model (CAPM) of Sharpe (1964), Lintner (1965), Mossin (1966), and Treynor (1999). Instead, we base our analyses on a straightforward version of the state/preference approach to uncertainty developed by Arrow (1953) extending the work of Arrow (1951) and Debreu (1951). Second, we rely extensively on the use of a program that simulates the process by which equilibrium can be reached in a capital market and provides extensive analysis of the resulting relationships between asset prices and future prospects. 1.2.1. The State/Preference Approach We utilize a state/preference approach with a discrete-time, discrete-outcome setting. Simply put, uncertainty is captured by assigning probabilities to alternative future scenarios or states of the world, each of which provides a different set of investment outcomes. This rules out explicit reliance on continuous-time formulations and continuous distributions (such as normal or log-normal), although one can use discrete approximations of such distributions. Discrete formulations make the mathematics much simpler. Many standard results in financial economics can be obtained almost trivially in such a setting. At least as important, discrete formulations can make the underlying economics of a situation more obvious. At the end of the day, the goal of the (social) science of financial economics is to describe the results obtained when individuals interact with one another. The goal of financial economics as a prescriptive tool is to help individuals make better decisions. In each case, the better we understand the economics of an analysis, the better equipped we are to evaluate its usefulness. The term state/preference indicates both that discrete states and times are involved, and that individuals’ preferences for consumption play a key role. Also included are other aspects, such as securities representing production outputs. 1.2.2. Simulation Simulation makes it possible to substitute computation for derivation. Instead of formulating complex algebraic models, then manipulating the resulting equations to obtain a closed-form solution equation, one can build a computer model of a marketplace populated by individuals, have them trade with one another until they do not wish to trade any more, then examine the characteristics of the resulting portfolios and asset prices. Simulations of this type have both advantages and disadvantages. They can be relatively easy to understand. They can also reflect more complex situations than must often be assumed if algebraic models are to be used. On the other hand, the relationship between the inputs and the outputs may be difficult to fully comprehend. Worse yet, it is hard if not impossible to prove a relationship via simulation, although it is possible to disprove one. Consider, for example, an assertion that when people have preferences of type A and securities of type B are available, equilibrium asset prices have characteristics of type C; that is, A + B ⇒ C. One can run a simulation with some people of type A and securities of type B and observe that the equilibrium asset prices are of type C. But this does not prove that such will always be the case. One can repeat the experiment with different people and securities, but always with people of type A and securities of type B. If in one or more cases the equilibrium is not of type C, the proposition (A + B ⇒ C) is disproven. But even if every simulation conforms with the proposition, it is not proven. The best that can be said is that if many simulations give the same result, one’s confidence in the truth of the proposition is increased. Simulation is thus at best a brute force way to derive propositions that may hold most or all of the time. But equilibrium simulation can be a powerful device. It can produce examples of considerable complexity and help people think deeply about the determinants of asset prices and portfolio choice. It can also be a powerful ally in bringing asset pricing analysis to more people. 1.2.3. The APSIM Program The simulation program used for all the examples in this book is called APSIM, which stands for Asset Pricing and Portfolio Choice Simulator. It is available without charge at the author’s Web site: www.wsharpe.com, along with workbooks for each of the cases covered. The program, associated workbooks, instructions, and source code can all be downloaded. Although the author has made every attempt to create a fast and reliable simulation program, no warranty can be given that the program is without error. Although reading C++ programming code for a complex program is not recommended for most readers, the APSIM source code does provide documentation for the results described here. In a simulation context, this can serve a function similar to that of formal proofs of results obtained with traditional algebraic models. 1.3. Pedagogy If you were to attend an MBA finance class at a modern university you would learn about subjects such as portfolio optimization, asset allocation analysis, the Capital Asset Pricing Model, risk-adjusted performance analysis, alpha and beta values, Sharpe Ratios, and index funds. All this material was built from Harry Markowitz’s view that an investor should focus on the expected return and risk of his or her overall portfolio and from the original Capital Asset Pricing Model that assumed that investors followed Markowitz’s advice. Such mean/variance analysis provides the foundation for many of the quantitative methods used by those who manage investment portfolios or assist individuals with savings and investment decisions. If you were to attend a Ph.D. finance class at the same university you would learn about no-arbitrage pricing, state claim prices, complete markets, spanning, asset pricing kernels, stochastic discount factors, and risk-neutral probabilities. All these subjects build on the view developed by Kenneth Arrow that an investor should consider alternative outcomes and the amount of consumption obtained in each possible situation. Techniques based on this type of analysis are used frequently by financial engineers, but far less often by investment managers and financial advisors. Much of the author’s published work is in the first category, starting with “Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk” (1964). The monograph Portfolio Theory and Capital Markets (1970) followed resolutely in the mean/variance tradition, although it did cover a few ideas from state/preference theory in one chapter. The textbook Investments (Sharpe 1978) was predominantly in the mean/variance tradition, although it did use some aspects of a state/preference approach when discussing option valuation. The most recent edition (Sharpe, Alexander, and Bailey 1999) has evolved significantly, but still rests on a mean/variance foundation. This is not an entirely happy state of affairs. There are strong arguments for viewing mean/variance analysis as a special case of a more general asset pricing theory (albeit a special case with many practical advantages). This suggests that it could be preferable to teach MBA students, investment managers, and financial advisors both general asset pricing and the special case of mean/ variance analysis. A major goal of this book is to show how this might be accomplished. It is thus addressed in part to those who could undertake such a task (teachers, broadly construed). It is also addressed to those who would like to understand more of the material now taught in the Ph.D. classroom but who lack some of the background to do so easily (students, broadly construed). 1.4. Peeling the Onion Capital markets are complex. We deal with stylized versions that lack many important features such as taxes, transactions costs, and so on. This is equivalent to introducing some of the principles of physics by assuming away the influences of friction. The justification is that one cannot hope to understand real capital markets without considering their behavior in simpler settings. While our simulated capital markets are far simpler than real ones, their features are not simple to fully understand. To deal with this we introduce material in a sequential manner, starting with key aspects of a very simple case, while glossing over many important ingredients. Then we slowly peel back layers of the onion, revealing more of the inner workings and moving to more complex cases. This approach can lead to a certain amount of frustration on the part of both author and reader. But in due course, most mysteries are resolved, seemingly unrelated paths converge, and the patient reader is rewarded. 1.5. References The material in this book builds on the work of many authors. Although some key works are referenced, most are not because of the enormity of the task. Fortunately, there is an excellent source for those interested in the history of the ideas that form the basis for much of this book: Mark Rubinstein’s A History of the Theory of Investments: My Annotated Bibliography (Rubinstein 2006), which is highly recommended for anyone seriously interested in investment theory. 1.6. Chapters A brief description of the contents of the remaining chapters follows. 1.6.1. Chapter 2: Equilibrium Chapter 2 presents the fundamental ideas of asset pricing in a one-period (twodate) equilibrium setting in which investors agree on the probabilities of alternative future states of the world. The major focus is on the advice often given by financial economists to their friends and relatives: avoid non-market risk and take on a desired amount of market risk to obtain higher expected return. We show that under the conditions in the chapter, this is consistent with equilibrium portfolio choice. 1.6.2. Chapter 3: Preferences Chapter 3 deals with investors’ preferences. We cover alternative ways in which an individual may determine the amount of a security to be purchased or sold, given its price. A key ingredient is the concept of marginal utility. There are direct relationships between investors’ marginal utilities and their portfolio choices. We cover cases that are consistent with some traditional financial planning advice, others that are consistent with mean/variance analysis, and yet others that are consistent with some features of the experimental results obtained by cognitive psychologists. 1.6.3. Chapter 4: Prices Chapter 4 analyzes the characteristics of equilibrium in a world in which investors agree on the probabilities of future states of the world, do not have sources of consumption outside the financial markets, and do not favor a given amount of consumption in one future state of the world over the same amount in another future state. The chapter also introduces the concept of a complete market, in which investors can trade atomistic securities termed state claims. Some of the key results of modern asset pricing theory are discussed, along with their preconditions and limitations. Implications for investors’ portfolio choices are also explored. We show that in this setting the standard counsel that an investor should avoid non-market risk and take on an appropriate amount of market risk to obtain higher expected return is likely to be good advice as long as available securities offer sufficient diversity. 1.6.4. Chapter 5: Positions Chapter 5 explores the characteristics of equilibrium and optimal portfolio choice when investors have diverse economic positions outside the financial markets or differ in their preferences for consumption in different possible states of the world. As in earlier chapters, we assume investors agree on the probabilities of alternative future outcomes. 1.6.5. Chapter 6: Predictions Chapter 6 confronts situations in which people disagree about the likelihood of different future outcomes. Active and passive approaches to investment management are discussed. The arguments for index funds are reviewed, along with one of the earliest published examples of a case in which the average opinion of a number of people provided a better estimate of a future outcome than the opinion of all but a few. We also explore the impact of differential information across investors and the effects of both biased and unbiased predictions. 1.6.6. Chapter 7: Protection Chapter 7 begins with a discussion of the type of investment product that offers “downside protection” and “upside potential.” Such a “protected investment product” is a derivative security because its return is based on the performance of a specified underlying asset or index. We show that a protected investment product based on a broad market index can play a useful role in a market in which some or all investors’ preferences have some of the characteristics found in behavioral studies. We also discuss the role that can be played in such a setting by other derivative securities such as put and call options. To illustrate division of investment returns we introduce a simple trust fund that issues securities with different payoff patterns. Finally, we discuss the results from an experiment designed to elicit information about the marginal utilities of real people. 1.6.7. Chapter 8: Advice The final chapter is based on the premise that most individual investors are best served through a division of labor, with investors assisted by investment professionals serving as advisors or portfolio managers. We review the demographic factors leading to an increased need for individuals to make savings and investment decisions and suggest the implications of the principle of comparative advantage for making such decisions efficiently. We then discuss the importance of understanding the differences between investing and betting and the need for investment advisors to have a logically consistent approach that takes into account the characteristics of equilibrium in financial markets. The chapter and the book conclude with a discussion of the key attributes of sound personal investment advice and an admonition that advisors and managers who make portfolio choices should have a clear view of the determination of asset prices. Return to Book Description File created: 8/7/2007 Questions and comments to: webmaster@pupress.princeton.edu Princeton University Press
{"url":"http://press.princeton.edu/chapters/s8272.html","timestamp":"2014-04-18T01:23:10Z","content_type":null,"content_length":"27165","record_id":"<urn:uuid:976c0854-8584-4ddb-ad36-8f4999c117e7>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
El Sobrante Algebra Tutor Find an El Sobrante Algebra Tutor ...I worked as a math tutor for a year between high school and college and continued to tutor math and physics throughout my undergraduate career. I specialize in tutoring high school mathematics, such as geometry, algebra, precalculus, and calculus, as well as AP physics. In addition, I have sign... 25 Subjects: including algebra 1, algebra 2, physics, statistics ...Palo Alto, CA Andreas was an excellent tutor for our son in Calculus this year at Stanford. He would never have done as well as he did without the talents and effort of Andreas. If you want the best, Andreas is your man! 41 Subjects: including algebra 1, algebra 2, calculus, geometry ...I teach workshops on essay writing for the ACT and given critical feedback on how to improve their essays based on past ACT essay prompts. I demonstrate to students how to break down many of the reading passages, science sections, and math problems to help eliminate wrong answer choices and use ... 34 Subjects: including algebra 1, algebra 2, reading, English ...I have taught 6th grade earth science and 8th grade physical science. I have tutored algebra 2, geometry, and Spanish as well as various sciences. I also have experience in the "Lindamood-Bell" literacy, comprehension, and math techniques. 24 Subjects: including algebra 2, algebra 1, chemistry, Spanish ...I'm currently a graduate student at CSUEB studying Mathematics. I plan on teaching at the community college level after I get my degree. I have been tutoring for 3 years at the community college level. 8 Subjects: including algebra 1, algebra 2, precalculus, reading
{"url":"http://www.purplemath.com/El_Sobrante_Algebra_tutors.php","timestamp":"2014-04-18T22:08:10Z","content_type":null,"content_length":"23659","record_id":"<urn:uuid:7d4e90d9-5210-4257-9a37-34aebf275f3f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
N - Puzzle The N-Puzzle is known in finite versions such as the 8-puzzle (a 3x3 board) and the 15-puzzle (a 4x4 board), and with various names like "sliding block", "tile puzzle", etc. Description: The N-Puzzle is a board game for a single player. It consists of (N^2- 1) numbered squared tiles in random order, and one blank space ("a missing tile"). The object of the puzzle is to rearrange the tiles in order by making sliding moves that use the empty space, using the fewest moves. Moves of the puzzle are made by sliding an adjacent tile into the empty space. Only tiles that are horizontally or vertically adjacent to the blank space (not diagonally adjacent) may be moved. Solvability: Half of the initial states for the N-Puzzle are impossible to resolve. This can be proved by an invariant of the board: The parity of permutations of all squares + the parity of the Manhattan distance of all squares. This is an invariant because each legal move changes both the parity of the permutation and the parity of the Manhattan distance. This invariant split the space of all possible states into two equivalence classes of solvable and unsolvable states. An O(N^2) algorithm to decide when an initial state of the N-puzzle is solvable can be found here: http:// cseweb.ucsd.edu/~ccalabro/essays/15_puzzle.pdf Algorithmic Problem: Input: N*N board with (N^2 – 1) numbered tiles and one blank space – representing an initial state. Output: a shortest sequence of moves that by following those moves all tiles will be in their goal positions. Algorithmic Solution: It has been proved that finding the solution of the general N-Puzzle is NP-complete (Ratner & Warmuth, 1986 ). The best known algorithm for solving the finite 8-puzzle version of the problem optimally is A*. Commonly, the heuristic employed in an A* search is the Manhattan Distance. Heuristics for the N-Puzzle: Samples for 8-puzzle start boards can be found here An open source Java Applet for solving the 8/15-Puzzle with A*
{"url":"http://heuristicswiki.wikispaces.com/N+-+Puzzle?responseToken=01e5b664f644c5bf4578e805c5e50b69e","timestamp":"2014-04-25T00:20:31Z","content_type":null,"content_length":"50368","record_id":"<urn:uuid:a4c5ce28-2985-489d-9b96-c0c0d99dcb7f>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
Enhancement of Download Multi-User Multiple-Input Multiple-Output Wireless Communications Patent application title: Enhancement of Download Multi-User Multiple-Input Multiple-Output Wireless Communications Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A method implemented in a user equipment configured to be used in a multi-user (MU) multiple-input multiple-output (MIMO) wireless communications system is disclosed. In an aspect, the user equipment transmits to a base station a first channel state information (CSI) report determined according to a single-user (SU) MIMO rule and a second CSI report based on a residual error. A method implemented in a user equipment configured to be used in a multi-user (MU) multiple-input multiple-output (MIMO) wireless communications system, comprising: transmitting to a base station a first channel state information (CSI) report determined according to a single-user (SU) MIMO rule; and transmitting to the base station a second CSI report based on a residual error. The method of claim 1, wherein the first CSI report is transmitted in a first interval configured for SU CSI reporting and the second CSI report is transmitted in a second interval configured for the second CSI report. The method of claim 1, wherein the first CSI report and the second CSI report are transmitted in a common interval configured for CSI reporting. The method of claim 1, wherein the first CSI report is transmitted in a first interval configured for SU CSI reporting and the first CSI report and the second CSI report are transmitted in a second interval configured for combined CSI reporting. The method of claim 1, further comprising: receiving at least one of sequence configuration information and sub-sequence configuration information from the base station, wherein the sequence configuration information comprises at least one of first periodicity and first offset for the first CSI report, and wherein the sub-sequence configuration information comprises at least one of second periodicity and second offset for the second CSI report. The method of claim 1, wherein the first CSI report includes at least one of a preferred rank, a precoder of the preferred rank, and a corresponding quantized SINR (signal to interference plus noise The method of claim 1, further comprising: assuming a post scheduling model. The method of claim 7, wherein the post scheduling model can be expressed as y ={circumflex over (D)}.sub. sup.1/2{circumflex over (V)}.sub. +{circumflex over (D)}.sub. sup.1/2({circumflex over (V)}.sub. s- .sub. 1+η.sub. where y represents an N×1 received signal vector on a representative resource element in a resource block (RB), N being the number of receive antennas at the user equipment, {circumflex over (D)}.sub. sup.1/2 is a diagonal matrix of effective channel gains, {circumflex over (V)} denotes a semi-unitary matrix whose columns represent preferred channel directions and {circumflex over (V)}.sub. sup.† represents Hermitian of {circumflex over (V)} , U and U.sub. 1 represent precoding matrices used by the base station to transmit data to the user equipment and a co-scheduled user equipment, respectively, s and s.sub. 1 represent transmit symbol vectors intended for the user equipment and the co-scheduled user equipment, respectively, Q is a semi-unitary matrix whose columns lie in the orthogonal complement of {circumflex over (V)} , and R is a matrix which satisfies the Frobenius-norm constraint ∥R.sub. sup.2, ε being the residual error, and η , represents an additive noise vector. The method of claim 8, wherein Q.sub. sup.†{circumflex over (V)} 0. 10. The method of claim 1, further comprising: receiving residual error form configuration information from the base station. The method of claim 1, wherein the second CSI report comprises an enhancement to the first CSI report. The method of claim 1, wherein the second CSI report includes an indication of residual error norm. The method of claim 12, wherein the residual error norm can be expressed as ε = {square root over (tr(F sup.†{tilde over (D)}.sub. sup.-1))} where tr(.) denotes a trace operation, F sup.† denotes a filtered user channel, and P =(I-{circumflex over (V)} {circumflex over (V)}.sub. sup.†) is a projection matrix, {circumflex over (V)} being a precoding matrix of rank r , and {tilde over (D)} =diag{SI{circumflex over (N)}R.sub. sup.1, . . . , SI{circumflex over (N)}R.sub. }, {SI{circumflex over (N)}R.sub. being r quantized signal to interference plus noise ratios (SINRs). The method of claim 1, wherein the second CSI report includes an indication or an approximation of at least one of a residual error matrix and a residual error correlation matrix. The method of claim 1, wherein the second CSI report includes an indication or a quantized value of a dominant diagonal value of R along with a corresponding column in Q where Q is a semi-unitary matrix whose columns lie in the orthogonal complement of {circumflex over (V)} , {circumflex over (V)} being a semi-unitary matrix whose columns represent preferred channel directions, and R is a matrix which satisfies the Frobenius-norm constraint ∥R.sub. sup.2, ε being the residual error norm. The method of claim 1, wherein the second CSI report includes an indication or a quantized value of at least one of a diagonal value of C and a trace of C , where C sup.†{til- de over (D)}.sub. sup.-1, F sup.† denotes a filtered user channel, and P =(I-{circumflex over (V)} {circumflex over (V)}.sub. sup.†) is a projection matrix, {circumflex over (V)} being a precoding matrix of rank r , and {tilde over (D)} =diag{SI{circumflex over (N)}R.sub. sup.1, . . . , SI{circumflex over (N)}R.sub. }, {SI{circumflex over (N)}R.sub. being r quantized signal to interference plus noise ratios (SINRs). A method implemented in a base station configured to be used in a multi-user (MU) multiple-input multiple-output (MIMO) wireless communications system, comprising: receiving from a user equipment a first channel state information (CSI) report determined according to a single-user (SU) MIMO rule; and receiving from the user equipment a second CSI report based on a residual error. A multi-user (MU) multiple-input multiple-output (MIMO) wireless communications system, comprising: a base station; and a user equipment, wherein the user equipment transmits to the base station a first channel state information (CSI) report determined according to a single-user (SU) MIMO rule, and a second CSI report based on a residual error. This application claims the benefit of U.S. Provisional Application No. 61/480,690, entitled, "Enhancements to DL MU-MIMO," filed Apr. 29, 2011, U.S. Provisional Application No. 61/543,591, entitled, "Enhancements to DL MU-MEMO," filed Oct. 5, 2011, and U.S. Provisional Application No. 61/556,560, entitled, "DL MU-MIMO Enhancement via Residual Error Norm Feedback," filed Nov. 7, 2011, of which the contents of all are incorporated herein by reference. BACKGROUND OF THE INVENTION [0002] The present invention relates to wireless communications system and more particularly to multi-user (MU) multiple-input multiple-output (MIMO) wireless communications system. The present invention considers the problem of designing efficient channel state information (CSI) feedback schemes in order to allow improved multi-user multi-input multi-output resource allocation at a base-station (BS), resulting in increased system spectral efficiency. A cell in which multiple users feedback CSI and the BS performs MU-MIMO resource allocation is depicted in FIG. 1. Referring to FIG. 1, user terminals 110, e.g. users 1 (111) to K (119), send quantized channel feedbacks 120 to base station 130. At base station 130, DL (downlink) MU-MEMO resource allocation 140 is performed according to quantized channel feedbacks 120 and streams, e.g. user 1 stream 151 to user K stream 159, are subjected to RB (resource block) and/or MCS (modulation and coding scheme) allocation and transmit precoding 160. Signals are transmitted via n antennas from base station 130 and received by n antennas, for example, at user 1 (111). Note that the quality of resource allocation done by the BS depends on the accuracy of each user's CSI report. On the other hand, allowing a very accurate CSI feedback can result in a large signaling overhead. The key challenges that need to be overcome before spectral efficiency gains from MU-MIMO can realized are, for example, as follows: Improving CSI accuracy without a large signaling overhead, or Exploiting the enhanced CSI reports at the BS in an efficient manner. In order to solve the above problem, others have proposed various solutions, such as increasing CSI feedback overhead; CSI feedback under assumptions on BS scheduling; and complex algorithms for joint scheduling. CQI (Channel Quality Indicator)/PMI (Precoding Matrix Indicator) reporting enhancements targeting DL MU-MIMO operations on PUSCH 3-1 as well as PUSCH 3-2 were considered by several companies [1]. The proposed enhancement to PUSCH 3-2 comprised enabling sub-band PMI reporting in addition to the sub-band CQI reporting. On the other hand, enhancements to PUSCH 3-1 that were considered suggested that in addition to 3rd Generation Partnership Project (3GPP) Release (Rel-) 8 Mode 3-1 feedback, a user equipment (UE) can be configured via higher layer signalling to report as follows: A wideband PMI calculated assuming restricted rank equal to one, along with a per subband CQI targeting MU-MIMO operation. The MU-MIMO CQI is computed assuming the interfering PMIs are orthogonal to the single-user (SU) MIMO rank 1 PMI and for 4 TX, the total number of co-scheduled layers is assumed to be 4 at the time of MU CQI computation [1]. We propose a broad framework for enhanced CSI reporting by the users in order to obtain an improvement in MU-MIMO performance. We also illustrate mechanisms using which the eNodeB (eNB) can exploit such enhanced CSI feedback. System level simulations show that a simple form of enhanced feedback results in substantial system throughput improvements in homogenous networks and more modest improvements over heterogeneous networks. [1] Alcatel-Lucent, Alcatel-Lucent Shanghai Bell, AT&T, ETRI, Icera Inc., LG Electronics, Marvell, NEC, New Postcom, Pantech, Qualcomm, RIM, Samsung, Texas Instruments,"Way Forward on CQI/PMI reporting enhancement on PUSCH 3-1 for 2, 4 and 8 TX," 3GPP TSG RAN WG1 R1-105801 62bis, Xian, China, October 2010. BRIEF SUMMARY OF THE INVENTION [0014] An objective of the present invention is to achieve a high spectral efficiency, for example, even around a cell edge in an MU-MIMO wireless communications system. An aspect of the present invention includes a method implemented in a user equipment configured to be used in a multi-user (MU) multiple-input multiple-output (MIMO) wireless communications system, comprising: transmitting to a base station a first channel state information (CSI) report determined according to a single-user (SU) MIMO rule; and transmitting to the base station a second CSI report based on a residual error. Another aspect of the present invention includes a method implemented in a base station configured to be used in a multi-user (MU) multiple-input multiple-output (MIMO) wireless communications system, comprising: receiving from a user equipment a first channel state information (CSI) report determined according to a single-user (SU) MIMO rule; and receiving from the user equipment a second CSI report based on a residual error. Still another aspect of the present invention includes a multi-user (MU) multiple-input multiple-output (MEMO) wireless communications system, comprising: a base station; and a user equipment, wherein the user equipment transmits to the base station a first channel state information (CSI) report determined according to a single-user (SU) MEMO rule, and a second CSI report based on a residual error. BRIEF DESCRIPTION OF THE DRAWINGS [0018] FIG. 1 depicts an illustrative diagram for CSI feedback. FIG. 2 depicts an illustrative diagram for multiplexing SU-CSI and enhanced feedback. FIG. 3 depicts an illustrative diagram for combining SU-CSI and enhanced feedback. FIG. 4 depicts an illustrative diagram for multiplexing SU-CSI and combined CSI feedback. DETAILED DESCRIPTION [0022] We consider a downlink comprising K users and multiple orthogonal RBs that are available in each scheduling interval. We first model the actual received signal vector that the user will see on a representative resource element in an RB, if it is scheduled on that RB, as *U.sub. 1s.sub. 1+η where y[1] represents the N×1 received signal vector on an RB (N being the number of receive antennas) and H represents the M×N channel matrix (M being the number of transmit antennas) with H * denoting its Hermitian. U and U.sub. 1 represent the transmit precoding matrices used by the BS to transmit data to user-1 and the other co-scheduled users (or user equipments), respectively, and s and s.sub. 1 represent the transmit symbol vectors intended for user-1 and the other co-scheduled users, respectively. Finally η represents the additive noise vector. Note that under MU-MIMO transmission on that RB U.sub. 1 will be a non-zero matrix whereas under SU-MIMO transmission on that RB U.sub. 1 will be a zero matrix. The model in equation (1) is the model in the aftermath of scheduling. The scheduling which involves RB, MCS and transmit precoder allocation by the BS is done by the BS scheduler whose input is the quantized CSI (referred to henceforth as just CSI) fed back by the users. The conventional procedure employed by the users to report CSI is to compute a rank indicator (RI), precoding matrix indicator (PM I), which together determine a precoder from a quantization codebook, along with up-to 2 channel quality indicators or indices (CQI(s)). Note that the columns of the selected precoder represent a set of preferred channel directions and the CQI(s) represent quantized SINRs (signal to interference plus noise ratios). Further, for a rank R precoder, R SINRs (one for each column) can be recovered from the up-to 2 CQI(s). More importantly, this CSI is computed by the user using SU-MIMO rules, i.e., after assuming that it alone will be scheduled on an RB. Such CSI is referred to here as SU-CSI. Clearly, if the BS wants to do MU-MIMO transmissions on an RB then it may, modify the SU-CSI reported by the users in order to do proper MCS assignment and RB allocation. However, even after such modifications MU-MIMO performance is degraded due to a large mismatch between UE reported SU-CSI and the actual channel conditions that UE will see on an RB with MU-MIMO transmissions. In order to address this problem we propose enhanced CSI feedback along with a finer model that can exploit the enhanced CSI feedback report and can be used for better MU-MIMO resource allocation at the BS. The finer model, a post scheduling model, can be given by, but not restricted to, ={circumflex over (D)} /2{circumflex over (V)} +{circumflex over (D)} /2({circumflex over (V)} .sup.†)U.sub. 1s.sub. 1+η {circumflex over (D)} /2 is a diagonal matrix of effective channel gains, {circumflex over (V)} denotes a semi-unitary matrix whose columns represent preferred channel directions, Q is a semi-unitary matrix whose columns lie in the orthogonal complement of {circumflex over (V)}hd 1, i.e. Q .sup.†{circumflex over (V)} =0, and R is a matrix which satisfies the Frobenius-norm constraint it ∥R for some ε MU-CQI reporting: The UE is configured to also report additional CQI computed using MU-MIMO rules and possibly an additional PMI. To compute MU-CQI corresponding to a precoder G , the UE assumes a post-scheduling model as in equation (2) in which {circumflex over (D)} /2, {circumflex over (V)} are equal to the diagonal matrix of the dominant unquantized singular values and the dominant unquantized right singular vectors, respectively, of its downlink channel matrix. It sets U and assumes that the columns of U.sub. 1 are isotropically distributed in the subspace defined by I-G .sup.† (orthogonal complement of G ). In addition it assumes Q =0 which is reasonable in this case since {circumflex over (V)} is taken to contain all the unquantized dominant singular vectors so no significant interference can be received from signals in its orthogonal complement. Then, to compute MU-SINRs the UE can be configured to assume a particular number of columns in U.sub. 1 and either an equal power per scheduled stream or a non-uniform power allocation in which a certain fraction of energy per resource element energy per resource element (EPRE) is shared equally among columns of U 1 with another fraction (possibly the remaining fraction) being shared equally among columns in U.sub. 1. Enhanced CSI reporting (SU-MIMO CSI and residual error): The UE can be configured for enhanced CSI reporting. Suppose that using SU-MIMO rules the UE determined a precoder G of a preferred rank r and the corresponding quantized SINRs {SI{circumflex over (N)}R . In order to determine the residual error, the UE assumes a post-scheduling model as in equation (2) in which ^ 1 = r 1 ρ 1 diag { SI N ^ R 1 1 , , SI N ^ R 1 r 1 } ##EQU00001## {circumflex over (V)} . Then let P .sup.† denote the projection matrix whose range is the orthogonal complement of G . Let us refer to the matrix E as the (normalized) residual error matrix and the matrix C as the residual error correlation matrix and note that C ={circumflex over (D)} F.- sub.1.sup.†{circumflex over (D)} /2. The UE can be configured to report some approximation of either the residual error matrix or the residual error correlation matrix. These include: Quantizing and reporting the dominant diagonal values of R1 along with the corresponding columns in Q Quantizing and reporting the diagonal values of C [1] [0031] Quantizing and reporting the trace of C , ε .s- up.∥H .sup.∥{circumflex over (D)} ) which can be thought of as the normalized total residual error. The BS can configure the user to report a particular enhanced feedback form. A simple example of the enhanced feedback form is the residual error norm, = {square root over (tr(F .sup.†{circumf- lex over (D)} ))} (3) where tr (.) denotes the trace operation, F .sup.† denotes the filtered user channel, and P =(I-{circumflex over (V)} {circumflex over (V)} .sup.†) is a projection matrix. PMI {circumflex over (V)} of some rank r and r quantized SINRs {SI{circumflex over (N)}R are determined using SU-MIMO rules {tilde over (D)} =diag{SI{circumflex over (N)}R , . . . , SI{circumflex over (N)} }. Various other forms for the enhanced feedback and various other norms for the residual error can apply to the enhanced feedback. We list several flow diagrams that describe aspects of the invention. In each figure, the flow diagram describes the operations that are conducted at a user-terminal. The operations are enabled by signaling from the eNB (or base-station) certain parameters on a downlink (feed-forward) control channel that are then received as inputs by the user. The feed-back is sent by the user on an uplink (feed-back) control channel and is received by the eNB. The parameters signaled by the base-station to a user may be interpreted by that user in a particular way that is described in detail in the further system details. Moreover, wherever applicable, the feedback sent by the user may allow the eNB to unambiguously determine the portion of the feedback determined by the user as SU-CSI and the portion determined as per the enhanced feedback form. In each channel state information (CSI) reporting interval the user reports its CSI. The BS (or eNB) can configure a user for periodic CSI reporting and fix the periodicity and offset which together determine the exact sequence of intervals for which the user may report its CSI. This sequence will be henceforth referred to as the sequence for CSI reporting. The user equipment can transmit to the base station an SU-CSI feedback and an enhanced CSI feedback, which are received by the base station. The transmission and the reception can be performed in a various ways as follows: 1. Multiplexing SU-CSI and Enhanced Feedback In order to obtain the benefits of accurate MU-MIMO resource allocation without excessive feedback overhead, the eNB can multiplex intervals in which the user reports enhanced feedback with the ones in which it reports its SU-CSI feedback without enhanced feedback. The periodicity and offset of the sub-sequence formed by intervals designated for enhanced feedback within the sequence for CSI reporting can be configured by the eNB, based on factors such as user mobility. As shown in FIG. 2, at step 201, a UE receives residual error form configuration from a BS and receives also sequence and sub-sequence configuration information. Next, at step 202, the UE determines SU-CSI in each interval configured for SU-CSI report or determines enhanced CSI in each interval configured for enhanced CSI report. Then, at step 203, the UE feeds back the CSI to the BS. Several ways of further reducing enhanced CSI feedback are described in the further system details. These include, for instance, letting the precoder used for computing the enhanced CSI be a function of previously reported precoder(s) contained in SU-CSI reports and/or reporting one or more components in the enhanced CSI feedback in a wideband fashion and/or reporting one or more components in the enhanced CSI feedback in a differential fashion. 2. Combining SU-CSI and Enhanced Feedback In the second class of feedback schemes, the user combines SU-MIMO CSI report and enhanced CSI report and feeds them back in each interval. As shown in FIG. 3, at step 301, a UE receives residual error form configuration from a BS and receives also sequence and sub-sequence configuration information. Next, at step 302, the UE determines in each interval configured for CSI report SU-CSI and enhanced CSI. Then, at step 303, the UE feeds back combined CSI to the BS. Methods of further reducing enhanced CIT feedback overhead are described in the further system details. These include, for instance, letting the precoder used for computing the enhanced CSI be a function of the precoder computed for SU-CSI report and/or reporting one or more components in the enhanced CSI feedback in a wideband fashion and/or reporting one or more components in the enhanced CSI feedback in a differential fashion. 3. Multiplexing SU-CSI and Combined CSI Feedback FIG. 4 shows another method of CSI reporting. At step 401, a UE receives residual error form configuration from a BS and receives also sequence and sub-sequence configuration information. Next, at step 402, the UE determines SU-CSI in each interval configured for SU-CSI report or determines combined CSI for combined CSI reporting. Then, at step 403, the UE feeds back CSI to the BS. In FIGS. 2, 3, and 4, the sequence information includes, for example, periodicity and offset for the SU CSI reporting and the sub-sequence configuration information includes, for example, periodicity and offset for the enhanced CSI reporting. For example, the enhanced CSI report includes any indication, such as a quantized value, of the residual error matrix or the residual error correlation FIGS. 2, 3, and 4 may apply to MU-CQI reporting as well. In conclusion, we considered enhancements to the MU-MIMO operation by enhancing the user CSI reporting which enables more accurate MU-MIMO SINR computation at the eNB and by a finer modeling of the received output seen by a user in the aftermath of scheduling. Our results using a simple form of enhanced feedback show substantial system throughput improvements in homogenous networks and improvements also in heterogeneous networks. One important feature of the gains obtained is that they are quite robust in the sense that they are not dependent on an effective outer loop link adaptation (OLLA) implementation. The foregoing is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that those skilled in the art may implement various modifications without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention. Further System Details A 1 Enhanced MU-MIMO Operation The key hurdle that needs to he overcome in order to realize optimal MU-MIMO gains is the difficulty in modeling the received channel output seen by a user post-scheduling. The user has an un-quantized estimate of its downlink channel but does not know the transmit precoder that will be employed by the base-station. On the other hand, the base station is free to select any transmit precoder but has to rely on the quantized CSI reported by the active users. We first consider a simple (baseline) approach for modeling the received output seen by a user of interest (say user-1) post, scheduling. Such an approach is quite popular in MU-MIMO studies. Here, essentially the received output seen by user-1 post-scheduling is modeled as ={circumflex over (D)} /2{circumflex over (V)} +{circumflex over (D)} /2{circumflex over (V)} .sup.†U.sub. 1s.sub. 1+η , (A1) ˜CN(0, I) is the additive noise. U contains columns of the transmit precoder along which symbols to user-1 are sent whereas U.sub. 1 contains all the remaining columns used for the co-scheduled streams. {circumflex over (D)} /2 is a diagonal matrix of effective channel gains and {circumflex over (V)} is a semi-unitary matrix whose columns represent the preferred channel directions. Under SU-MIMO CSI reporting rules, the UE assumes a post-scheduling model as in (A1) where the matrix U =0 and {circumflex over (D)} /2, {circumflex over (V)} are equal to the diagonal matrix of the un-quantized dominant singular values and the unquantized dominant right, singular vectors, respectively, of its downlink channel matrix H .sup.†. In other words, the UE assumes that there will be no other users co-scheduled with it on its allocated resource blocks. The UE then determines a precoder G of a preferred rank r and reports the corresponding quantized SINRs {SI{circumflex over (N)}R as CQIs. The understanding is that if the base station selects a transmit precoder such that U.sub. 1=0 and 1 = ρ 1 r 1 G ^ 1 , ##EQU00002## is the EPRE configured for the UE-1, then the effective SINR seen by the UE (after filtering using a filter F to remove interference Note that when r ≧2 the SINRs are combined into two CQIs. among columns of U ) for the column of U will be SI{circumflex over (N)}R On the other hand, at the base station end we construct a model as in (A1) using the CQI(s) and PMI reported by user 1. The CQI(s) are first mapped back to {SI{circumflex over (N)}R . Then we set {circumflex over (V)} , and the matrix {circumflex over (D)} to be 1 ρ 1 diag { SI N ^ R 1 1 , , SI N ^ R 1 r 1 } . ##EQU00003## Letting A=[U , U.sub. 1] denote the transmit precoding matrix, with rank (U , the base-station can obtain the following approximation for the SINRs seen by user-1 post-scheduling. si n ^ r 1 i = α ^ 1 i 1 - α ^ 1 i , α ^ 1 i = [ ( I + A † S ^ 1 A ) - 1 A † S ^ 1 A ] i , i , 1 ≦ i ≦ r 1 ' , ( A2 ) ##EQU00004## where S[1] {circumflex over (D)} .sup.†. Since this SINR approximation is obtained by ignoring the component of the user channel that lies in the orthogonal complement of G , it is an over-estimation and can in-fact degrade system performance without appropriate compensation. Next, consider a finer modeling more tuned to MU-AMMO operation. Here, we assume that the channel output seen by user-1 post-scheduling can be modeled as ={circumflex over (D)} /2{circumflex over (V)} +{circumflex over (D)} /2({circumflex over (V)} .sup.†).sub. 1+η . (A3) where Q[1] is a semi-unitary matrix whose columns lie in the orthogonal complement of {circumflex over (V)} , i.e. Q .sub.†{circumflex over (V)} =0 and R is a matrix which satisfies the Frobenius-norm constraint ∥R , for some ε >0. Note that the model in (A3) makes the reasonable assumption that U lies in the span of {circumflex over (V)} whose columns represent the preferred directions along which the UE wishes to receive its intended signal. In addition, the model in (A3) accounts for the fact that the component of U.sub. 1 in the orthogonal complement of {circumflex over (V)} can also cause interference to the UE. Let us first consider UE side operations after assuming a post-scheduling model as in (A3). In order to determine the SU-MIMO CSI reports the UE assumes a post-scheduling model as in (A3) in which U.sub. 1=0 and the matrices {circumflex over (D)} /2, {circumflex over (V)} are equal to the diagonal matrix of the dominant unquantized singular values and the dominant unquantized right singular vectors, respectively, of its downlink channel matrix H .sup.†. Note that models (A1) and (A3) are equivalent in terms of UE SU-MIMO CSI reporting. On top of SU-MIMO CSI reports, there are alternatives for configuring the UE to report more CSI. These MU-CQI reporting: The UE is configured to also report additional CQI computed using MU-MIMO rules and possibly an additional PMI. To compute MU-CQI corresponding to a precoder G , the UE assumes a post-scheduling model as in (A3) in which {circumflex over (D)} /2, {circumflex over (V)} are equal to the diagonal matrix of the dominant unquantized singular values and the dominant unquantized right singular vectors, respectively, of its downlink channel matrix. It sets U and assumes that the columns of U.sub. 1 are isotropically distributed in the subspace defined by I-G .sup.† (orthogonal complement of G ). In addition it assumes Q =0 which is reasonable in this case since {circumflex over (V)} is taken to contain all the unquantized dominant singular vectors so no significant interference can be received from signals in its orthogonal complement. Then, to compute MU-SINRs the UE can be configured to assume a particular number of columns in U.sub. 1 and either an equal power per scheduled stream or a non-uniform power allocation in which a certain fraction of EPRE is shared equally among all columns of U.sub. 1 with the remaining fraction being shared equally among all columns in U Enhanced CSI reporting (SU-MIMO CSI and residual error): The UE can be configured for enhanced CSI reporting. Suppose that using SU-MIMO rules the UE determined a precoder G of a preferred rank r and the corresponding quantized SINRs {SI{circumflex over (N)}R . In order to determine the residual error, the UE assumes a post-scheduling model as in (A3) in which D ^ 1 = r 1 ρ 1 diag { SI N ^ R 1 1 , , SI N ^ R 1 r 1 } ##EQU00005## {circumflex over (V)} . Then let P .sup.† denote the projection matrix whose range is the orthogonal complement of G . Let us refer to the matrix E as the (normalized) residual error matrix and the matrix C as the residual error correlation matrix and note that C ={circumflex over (D)} F.- sub.1.sup.†{circumflex over (D)} /2. The UE can be configured to report some approximation of either the residual error matrix or the residual error correlation matrix. These include: Quantizing and reporting the dominant diagonal values of R along with the corresponding columns in Q Quantizing and reporting the diagonal values of C [1] [0060] Quantizing and reporting only the trace of C , ε .s- up.∥H .sup.†{circumflex over (D)} ) which can be thought of as the normalized total residual error. Let us consider the possible eNB (a.k.a base station) side operations which involve the model in (A3), i.e. at-least one of the following two cases holds true: The UE reports some CSI assuming a post-scheduling model as in (A3) or the eNB assumes a post-scheduling model as in (A3) for SINR approximation in the case of UE pairing. We first illustrate one instance of how the base station can utilize the model in (A3) along with the enhanced CSI UE report in which the user feedsback SU CSI report along with the normalized total residual error ε . Further, for simplicity let us assume that the base station considers the practically important MU-MIMO configuration, which is co-scheduling a user-pair with one stream per-user so that both U and U.sub. 1=u.sub. 1 are rank-1 vectors. Suppose that the UE 1 reports the SU-MIMO PMI G of rank r and CQI(s) (which are mapped to the SINRs {SI{circumflex over (N)}R , . . . , SI{circumflex over (N)}R }), along with the normalized total residual error ε . Then using the model in (A3), at the base station end we set {circumflex over (V)} and the matrix {circumflex over (D)} to be 1 ρ 1 diag { SI N ^ R 1 1 , , SI N ^ R 1 r 1 } . ##EQU00006## that now R[1] is not known (except for the fact that tr(R ) and Q is known to lie in the subspace determined by I-G .sup.†. Without loss of generality, we can assume Q to be a deterministic M×(M-r ) semi-unitary matrix whose columns are the basis of the orthogonal complement of G . To obtain a conservative SINR estimate the base station can assume that the UE employs a simple MRC receiver, i.e., user-1 is assumed to use the linear combiner u.sup.†G {circumflex over (D)} /2 on the model in (A3). In addition, we compute the worst-case SINR, obtained by minimizing the SINR over all choices of (M-r matrices R under the constraint that tr(R . Now the worst-case SINR can be expressed as: min R 1 .di-elect cons. M - r 1 × r 1 : R 1 F 2 ≦ ε 1 2 u 1 † G ^ 1 D ^ 1 1 / 2 4 u 1 † G ^ 1 D ^ 1 1 / 2 2 + u 1 † G ^ 1 D ^ 1 ( G ^ 1 † + R 1 † Q 1 † ) u 1 _ 2 ( A4 ) ##EQU00007## which can be simplified as u 1 † G ^ 1 D ^ 1 1 / 2 4 u 1 † G ^ 1 D ^ 1 1 / 2 2 + ( u 1 † G ^ 1 D ^ 1 G ^ 1 † u 1 _ + ε 1 u 1 † G ^ 1 D ^ 1 Q 1 † u 1 _ ) 2 ( A5 ) ##EQU00008## Note that in case zero-forcing (ZF) transmit precoding is used (5) further simplifies to 1 † G ^ 1 D ^ 1 1 / 2 4 u 1 † G ^ 1 D ^ 1 1 / 2 2 + ( ε 1 u 1 † G ^ 1 D ^ 1 Q 1 † u 1 _ ) 2 ( A6 ) ##EQU00009## Several other combinations are possible, some of which are highlighted below: The UE feedsback SU CSI (comprising of a PMI G of rank r and CQI(s) (which are mapped to the SINRs {SI{circumflex over (N)}R , . . . , SI{circumflex over (N)}R assuming a post-scheduling model as in (A1). The eNB however assumes a post-scheduling model as in (A3) in which it fixes D ^ 1 = r 1 ρ 1 diag { SI N ^ R 1 1 , , SI N ^ R 1 r 1 } ##EQU00010## {circumflex over (V)} . Note that now R is not known and Q is only known to lie in the subspace determined by I-G .sup.†. The eNB can assume a certain receiver structure at the UE (typically either a linear MMSE or a MRC receiver). Note that in either case the covariance matrix of the (intra-cell) interfere is given by S ={circumflex over (D)} /2({circumflex over (V)} .sup.†)U.sub. 1U +{circumflex over (V)} ){circumflex over (D)} /2 in which E in particular is not known. The eNB can adopt one of two approaches. In the first one, it can impose a suitable distribution on E (based possibly on past CSI and ACK/NACKs received from that user) and then compute an expected covariance matrix E[S ]. One example (supposing M-r ) is one where Q is a random M×r matrix whose columns are the isotropically distributed in the orthogonal complement of G and R I where ε' is a constant selected based on past CSI and ACK/NACKs received from user 1. Then it can determine SINRs using the known formulas for the MRC and MMSE receivers over a linear model ={circumflex over (D)} /2{circumflex over (V)}.sup.†U +{tilde over (η)} , (A7) but where {tilde over (η)} ˜CN(0, I+E[S ]) is the independent additive noise vector. In the second approach the eNB can assume S to be an unknown but deterministic matrix which lies in a bounded region. The bounded region can itself be defined based possibly on past CSI and ACK/NACKs received from that user. An example of such a region would be one comprising of all S matrices such that S ={circumflex over (D)} /2({circumflex over (V)} .sup.†)U 1U.sub. 1.sup.†(Q +{circumflex over (V)} ){circumflex over (D)} /2 where Q is a deterministic M×(M-r ) matrix whose columns are the basis of the orthogonal complement of G . R is any (M-r matrix satisfying tr(R and where ε is a a constant selected based on past CSI and ACK/NACKs received from user 1. Then it can determine worst case SINRs for either MMSE or MRC receivers by minimizing the respective SINRs over all matrices in the defined bounded region. The UE feedsback SU CSI along with additional MU-CQI(s) and possibly an MU-PMI. Suppose that based on the received feedback the eNB can determine a PMI G of rank r and corresponding MU-SINRs {SI{circumflex over (N)}R , . . . , SI{circumflex over (N)}R }). It can then assume a post-scheduling model as in (A1) in which it fixes {circumflex over (V)} and either sets D ^ 1 = r 1 αρ 1 diag { SI N ^ R 1 1 , , SI N ^ R 1 r 1 } ##EQU00011## ( in the case UE-1 is configured to assume that a fraction α of the EPRE is shared equally among desired r streams) or ^ 1 = S ρ 1 diag { SI N ^ R 1 1 , , SI N ^ R 1 r 1 } ##EQU00012## (in the case UE-1 is configured to assume that the EPRE is shared equally among S co-scheduled streams). Note that since all variables in this model (apart from the additive noise) are known, the eNB can compute SINRs using known formulas for the MRC and MMSE receivers. The UE feedsback SU CSI (comprising of a PMI G of rank r and CQI(s) (which are mapped to the SINRs {SI{circumflex over (N)}R , . . . , SI{circumflex over (N)}R }) along with additional residual error information assuming a post-scheduling model as in (A3). The eNB also assumes a post-scheduling model as in (A3) in which in which it fixes D ^ 1 = r ρ 1 diag { SI N ^ R 1 1 , , SI N ^ R 1 r 1 } ##EQU00013## {circumflex over (V)} . Depending on the type of residual error feedback, the information that the eNB may deduce about E can range from a full approximation in which case the eNB may regard E to be equal to a deterministic known matrix E to the case where only diag{C } or tr(C ) is known. The eNB can use the two aforementioned approaches assuming either MMSE or MRC receiver at the UE. In particular, the eNB can regard S ={circumflex over (D)} /2({circumflex over (V)} .sup.†)U.sub. 1U.sub. 1.sup.†(E +{circumflex over (V)} ){circumflex over (D)} /2 as a random matrix drawn using a suitable distribution on E or the eNB can regard S to be an unknown but deterministic matrix which lies in a bounded region. The bounded region or the imposed distribution can be based on past CSI and ACK/NACKs received from that user and may comply with the information that the eNB can deduce about E from the UE's current feedback. 2 Simulation Results We now evaluate the MU-MIMO performance with the different types of channel reports and the enhancement methods via system level simulations. The simulation parameters are summarized in Table A1. 2.1 Performance of MU-MIMO with SU CSI Report and Enhanced CSI Report The cell average and the 5% cell edge spectral efficiencies of MU-MIMO with SU reports for various settings are provided in Table A2. The SU-MIMO performance is also included for comparisons. The ZF transmit precoding is employed for all MU-MIMO transmissions. We can see that without applying any scheduler optimization techniques, the MU-MIMO with SU reports performs even worse than the SU-MIMO. With simple -4 dB SINR offset to compensate for the over optimistic SU-MIMO reports, the performance is improved significantly -US-00001 TABLE A1 Simulation Parameters Parameter Assumption Deployment scenario IMT Urban Micro (UMi) Duplex method and bandwidth FDD: 10 MHz for downlink Cell layout Hex grid 19 sites, 3 cells/ site Transmission power at BS 46 dBm Number of users per sector 10 Network synchronization Synchronized Antenna configuration (eNB) 4 TX co-polarized ant., 0.5-λ spacing Antenna configuration (user) 2 RX co-polarized ant., 0.5-λ spacing Downlink transmission scheme MU-MIMO: Max 2 users/RB; Each user can have rank 1 or 2 Codebook Rel. 8 codebook Downlink scheduler PF in time and frequency Scheduling granularity: 5 RBs Feedback assumptions 5 ms periodicity and 4 ms delay; Sub-band CQI and PMI feedback without errors. Sub-band granularity: 5 RBs Downlink HARQ scheme Chase Combining Downlink receiver type LMMSE Channel estimation error NA Feedback channel error NA Control channel and reference 3 OFDM symbols for control; signal overhead Used TBS tables in TS 36.213 but is still below the SU -MIMO mark. We then impose a rank restriction, i.e., r =1 on all active users via codebook subset restriction. Considering SU reporting from all users, we incorporate a user pooling in the scheduler in which only users with a good average SNR are eligible for pairing. This helps to realize the benefit of MU-MIMO with the average spectral efficiency gain being 11.5%. Then, to obtain an understanding of the gains that can be achieved via enhanced CSI reporting, we consider the case when each user reports a normalized total residual error in addition to the SU-MIMO CSI report. At the base station we modeled the post-scheduling user received output as (A3) and considered the MRC SINR approximation for rate matching (6). To obtain an initial result, a common value of ε was used to obtain SINR approximations for any choice of pairing. The resulting the -US-00002 TABLE A2 Spectral e±ciency of MU-MIMO with near orthogonal transmit precoding with zero-forcing (ZF); SU feedback or enhanced CSI feedback by the users. MU-MIMO/SU-MIMO cell average 5% cell-edge SU-MIMO r = 2 2.1488 0.0679 without SINR offset r = 2 1.49 0.0681 SINR offset r = 2 1.922 0.0698 SINR offset plus pooling r = 1 2.3964 (11.5%) 0.0687 (1.2%) MRC SINR approx. r = 1 2.5141 (17.0%) 0.0828 (21.9%) Relative percentage gains are over SU-MIMO. spectral efficiency of MU -MIMO is 17% better than that of SU-MIIVIO. This demonstrates that substantial gains can be possible via the enhanced CSI reporting and improved SINR approximation. 2.2 Performance of MU-MIMO with MU Table A3 provides the cell average and 5% cell-edge spectral efficiencies of MU-MIMO with various CSI reporting configurations involving MU-CQI feedback. In particular, we consider the scenario when all users report PMI and CQI(s) determined using MU-MIMO rules. Also, considered is a scenario in which high geometry (HG) users (whose average SNR is above a threshold) report complete MU and SU CSI reports to the base station whereas the remaining users feedback only SU CSI reports. The resulting cell spectral efficiency becomes 2.694 bit with the cost of a significant increase in the feedback signaling overhead. A more reasonable alternative is one where the SU CSI and MU CQI is obtained from HG users and the resulting the spectral efficiency is 2.6814. Note that the performance degradation compared to the full reporting by HG users is less than 0.5% and the gain over SU-MIMO is an impressive 24.8%. -US-00003 TABLE A3 Spectral e±ciency of MU-MIMO with near orthogonal transmit precoding with zero-forcing (ZF); Long-term SNR (Geometry) based user pooling with SU-report by low geometry users; Rank-1 codebook restriction imposed on all users. Type of reports and user pooling Average Cell SE 5% Cell-edge MU report by all users 2.3321 (8.5%) 0.0734 MU + SU Report by HG users 2.694 (25.4%) 0.0963 SU report + MU-CQI by HG users 2.6814 (24.8%) 0.0951 Relative percentage gains are over SU-MIMO. Further System Details B 1 Related MU-MIMO Operation The key hurdle that needs to be overcome in order to realize optimal MU-MIMO gains is the difficulty in modeling the received channel output seen by a user post-scheduling. While computing its CSI report, the user has an un-quantized estimate of its downlink channel but does not know the transmit precoder that will be employed by the base-station. On the other hand, the base station is free to select any transmit precoder but has to rely on the quantized CSI reported by the active users. To illustrate this, we consider a user of interest, say user-1, and model its received observations as , (B1) where H[1] ×M denotes the channel matrix, with N, M being the number of receive antennas at the user and the number of transmit antennas at the eNB, respectively. u is the additive noise which assumed to be spatially white and x is the signal transmitted by the eNB. In the usual SU-MIMO CSI reporting the user estimates ρ , where ρ is the EPRE configured for the UE-1 and determines a desired precoder matrix {circumflex over (V)} of rank r after assuming that no other user will be co-scheduled with it. As a byproduct, it also determines a linear filter F and r SINRs, {SINR . The understanding is that if the base station transmits using a transmit precoder ρ 1 r 1 V ^ 1 , ##EQU00014## then the effective SINR seen by the UE (after filtering using the filter F to remove interference among columns of H .sup.†{circumflex over (V)} ) for the z layer (sent along the z column of {circumflex over (V)} ) will be SINR . Mathematically, the filtered received observation vector, under SU-MIMO transmission, can be modeled as 1 = F 1 z 1 = ρ 1 r 1 F 1 H 1 † V ^ 1 s 1 + η 1 , ( B2 ) ##EQU00015## where s[1] is the symbol vector containing r normalized QAM symbols and where ( ρ 1 r 1 F 1 H 1 † V ^ 1 ) = diag { SINR 1 1 , , SINR 1 r 1 } . ##EQU00016## The user feedsback the PMI {circumflex over (V)} and quantized SINRs {SI{circumflex over (N)}R to the eNB. The eNB obtains ^ 1 and D ^ 1 = r 1 ρ 1 diag { SI N ^ R 1 1 , , SI N ^ R 1 r 1 } ##EQU00017## based on the user 's SU-MIMO CSI report. For SU-MIMO transmission, the eNB assumes a post-scheduling model for user-1 by approximating (B1) as ≠{circumflex over (D)} /2{circumflex over (V)} , (B3) is assumed to a spatially white noise vector and U denotes the transmit precoder along which symbols to user-1 are sent. Furthermore, an approach quite popular in MU-MIMO studies is to employ the following model for the received output seen by user-1, when it is co-scheduled with other users in an MU-MIMO transmission: ={circumflex over (D)} /2{circumflex over (V)} +{circumflex over (D)} /2{circumflex over (V)} .sup.†U.sub. 1s.sub. 1+η , tm (B4) where hd 1 contains all the remaining columns of the transmit precoder used for the co-scheduled streams. Letting A=[U , U.sub. 1] denote the MU-MIMO transmit precoding matrix, with rank (U , the base-station can obtain the following approximation for the SINRs seen by user-1 post-scheduling. si n ^ r 1 i = α ^ 1 i 1 - α ^ 1 i , α ^ 1 i = [ ( I + A † S ^ 1 A ) - 1 A † S ^ 1 A ] i , i , 1 ≦ i ≦ r 1 ' , ( B5 ) ##EQU00018## where S[1] {circumflex over (V)} {circumflex over (D)} {circumflex over (V)} .sup.†. Since this SINR approximation is obtained by ignoring the component of the user channel that lies in the orthogonal complement of {circumflex over (V)} , it is an over-estimation and can in-fact degrade system performance without appropriate compensation. 2 Enhanced MU-MIMO Operation The user, when configured by the eNB, reports SU-MIMO CSI plus a residual error term. The eNB can configure a user (to report the additional feedback) in a semi-static manner. We consider a simple form of residual error referred to as the residual error norm. Then, using SU-MIMO rules the user first determines a PMI {circumflex over (V)} Of some rank r along with r quantized SINRs {SI{circumflex over (N)}R . Note that r can be determined by the user or it can be enforced by the eNB via codebook subset restriction. The residual error norm is determined by the user as {tilde over (ε)} = {square root over (tr(F hd 1.sup.†P .sup.†))}, (B5) where tr (.) denotes the trace operation and P =(I-{circumflex over (V)} {circumflex over (V)} .sup.†) is a projection matrix. Note that {tilde over (ε)} represents the residual total energy in the component of the filtered channel that lies in the orthogonal complement of the reported precoder {circumflex over (V)} . The user reports the usual SU-MIMO CSI along with the residual error norm {tilde over (ε)} or a normalized residual error norm ε computed using = {square root over (tr(F .sup.†{tilde over (D)} ))}, (B7) {tilde over (D)} =diag{SI{circumflex over (N)}R , . . . , SI{circumflex over (N)}R The eNB can use the residual error norms reported by the users to determine accurate SINRs for any choice of user pairing in MU-MIMO. To achieve this, it employs a finer approximation of the filtered channel matrix (F .sup.†) of user-1 given by {circumflex over (D)} /2({circumflex over (V)} .sup.†), (B8) where Q[1] is a semi-unitary matrix whose columns lie in the orthogonal complement of {circumflex over (V)} , i.e. Q .sup.†{circumflex over (V)} =0 and R is a matrix which satisfies the Frobenius-norm constraint 1 F 2 ≦ ρ 1 r 1 ε 1 2 , ##EQU00019## >0 is the normalized residual error norm reported by user-1. Suppose the transmit precoder U is parsed as U=[U , U.sub. 1]. For a well designed transmit precoder, the eNB can make the reasonable assumption that U (almost) lies in the span of {circumflex over (V)} whose columns represent the preferred directions along which user-1 wishes to receive its intended signal (so that Q ≠0). Then, a model more tuned to MU-MIMO operation can be obtained in which the channel output seen by user-1 post MU-MIMO scheduling is modeled as ={circumflex over (D)} /2{circumflex over (V)} +{circumflex over (D)} /2({circumflex over (V)} .sup.†)U.sub. 1s.sub. 1+η , (B9) The model in (B9) accounts for the fact that the component of U.sub. 1 in the orthogonal complement of {circumflex over (V)} can also cause interference to the UE. Notice that when only SU-MIMO CSI along with the normalized residual error norm is reported by the users, in the model in (B9) the eNB can only infer that the semi-unitary matrix Q lies in the subspace determined by I-{circumflex over (V)} {circumflex over (V)} .sup.† and R is also not known except for the fact that ( R 1 † R 1 ) = ρ 1 r 1 ε 1 2 . ##EQU00020## For brevity, we illustrate one instance of how the eNB can utilize the model in (B9) for MU-MIMO SINR computation by considering a practically important MU-MIMO configuration, which is co-scheduling a user-pair with one stream per-user so that both U and U.sub. 1=u.sub. 1 are rank-1 vectors. Using the model in (B9), we will compute the worst-case SINR obtained by minimizing the SINR, over all feasible choices of R , Q . Without loss of generality, we assume Q to be a deterministic M×(M-r ) semi-unitary matrix whose columns are the basis of the orthogonal complement of V and consider all possible (M-r matrices R satisfying the constraint that ( R 1 † R 1 ) ≦ ρ 1 r 1 ε 1 2 . ##EQU00021## Further, to obtain a conservative SINR estimate, the eNB can assume that the UE employs a simple MRC receiver, i.e., user-1 is assumed to use the linear combiner u .sup.†{circumflex over (V)} {circumflex over (D)} /2 on the model in (B9). Then, the worst-case SINR can he expressed as: min R 1 .di-elect cons. M - r 1 × r 1 : R 1 F 2 ≦ ρ 1 r 1 ε 1 2 u 1 † V ^ 1 D ^ 1 1 / 2 4 u 1 † V ^ 1 D ^ 1 1 / 2 2 + u 1 † V ^ 1 D ^ 1 ( V ^ 1 † + R 1 † Q 1 † ) u 1 _ 2 ( B10 ) ##EQU00022## which can be simplified as u 1 † V ^ 1 D ^ 1 1 / 2 4 u 1 † V ^ 1 D ^ 1 1 / 2 2 + ( u 1 † V ^ 1 D ^ 1 V ^ 1 † u 1 _ + ρ 1 r 1 ε 1 u 1 † V ^ 1 D ^ 1 Q 1 † u 1 _ ) 2 ( B11 ) ##EQU00023## Note that in case zero-forcing (ZF) transmit precoding is used (11) further simplifies to 1 † V ^ 1 D ^ 1 1 / 2 4 u 1 † V ^ 1 D ^ 1 1 / 2 2 + ( ρ 1 r 1 ε 1 u 1 † V ^ 1 D ^ 1 u 1 _ ) 2 . ( B12 ) ##EQU00024## -US-00004 TABLE B1 Simulation Parameters Parameter Assumption Deployment scenario IMT Urban Micro (UMi) and Urban Macro (UMa) Duplex method and bandwidth FDD: 10 MHz for downlink Cell layout Hex grid 19 sites, 3 cells/site Transmission power at BS 46 dBm Number of users per sector 10 Network synchronization Synchronized Antenna configuration (eNB) 4 TX cross-polarized ant., 0.5-λ spacing Antenna configuration (user) 2 RX cross-polarized ant. Downlink transmission scheme Dynamic SU/MU-MIMO scheduling: MU-MIMO pairing: Max 2 users/RB; Codebook Rel. 8 codebook Downlink scheduler PF in time and frequency Scheduling granularity: 5 RBs Feedback assumptions 5 ms periodicity and 4 ms delay; Sub-band CQI and PMI feedback without errors. Sub-band granularity: 5 RBs Downlink HARQ scheme Chase Combining Downlink receiver type LMMSE Channel estimation error NA Feedback channel error NA Control channel and reference 3 OFDM symbols for control; signal overhead Used TBS tables in TS 36.213 3 Simulation Results We now evaluate the MU-MIMO performance with the different types of channel reports and enhancement methods via system level simulations. 3.1 Performance of MU-MIMO in Homogenous Networks We first consider a homogenous network for which the simulation parameters are summarized in Table B1. The cell average and the 5% cell edge spectral efficiencies of baseline scheme with SU-MIMO CSI user reports are provided in Table B2. The ZF transmit precoding is employed for all MU-MIMO transmissions. Also included are the spectral efficiencies for the -US-00005 TABLE B2 Spectral efficiency of MU-MIMO with near orthogonal transmit precoding with zero-forcing (ZF); Baseline SU-MIMO feedback or enhanced CSI feedback by the users. MU-MIMO/SU-MIMO cell average 5% cell-edge Baseline r = 2 2.3403 0.0621 Enhanced feedback r = 1 2.478 (5.88%) 0.0743 Enhanced feedback 2.409 (2.94%) 0.0705 (%) SU-MIMO plus rank-1 enhanced 2.5352 (8.33%) -- feedback Relative percentage gains are over the baseline scheme. The channel model is ITU Urban Micro (UMi). case when a rank restriction , i.e., r =1 is imposed on all active users via codebook subset restriction. Each user then reports its enhanced feedback including SU-MIMO CSI and the corresponding normalized residual error norm. Next, we consider the case when the rank one restriction is removed and each user first determines and reports its SU-MIMO CSI (for the rank it considers best) followed by the normalized residual error norm. Note that in this case at the eNB scheduler we fix each user's transmission rank to be equal to its reported rank, i.e., if a user has reported rank-2 (rank-1), it will be served using rank-2 (rank-1) if scheduled. This restriction on scheduling flexibility limits the gains. Finally, we consider the case when each user determines and reports its SU-MIMO CSI (for the rank it considers best). Then, if the determined rank is one, it reports the normalized residual error norm. However, if the determined rank is two, it determines and reports a rank-1 precoder along with the corresponding normalized residual error norm. Notice that this form of enhanced feedback (referred to in Table B2 as SU-MIMO-plus-rank-1 enhanced feedback) allows for a more substantial system throughput gain. 3.2 Performance of MU-MIMO in Heterogenous Networks We now consider a heterogenous network for which the simulation parameters are summarized in Table B3. Table B4 provides the cell average and 5% celhdge spectral efficiencies of both SU-MIMO and MU-MIMO. In order to obtain the MU-MIMO results we imposed a rank-1 codebook restriction on all users. Further, each user was configured to report a normalized residual error norm in addition to its SU-MIMO CSI report. We modeled the post-scheduling user received output as (B9) and considered the MRC SINR approximation (B12). No additional user pooling or SINR offset or OLLA was applied. We note that while more modest gains are obtained using residual error feedback, these gains are robust and can improve with other forms for enhanced feedback. 4 Appendix: More Enhanced User Feedback We first note that the residual error, i.e., the component of the filtered user channel F .sup.† in the orthogonal complement of {circumflex over (V)} is given by (I-{circumflex over (V)} {circumflex over (V)} .sup.†. After normalization using {tilde over (D)}, this component becomes (I-{circumflex over (V)} {circumflex over (V)} .sup.†{tilde over (D)} /2. The user reports {circumflex over (V)} as well as {tilde over (D)}. In addition, the user can report some information about the normalized component in the orthogonal complement (normalized residual error). As aforementioned, a simple option is to report the normalized residual error norm = {square root over (tr(F .sup.†{tilde over (D)} ))}. (B13) More involved options can enable even more accurate SINR computation at the eNB for any choice of user pairing in MU-MIMO. These include the following: User-1 obtains the QR decomposition of (I-{circumflex over (V)} {circumflex over (V)} .sup.†{tilde over (D)} /2 given by (I-{circumflex over (V)} {circumflex over (V)} .sup.†{tilde over (D)} , (B14) where Q is a semi-unitary matrix whose columns lie in the orthogonal complement of {circumflex over (V)} , i.e. Q' .sup.†{circumflex over (V)} =0 and R' is a matrix which satisfies the Frobenius-norm constraint ∥R'∥ l =ε , where ε is the normalized residual error norm. Notice that the matrix Q' in (14) is the same as Q in (B9), whereas 1 = ρ 1 r 1 R 1 ' . ##EQU00025## , the user-1 can report the first few largest diagonal values of R' along with the corresponding columns of Q after quantizing them. In addition, it can also report the normalized residual error norm ε . The number of diagonal values of R' to be reported can be configured by the eNB or the user can report all diagonal values greater than a threshold specified by the eNB. The eNB receives this report and employs it for SINR computation. In another form of residual error feedback the user can obtain the singular value de-composition of (I-{circumflex over (V)} {circumflex over (V)} .sup.†{tilde over (D)} /2 given by (I-{circumflex over (V)} {circumflex over (V)} .sup.†{tilde over (D)} {tilde over (S)} {tilde over (W)} .sup.†, (B15) where [1] are semi-unitary and unitary matrices, respectively, and the diagonal values of {tilde over (S)} are the singular values. Then, the user-1 can report the first few largest singular values in {tilde over (S)} along with the corresponding columns of quantizing them. In addition, it can also report the normalized residual error norm ε . The number of singular values to be reported can be configured by the eNB or the user can report all singular values greater than a threshold specified by the eNB. The eNB receives this report and employs it for SINR computation. 5 Appendix: Signaling Enhanced User Feedback In each channel state information (CSI) reporting interval the user reports its CSI. The eNB can configure a user for periodic CSI reporting and fix the periodicity and offset which together determine the exact sequence of intervals for which the user may report its CSI. This sequence will be henceforth referred to as the sequence for CSI reporting. In order to obtain the benefits of accurate MU-MIMO SINR computation without excessive feedback overhead, the eNB can multiplex intervals in which the user reports enhanced feedback with the ones in which it reports only its SU-MIMO CSI feedback. The periodicity and offset of the sub-sequence formed by intervals designated for enhanced feedback within the sequence for CSI reporting can be configured by the eNB, based on factors such as user mobility. Then, we have the following points that are of particular interest: In the sequence for CSI reporting, in the intervals designated for only SU-MIMO CSI feedback, the user reports its preferred precoder matrix {circumflex over (V)} and the corresponding quantized SINRs (determined using SU-MIMO rules). The user can select its preferred precoder matrix from a codebook of matrices under the constraint that it may be of a particular rank specified by the eNB or belong to a codebook subset specified by the eNB, or it can freely choose its preferred precoder matrix if no restrictions have been imposed by the eNB. In each interval designated for enhanced feedback, the user can first determine its SU-MIMO CSI comprising of a precoder {circumflex over (V)} and corresponding SINRs using SU-MIMO rules. As aforementioned, the user follows the restriction (if any) on rank or codebook subset that has been imposed by the eNB. The user uses {circumflex over and {tilde over (D)} (formed by the corresponding quantized SINRs) to determine any one of the forms of the residual error feedback described above. The particular feedback form will be configured by the eNB. The user then reports its SU-MIMO CSI along with the particular residual error feedback form. Differential feedback can be exploited in reporting the SU-MIMO CSI and the residual error feedback form. For instance, if the residual error feedback form consists of only the quantized residual error norm, then the user can report the SU-MIMO CSI and the difference of the largest (or smallest) reported SU-MIMO SINR, and the residual error norm. The user adopted convention for differential feedback is also configured by the eNB allowing it to reconstruct the residual error feedback form. Alternatively, in each interval designated for enhanced feedback, the user can first determine its SU-MIMO CSI under a restriction on rank or codebook subset that has been imposed by the eNB, where the said restriction applies only to intervals designated for enhanced feedback. The eNB can freely choose any restriction for the other intervals in the sequence for CSI reporting. The user then uses the determined precoder {circumflex over (V)} and {tilde over (D)} (formed by the corresponding quantized SINRs) to determine the eNB configured residual error feedback form and reports it along with its SU-MIMO CSI. Another option for each interval designated for enhanced feedback is also possible. Here the rank of the precoder {circumflex over (V)} to be determined via SU-MIMO rules, can itself be a function of the previous S ranks of the precoders selected by the user in the previous S intervals designated for only SU-MIMO CSI feedback. The function is pre-defined and known to both the user and the eNB. An example is where S=1 and the rule is that rank selected for the current interval designated for enhanced feedback is equal to one when the rank in the previous interval designated for only SU-MIMO CSI feedback is also equal to one; and the rank in the current interval is two otherwise. Alternatively, {circumflex over (V)} itself can be a function of the previous S precoders (and their corresponding SINRs) selected by the user in the previous S intervals designated for only SU-MIMO CSI feedback. The function is pre-defined and known to both the user and the eNB. In this case {circumflex over (V)} need not be reported by the user since it can be deduced by the eNB. Note that special cases of the sequence for CSI reporting described above, are the baseline case where each interval in the sequence is designated for SU-MIMO CSI only feedback and the one where each interval in the sequence is designated for enhanced feedback. In order to obtain full benefits of accurate MU-MIMO SINR computation and scheduling flexibility, we can combine SU-MIMO CSI reporting and enhanced CSI reporting. Then, we have the following points of particular interest: In each interval, the user can first determine its preferred precoder matrix G and the corresponding quantized SINRs using SU-MIMO rules. The user can select its preferred precoder matrix under the constraint that it may be of a particular rank specified by the eNB or belong to a codebook subset specified by the eNB, or it can freely choose its preferred precoder matrix if no restrictions have been imposed by the eNB. Next, in the same interval the user can determine another precoder matrix {circumflex over (V)} and corresponding SINRs using SU-MIMO rules. The eNB can set a separate restriction on rank or codebook subset which {circumflex over (V)} may obey. Notice in this case that if the rank enforced on {circumflex over (V)} happens to be equal to that of G , then {circumflex over (V)} and its corresponding quantized SINRs need not be reported since they are identical to G and its corresponding quantized SINRs, respectively, since both the pairs are determined using SU-MIMO rules. Alternatively, the rank of precoder {circumflex over (V)} can itself be a function of the rank of G . The function is pre-de fined and known to both the user and the eNB. An example rule is where rank of {circumflex over (V)} may be equal to one when the rank of G is one; and the rank of {circumflex over (V)} is two otherwise. In either case, using {circumflex over (V)} along with the corresponding SINRs, the user determines the eNB configured residual error feedback form. The user feedback report now includes G and corresponding quantized SINRs as well as {circumflex over (V)} , its corresponding quantized SINRs and the residual error feedback form. Again, differential feedback can be exploited in reporting this CSI. Alternatively, {circumflex over (V)} itself can be a function of G and the SINRs corresponding to G and thus need riot be reported since the function is pre-defined and known to both the user and the eNB. For instance, {circumflex over (V)} can be the column of G for which the corresponding SINR is the largest among all SINRs corresponding to G . Note here that if {circumflex over (V)} is identical to G then even the quantized SINRs corresponding to {circumflex over (V)} need not be reported since they are identical, respectively, to the quantized SINRs corresponding to G TABLE B3 Simulation Parameters: Heterogeneous network with low power RRHs within the macro-cell coverage Parameter Assumption Deployment scenario Scenario 3: Heterogeneous network with low power RRHs within the macrocell coverage - 1 cell with 2 low-power nodes (LPNs) ITU UMa for Macro, UMi for low power node Duplex method and bandwidth FDD: 10 MHz for downlink Cell layout Hex grid 19 sites, 3 cells/site Antenna Height Macro: 25 m; LPN: 10 m Number of users per sector Config4b: 30 Network synchronization Synchronized UE noise figure 9 dB Minimum Distance Macro - RRH/Hotzone: >75 m Macro - UE: >35 m RRH/Hotzone - RRH/Hotzone: >40 m RRH/Hotzone - UE: >10 m Handover margin 1 dB Indoor-outdoor modeling 100% of users are dropped outdoor Antenna configuration (eNB) 4 TX co-pol. ant., 0.5-λ spacing for both Macro Cell and LPN Antenna configuration (user) 2 RX co-pol. ant., 0.5-λ spacing Antenna pattern For macro eNB: 3D, tilt 12 degree. For low-power node: 2D Downlink transmission scheme SU-MIMO: Each user can have rank 1 or 2 MU-MIMO: Max 2 users/RB; Each user can have rank 1 Codebook Rel. 8 codebook Downlink scheduler PF in time and frequency Scheduling granularity: 5 RBs Feedback assumptions 5 ms periodicity and 4 ms delay; Sub-band CQI and PMI feedback without errors. Sub-band granularity: 5 RBs Downlink HARQ scheme Chase Combining Downlink receiver type LMMSE Channel estimation error NA Feedback channel error NA Control channel and reference 3 OFDM symbols for control; signal overhead Used TBS tables in TS 36.213 -US-00007 TABLE B4 Spectral efficiency of SU-MIMO/MU-MIMO in Heterogenous Networks; For MU-MIMO Rank-1 codebook restriction is imposed on all users and enhanced feedback is obtained from all users. MU-MIMO/SU-MIMO Average Cell SE 5% Cell-edge SU-MIMO Overall 2.8621 0.078 SU-MIMO Macro-cell 2.2025 0.0622 SU-MIMO LPN-RRH 3.1919 0.0904 MU-MIMO Overall 3.1526 (10.15%, 5.59%) 0.0813 MU-MIMO Macro-cell 2.5322 (14.97%, 8.54%) 0.0721 MU-MIMO LPN-RRH 3.4628 (8.49%, 4.91%) 0.1036 Relative percentage gains are over SU-MIMO and MU-MIMO without enhanced feedback, respectively. Further System Details C 1 Related MU-MIMO Operation The key hurdle that needs to be overcome in order to realize optimal MU-MIMO gains is the difficulty in modeling the received channel output seen by a user post-scheduling. While computing its CSI report, the user has an un-quantized estimate of its downlink channel but does not know the transmit precoder that will be employed by the base-station. On the other hand, the base station is free to select any transmit precoder but has to rely on the quantized CSI reported by the active users. To illustrate this, we consider a user of interest, say user-1, and model its received observations as , (C1) where H[1] .sup.† ε ×M denotes the channel matrix, with N, M being the number of receive antennas at the user and the number of transmit antennas at the eNB, respectively. μ is the additive noise which assumed to be spatially white and x is the signal transmitted by the eNB. In the usual SU-MIMO CSI reporting the user estimates ρ where ρ is the EPRE configured for the UE-1 and determines a desired precoder matrix {circumflex over (V)} of rank r after assuming that no other user will be co-scheduled with it. As a byproduct, it also determines a linear filter F and r SINRs, {SINR . The understanding is that if the base station transmits using a transmit precoder ρ 1 r 1 V ^ 1 , ##EQU00026## then late SINR seen by the UE (after filtering using the filter F to remove interference among columns of H .sup.†{circumflex over (V)} ) for the i layer (sent along the i column of {circumflex over (V)} ) will be SINR . Mathematically, the filtered received observation vector, under SU-MIMO transmission, can be modeled as 1 = F 1 z 1 = ρ 1 r 1 F 1 H 1 † V ^ 1 s 1 + η 1 , ( C2 ) ##EQU00027## where s[1] is the symbol vector containing r normalized QAM symbols and where ( ρ 1 r 1 F 1 H 1 † V ^ 1 ) = diag { SINR 1 1 , , SINR 1 r 1 } . ##EQU00028## The user feedsback the PMI {circumflex over (V)} and quantized SINRs {SI{circumflex over (N)}R to the eNB. The eNB obtains ^ 1 and D ^ 1 = r 1 ρ 1 diag { SI N ^ R 1 1 , , SI N ^ R 1 r 1 } ##EQU00029## based on the user 's SU-MIMO CSI report. For SU-MIMO transmission, the eNB assumes a post-scheduling model for user-1 by approximating (C1) as ≠{circumflex over (D)} /2{circumflex over (V)} , (C3) is assumed to a spatially white noise vector and U denotes the transmit precoder along which symbols to user-1 are sent. Furthermore, an approach quite popular in MU-MIMO studies is to employ the following model for the received output seen by user-1, when it is co-scheduled with other users in an MU-MIMO transmission: ={tilde over (D)} /2{circumflex over (V)} +{tilde over (D)} /2{circumflex over (V)} .sup.†U.sub. 1s.sub. 1+η , (C4) where U .sub. 1 contains all the remaining columns of the transmit precoder used for the co-scheduled streams. Letting A=[U , U.sub. 1] denote the MU-MIMO transmit precoding matrix, with rank (U , the base-station can obtain the following approximation for the SINRs seen by user-1 post-scheduling. s i n ^ r 1 i = α ^ 1 i 1 - α ^ 1 i , α ^ 1 i = [ ( I + A † S ^ 1 A ) - 1 A † S ^ 1 A ] i , i , 1 ≦ i ≦ r 1 ' , ( C5 ) ##EQU00030## where S[1] {circumflex over (V)} {circumflex over (D)} {circumflex over (V)} .sup.†. Since this SINR, approximation is obtained by ignoring the component of the user channel that lies in the orthogonal complement of {circumflex over (V)}hd 1, it is an over-estimation and can in-fact degrade system performance without appropriate compensation. 2 Enhanced MU-MIMO Operation The user, when configured by the eNB, reports SU-MIMO CSI plus a residual error term. The eNB can configure a user (to report the additional feedback) in a semi-static manner. We consider a simple form of residual error referred to as the residual error norm. Then, using SU-MIMO rules the user first determines a PMI {circumflex over (V)} of some rank r along with r quantized SINRs {SI{circumflex over (N)}R . Note that r can be determined by the user or it can be enforced by the eNB via codebook subset restriction. The residual error norm is determined by the user as {tilde over (ε)} = {square root over (tr(F .sup.†))}, (C6) where tr (.) denotes the trace operation and P =(I-{circumflex over (V)} {circumflex over (V)} .sup.†) is a projection matrix. Note that represents the residual total energy in the component of the filtered channel that lies in the orthogonal complement of the reported precoder {circumflex over (V)} . The user reports the usual SU-MIMO CSI along with the residual error norm r or a normalized residual error norm ε computed using = {square root over (tr(F .sup.†{tilde over (D)} ))}, (C7) {tilde over (D)} =diag{SI{circumflex over (N)}R , . . . , SI{circumflex over (N)}R The eNB can use the residual error norms reported by the users to determine accurate SINRs for any choice of user pairing in MU-MIMO. To achieve this, it employs a finer approximation of the filtered channel matrix (F .sup.†) of user-1 given by {circumflex over (D)} /2({circumflex over (V)} .sup.†), (C8) where Q[1] is a semi-unitary matrix whose columns lie in the orthogonal complement of {circumflex over (V)} , i.e. Q .sup.†{circumflex over (V)} =0 and R is a matrix which satisfies the Frobenius-norm constraint 1 F 2 ≦ ρ 1 r 1 ε 1 2 , ##EQU00031## >0 is the normalized residual error norm reported by user-1. Suppose the transmit precoder U is parsed as U=[U , U.sub. 1]. For a well designed transmit precoder, the eNB can make the reasonable assumption that U (almost) lies in the span of {circumflex over (V)} whose columns represent the preferred directions along which user-1 wishes to receive its intended signal (so that Q ≠0). Then, a model more tuned to MU-MIMO operation can be obtained in which the channel output seen by user-1 post MU-MIMO scheduling is modeled as ={circumflex over (D)} /2{circumflex over (V)} +{circumflex over (D)} /2({circumflex over (V)} .sup.†)U.sub. 1s.sub. 1+η , (C9) The model in (C9)accounts for the fact that the component of U.sub. 1 the orthogonal complement of {circumflex over (V)} can also cause interference to the UE. Notice that when only SU-MIMO CSI along with the normalized residual error norm is reported by the users, in the model in (C9) the eNB can only infer that the semi-unitary matrix Q lies in the subspace determined by I-{circumflex over (V)} {circumflex over (V)} .sup.†and R is also not known except for the fact that ( R 1 † R 1 ) = ρ 1 r 1 ε 1 2 . ##EQU00032## We illustrate an important instance of how the eNB can utilize the model in (C9) for MU-MIMO SINR, computation by considering the practically important MU-MIMO configuration, which is co-scheduling a user-pair. We first consider co-scheduling two users with one stream per-user so that both U and U.sub. 1=u.sub. 1 are rank-1 vectors. Using the model in (C9), we will compute the worst-case SINR obtained by minimizing the SINR over all feasible choices of R , Q . Without loss of generality, we assume Q to be a deterministic M×(M-r ) semi-unitary matrix whose columns are the basis of the orthogonal complement of {circumflex over (V)} and consider all possible (M-r matrices R satisfying the constraint that ( R 1 † R 1 ) ≦ ρ 1 r 1 ε 1 2 . ##EQU00033## , to obtain a conservative SINR estimate, the eNB can assume that the UE employs a simple MRC receiver, i.e., user-1 is assumed to use the linear combiner u .sup.†{circumflex over (V)} {circumflex over (D)} /2 on the model in (C9). Then, the worst-case SINR can be expressed as: min R 1 ε M - r 1 × r 1 : R 1 F 2 ≦ ρ 1 r 1 ε 1 2 u 1 † V ^ 1 D ^ 1 1 / 2 4 u 1 † V ^ 1 D ^ 1 1 / 2 2 + u 1 † V ^ 1 D ^ 1 ( V ^ 1 † + R 1 † Q 1 † ) u 1 _ 2 . ( C10 ) ##EQU00034## Simple manipulations reveal that (C10) is equal to 1 † V ^ 1 D ^ 1 1 / 2 4 u 1 † V ^ 1 D ^ 1 1 / 2 2 + ( u 1 † V ^ 1 D ^ 1 V ^ 1 † u 1 _ + ρ 1 r 1 ε 1 u 1 † V ^ 1 D ^ 1 Q 1 † u 1 _ ) 2 ( C11 ) ##EQU00035## which in turn can be simplified as ( u 1 † V ^ 1 D ^ 1 1 / 2 4 ) u 1 † V ^ 1 D ^ 1 1 / 2 2 + ( u 1 † V ^ 1 D ^ 1 V ^ 1 † u 1 _ + ρ 1 r 1 ε 1 u 1 † V ^ 1 D ^ 1 u 1 _ † ( I - V ^ 1 V ^ 1 † ) u 1 _ ) 2 ( C12 ) ##EQU00036## We next consider co-scheduling two users with one stream for user-1 so that U is a rank-1 vector and two streams for the other user so that U.sub. 1 is a rank-2 matrix. As before, to obtain a conservative SINR estimate, the eNB can assume that the UE employs a simple MRC receiver, and the worst-case SINR can be expressed as: min R 1 ε M - r 1 × r 1 : R 1 F 2 ≦ ρ 1 r 1 ε 1 2 u 1 † V ^ 1 D ^ 1 1 / 2 4 u 1 † V ^ 1 D ^ 1 1 / 2 2 + u 1 † V ^ 1 D ^ 1 ( V ^ 1 † + R 1 † Q 1 † ) U 1 _ 2 . ( C13 ) ##EQU00037## Next let a=u .sup.†{circumflex over (V)} {circumflex over (D)} {circumflex over (V)} .sup.†U.sub. 1 and b=u .sup.†{circumflex over (V)} {circumflex over (D)} and U.sub. 1.sup.†Q .sup.†U.sub. 1=U.sub. 1.sup.†(I-{circumflex over (V)} {circumflex over (V)} .sup.†)U.sub. 1. Let the eigen value decomposition of S be S=EΛE.sup.†, where Λ=diag{λ , xλ } and expand the 1×2 vector b as b=∥b∥[1, 0]A.sup.†, where A is a 2×2 unitary matrix. Then, letting a=[a , a ]=aE, we can show that max R 1 ε M - r 1 × r 1 : R 1 F 2 ≦ ρ 1 r 1 c 1 2 { u 1 † V ^ 1 D ^ 1 ( V ^ 1 † + R 1 † Q 1 † ) U 1 _ 2 } = max x , y .di-elect cons. IR + : x 2 + y 2 ≦ ρ 1 r 1 ε 1 2 { ( a ~ 1 + b λ 1 x ) 2 + ( a ~ 2 + b λ 2 y ) 2 } ( C14 ) ##EQU00038## (C14) is a non-convex optimization problem and letting ∥b∥, c ∥b∥ and ε = ρ 1 r 1 ε 1 ##EQU00039## we approximate (C 14) by {hacek over (ε)}) , (|a {hacek over (ε)}) }. (C15) Using (C15) in (C13) we can obtain an approximate SINR given by 1 † V ^ 1 D ^ 1 1 / 2 4 u 1 † V ^ 1 D ^ 1 1 / 2 2 + max { ( a ~ 1 + c 1 ε ) 2 + a ~ 2 2 , ( a ~ 2 + c 2 ε ) 2 + a ~ 1 2 } . ( C16 ) ##EQU00040## Indeed the steps used to obtain the approximate SINRs in (C12) and (C 16) can be readily extended to obtain the approximate SINRs for all permissible user co-scheduling configurations, all of which may satisfy co-scheduling no more than four streams in total with no more than two streams per-user. -US-00008 TABLE C1 Simulation Parameters for Homogenous Networks Parameter Assumption Deployment scenario IMT Urban Micro (UMi) and Urban Macro (UMa) Duplex method and bandwidth FDD: 10 MHz for downlink Cell layout Hex grid 19 sites, 3 cells/site Transmission power at BS 46 dBm Number of users per sector 10 Network synchronization Synchronized Antenna configuration (eNB) 4 TX cross-polarized ant., 0.5-λ spacing Antenna configuration (user) 2 RX cross-polarized ant. Downlink transmission scheme Dynamic SU/MU-MIMO scheduling: MU-MIMO pairing: Max 2 users/RB; Codebook Rel. 8 codebook Downlink scheduler PF in time and frequency Scheduling granularity: 5 RBs Feedback assumptions 5 ms periodicity and 4 ms delay; Sub-band CQI and PMI feedback without errors. Sub-band granularity: 5 RBs Downlink HARQ scheme Chase Combining Downlink receiver type LMMSE Channel estimation error NA Feedback channel error NA Control channel and reference 3 OFDM symbols for control; signal overhead Used TBS tables in TS 36.213 3 Simulation Results We now evaluate the MU-MIMO performance with the different types of channel reports and enhancement methods via system level simulations. 3.1 Performance of MU-MIMO in Homogenous Networks: Sub-Band CSI feedback We first consider a homogenous network for which the simulation parameters are summarized in Table C1. We emphasize that each user computes and reports one precoding matrix index (PMI) and up-to two CQI(s) for each subband, along with one wideband rank indicator (RI) that is common for all subbands. The cell average and the 5% cell edge spectral efficiencies of baseline scheme with SU-MIMO CSI user reports are provided in Table C2. IMT Urban Micro (UMi) channel model is considered here. The ZF transmit precoding is employed for all MU-MIMO transmissions. Also included are the spectral efficiencies for the case when a rank restriction, i.e., r =1 is imposed on all active users via codebook subset restriction. Each user then reports its enhanced feedback including SU-MIMO CSI and the corresponding per-subband normalized residual error norm. Next, we consider the case when the rank one restriction is removed and each user first determines and reports its SU-MIMO CSI (for the rank it considers best) followed by the per-subband normalized residual error norm. Note that in this case at the eNB scheduler we fix each user's transmission rank to be equal to its reported rank, i.e., if a user has reported rank-2 (rank-1), it will be served using rank-2 (rank-1) if scheduled. This restriction on scheduling flexibility limits the gains. We then consider the case when each user determines and reports its SU-MIMO CSI (for the rank it considers best). Then, if the determined rank is one, it reports the per-subband normalized residual error norm. However, if the determined rank is two, for each subband it determines and reports a rank-1 precoder along with the corresponding normalized residual error norm. Notice that this form of enhanced feedback (referred to in Table C2 as SU-MIMO-plus-rank-1 enhanced feedback) allows for a more substantial system throughput gain. Finally, we consider the case that the user reports its SU-MIMO CSI (for the rank it considers best) followed by the per-subband normalized residual error norm computed for corresponding the reported PMI. At the base station, the scheduler determines the user's transmission rank which could be lower than its reported rank. We can see that with rank override but without the additional per-subband rank-1 PMI feedback, the proposed scheme can still achieve a large gain over the baseline scheme. Note that the cell average performance for this case is even slightly better than the case of SU-MIMO-plus-rank-1 enhanced feedback. Further, no OLLA was applied to any scheme involving enhanced CSI feedback so that the gains Two CQIs per-subband are reported whenever the reported rank is greater than or equal to two and one CQI is reported otherwise. -US-00009 TABLE C2 Spectral efficiency of MU-MIMO with near orthogonal transmit precoding with zero-forcing (ZF); Per-subband SU-MIMO feedback or enhanced CSI feedback by the users. MU-MIMO/SU-MIMO cell average 5% cell-edge Baseline r = 2 2.3576 0.0647 Enhanced feedback r = 1 2.4815 (5.26%) 0.0766 (18.4%) Enhanced feedback (fixed rank) 2.4125 (2.33%) 0.0686 (6.03%) SU-MIMO plus rank-1 enhanced 2.5567 (8.45%) 0.0736 (13.8%) feedback Enhanced feedback (dynamic rank 2.5943 (10.04%) 0.0717 (10.8%) selection) Relative percentage gains are over the baseline scheme. The channel model is ITU Urban Micro (UMi). -US-00010 TABLE C3 Spectral efficiency of MU-MIMO with near orthogonal transmit precoding with zero-forcing (ZF); Per-subband SU-MIMO feedback or enhanced CSI feedback by the users. MU-MIMO/SU-MIMO cell average 5% cell-edge Baseline r = 2 2.2645 0.0654 Enhanced feedback r = 1 2.3689 (4.61%) 0.0780 (19.3%) Enhanced feedback (fixed rank) 2.3376 (3.23%) 0.0736 (12.5%) SU-MIMO pins rank-1 enhanced 2.4552 (8.42%) 0.0774 (18.4%) feedback Enhanced feedback (dynamic rank 2.4753 (9.31%) 0.0756 (15.6%) selection) Relative percentage gains are over the baseline scheme. The channel model is ITU Urban Macro (UMa). obtained are quite robust Similar results are obtained for IMT Urban Macro (UMa) channel model which are provided in Table C3. 3.2 Performance of MU-MIMO in Homogenous Networks: Wide-Band CSI Feedback We again consider a homogenous network for which the simulation parameters are summarized in Table C1 except that now each user computes and reports a wideband PMI, wideband RI along with per-subband CQI(s). .sup.C2For enhanced feedback each user reports one additional wideband normalized residual error norm which is computed using the reported wideband PMI. The cell average and the 5% cell edge spectral efficiencies of baseline scheme with SU-MIMO .sup.C2The RI as well as the PMI are invariant across all subbands. Two CQIs per-subband are reported whenever the reported rank is greater than or equal to two and one CQIs reported otherwise. -US-00011 TABLE C4 Spectral efficiency of MU-MIMO with near orthogonal transmit precoding with zero-forcing (ZF); Wideband SU-MIMO feedback or enhanced CSI feedback by the users. MU-MIMO/SU-MIMO cell average 5% cell-edge Baseline r = 2 2.342 0.0617 Enhanced feedback 2.5639 (9.47%) 0.0664 (7.62%) (subband NREN) r = 2 Enhanced feedback 2.5345 (8.22%) 0.0648 (5%) (wideband Average NREN) Enhanced feedback 2.5459 (8.71%) 0.0657 (6.48%) (wideband Best M = 3 Average NREN) Relative percentage gains are over the baseline scheme. The channel model is ITU Urban Micro (UMi). CSI user reports are provided in Table C 4 considering the IMT Urban Micro (UMi) channel model. The ZF transmit precoding is employed for all MU-MIMO transmissions. Also included are the spectral efficiencies for the case when a rank restriction, i.e., r =1 is imposed on all active users via codebook subset restriction. Next, we consider the case when the rank one restriction is removed and each user first determines and reports its SU-MIMO CSI (for the rank it considers best) followed by the wideband normalized residual error norm (NREN). The wideband NREN is computed as the average of the per sub-band NRENs. At the base station, the scheduler determines the user's transmission rank which could be lower than its reported rank. Finally, we exploit the observation that each user is likely to be scheduled on subbands that it deems to be good. In particular, each user upon computing its SU-MIMO CSI also sorts the subbands in the decreasing order of the per-subband rates (which are determined using the corresponding per subband CQIs) and selects the first M subbands which offer the M largest rates. It then computes a normalized residual error norm for each one of these M subbands and takes their average. This average NREN is then additionally reported to the eNB. In the simulation we have set M=3. We note that substantial gains are obtained even with a wideband normalized residual error norm feedback. Further, no OLLA was applied to any scheme involving enhanced CSI feedback so that the gains obtained are quite robust. Similar results have been observed for the IMT Urban Macro (UMa) channel model which are provided in Table C5. -US-00012 TABLE C5 Spectral efficiency of MU-MIMO with near orthogonal transmit precoding with zero-forcing (ZF); Wideband SU-MIMO feedback or enhanced CSI feedback by the users. MU-MIMO/SU-MIMO cell average 5% cell-edge Baseline r = 2 2.2461 0.0648 Enhanced feedback 2.4494 (9%) 0.0715 (10.34%) (subband NREN) r = 2 Enhanced feedback 2.4136 (7.46%) 0.0696 (7.4%) (wideband Average NREN) Enhanced feedback 2.4397 (8.62%) 0.0726 (12%) (wideband Best M = 3 Average NREN) Relative percentage gains are over the baseline scheme. The channel model is ITU Urban Macro (UMa). 3.3 Performance of MU-MIMO in Heterogenous Networks We now consider a heterogenous network for which the simulation parameters are summarized in Table C6. Table C7 provides the cell average and 5% cell-edge spectral efficiencies of both SU-MIMO and MU-MIMO. In order to obtain the MU-MIMO results we imposed a rank-1 codebook restriction on all users. Further, each user was configured to report a normalized residual error norm in addition to its SU-MIMO CSI report. We modeled the post-scheduling user received output as (C9) and considered the MRC SINR approximation (C12). No additional user pooling or SINR offset or OLLA was applied. We note that while more modest gains are obtained using residual error feedback, these gains are robust and can improve with other forms for enhanced feedback. 4 Appendix: More Enhanced User Feedback We first note that the residual error, i.e., the component of the filtered user channel F .sup.† in the orthogonal complement of {circumflex over (V)} is given by (I-{circumflex over (V)} {circumflex over (V)} .sup.†. After normalization using {tilde over (D)} this component becomes (I -{circumflex over (V)} {circumflex over (V)} .sup.†{tilde over (D)} /2. The user reports {circumflex over (V)} as well as {tilde over (D)}. In addition, the user can report some information about the normalized component in the orthogonal complement (normalized residual error). As aforementioned, a simple option is to report the normalized residual error norm = {square root over (tr(F .sup.†{tilde over (D)} ))}. (C17) More involved options can enable even more accurate SINR computation at the eNB for any choice of user pairing in MU-MIMO. These include the following: User-1 obtains the QR decomposition of (I-{circumflex over (V)} {circumflex over (V)} .sup.†{tilde over (D)} /2 given by (I-{circumflex over (V)} {circumflex over (V)} .sup.†{tilde over (D)} , (C18) where Q' [1] ^M is a semi-unitary matrix whose columns lie in the orthogonal complement of {circumflex over (V)} , i.e. Q' .sup.†{circumflex over (V)} =0 and R' is a matrix which satisfies the Frobenius-norm constraint ∥R' , where ε is the normalized residual error norm. Notice that the matrix Q' in (C18) is the same as Q in (C9), whereas R 1 = ρ 1 r 1 R 1 ' . ##EQU00041## , the user-1 can report the first few largest diagonal values of R' along with the corresponding columns of Q after quantizing them. In addition, it can also report the normalized residual error norm ε . The number of diagonal values of R' to be reported can be configured by the eNB or the user can report all diagonal values greater than a threshold specified by the eNB. The eNB receives this report and employs it for SINR computation. In another form of residual error feedback the user can obtain the singular value decompostion of (I-{circumflex over (V)} {circumflex over (V)} .sup.†{tilde over (D)} /2 given by (I-{circumflex over (V)} {circumflex over (V)} .sup.†{tilde over (D)} {tilde over (S)} {tilde over (W)} .sup.†, (C19) where [1] and {tilde over (W)} are semi-unitary and unitary matrices, respectively, and the diagonal values of {tilde over (S)} are the singular values. Then, the user-1 can report the first few largest singular values in {tilde over (S)} along with the corresponding columns of after quantizing them. In addition, it can also report the normalized residual error norm The number of singular values to be reported can be configured by the eNB or the user can report all singular values greater than a threshold specified by the eNB. The eNB receives this report and employs it for SINR, computation. 5 Appendix: Signaling Enhanced User CSI Feedback In each channel state information (CSI) reporting interval the user reports its CSI. The eNB can configure a user for periodic CSI reporting and fix the periodicity and offset which together determine the exact sequence of intervals for which the user may report its CSI. This sequence will be henceforth referred to as the sequence for CSI reporting. In order to obtain the benefits of accurate MU-MIMO SINR computation without excessive feedback overhead, the eNB can multiplex intervals in which the user reports enhanced feedback with the ones in which it reports only its SU-MIMO CSI feedback. The periodicity and offset of the sub-sequence formed by intervals designated for enhanced feedback within the sequence for CSI reporting can be configured by the eNB, based on factors such as user mobility. Then, we have the following points that are of particular interest: In the sequence for CSI reporting, in the intervals designated for only SU-MIMO CSI feedback, the user reports its preferred precoder matrix {tilde over (V)} and the corresponding quantized SINRs (determined using SU-MIMO rules). The user can select its preferred precoder matrix from a codebook of matrices under the constraint that it may be of a particular rank specified by the eNB or belong to a codebook subset specified by the eNB, or it can freely choose its preferred precoder matrix if no restrictions have been imposed by the eNB. In each interval designated for enhanced feedback, the user can first determine its SU-MIMO CSI comprising of a precoder V and corresponding SINRs using SU-MIMO rules. As aforementioned, the user follows the restriction (if any) on rank or codebook subset that has been imposed by the eNB. The user uses {circumflex over and {tilde over (D)} (formed by the corresponding quantized SINRs) to determine any one of the forms of the residual error feedback described above. The particular feedback form will be configured by the eNB. The user then reports its SU-MIMO CSI along with the particular residual error feedback form. Differential feedback can be exploited in reporting the SU-MIMO CSI and the residual error feedback form. For instance, if the residual error feedback form consists of only the quantized normalized residual error norm, then the user can report the SU-MIMO CSI and the difference of the largest (or smallest) reported SU-MIMO SINR and the residual error norm. The user adopted convention for differential feedback is also configured by the eNB allowing it to reconstruct the residual error feedback form. Alternatively, in each interval designated for enhanced feedback, the user can first determine its SU-MIMO CSI under a restriction on rank or codebook subset that has been imposed by the eNB, where the said restriction applies only to intervals designated for enhanced feedback. The eNB can freely choose any restriction for the other intervals in the sequence for CSI reporting. The user then uses the determined precoder {circumflex over (V)} and {tilde over (D)} (formed by the corresponding quantized SINRs) to determine the eNB configured residual error feedback form and reports it along with its SU-MIMO CSI. Another option for each interval designated for enhanced feedback is also possible. Here the rank of the precoder {circumflex over (V)} to be determined via SU-MIMO rules, can itself be a function of the previous S ranks of the precoders selected by the user in the previous S intervals designated for only SU-MIMO CSI feedback. The function is pre-defined and known to both the user and the eNB. An example is where S=1 and the rule is that rank selected for the current interval designated for enhanced feedback is equal to one when the rank in the previous interval designated for only SU-MIMO CSI feedback is also equal to one; and the rank in the current interval is two otherwise. Alternatively, {circumflex over (V)} itself can he a function of the previous S precoders (and their corresponding SINRs) selected by the user in the previous S intervals designated for only SU-MIMO CSI feedback. The function is pre-defined and known to both the user and the eNB. In this case {circumflex over (V)} need not be reported by the user since it can be deduced by the eNB. Note that special cases of the sequence for CSI reporting described above, are the baseline case where each interval in the sequence is designated for SU-MIMO CSI only feedback and the one where each interval in the sequence is designated for enhanced feedback. Finally, as an option to reduce feedback overhead, in all the aforementioned alternatives the CSI reports can include a wideband precoder matrix (i.e., a precoder matrix common for all sub-bands) along with sub-band specific SINRs and sub-band specific residual error feedback forms. In order to obtain full benefits of accurate MU-MIMO SINR computation and scheduling flexibility, we can combine SU-MIMO CSI reporting and enhanced CSI reporting. Then, we have the following points of particular interest: In each interval, the user can first determine its preferred precoder matrix G and the corresponding quantized SINRs using SU-MIMO rules. The user can select its preferred precoder matrix under the constraint that it may be of a particular rank specified by the eNB or belong to a codebook subset specified by the eNB, or it can freely choose its preferred precoder matrix if no restrictions have been imposed by the eNB. Next, in the same interval the user can determine another precoder matrix {circumflex over (V)} and corresponding SINRs using SU-MIMO rules. The eNB can set a separate restriction on rank or codebook subset which {circumflex over (V)} may obey. Notice in this case that if the rank enforced on {circumflex over (V)} happens to be equal to that of G , then {circumflex over (V)} and its corresponding quantized SINRs need not be reported since they are identical to d, and its corresponding quantized SINRs, respectively, since both the pairs are determined using SU-MIMO rules. Alternatively, the rank of precoder {circumflex over (V)} can itself be a function of the rank of G . The function is pre-defined and known to both the user and the eNB. An example rule is where rank of {circumflex over (V)} may be equal to one when the rank of G is one; and the rank of {circumflex over (V)} is two otherwise. In either case, using {circumflex over (V)} along with the corresponding SINRs, the user determines the eNB configured residual error feedback form. The user feedback report now includes G and corresponding quantized SINRs as well as {circumflex over (V)} , its corresponding quantized SINRs and the residual error feedback form. Again, differential feedback can be exploited in reporting this CSI. Alternatively, {circumflex over (V)} itself can be a function of G and the SINRs corresponding to G and thus need not be reported since the function is pre-defined and known to both the user and the eNB. For instance, {circumflex over (V)} can be the column of G for which the corresponding SINR is the largest among all SINRs corresponding to G . Note here that if {circumflex over (V)} is identical to G then even the quantized SINRs corresponding to {circumflex over (V)} need riot be reported since they are identical, respectively, to the quantized SINRs corresponding to {tilde over (G)} Finally, as an option to reduce feedback overhead, in all the aforementioned alternatives the CSI reports can include wideband G , {circumflex over (V)} along with sub-band specific SINRs and sub-band specific residual error feedback forms. 6 Appendix: Further Overhead Reduction in Signaling Enhanced User CSI Feedback Let us consider the case when the residual error feedback form consists of only the quantized normalized residual error norm. In this case in each interval of the sequence designated for enhanced feedback, in all the aforementioned alternatives, the CSI reports can include a wideband G which is common for all subbands, a wideband {circumflex over (V)} (if it is distinct from the reported G ) along with sub-band specific SINRs computed for G (and {circumflex over (V)} if it is distinct from the reported G ) and a quantized wideband normalized residual error norm. The wideband normalized residual error norm is computed using time wideband {circumflex over (V)} . Alternatively, the CSI reports can include per-subband G (one for each subband), along with sub-band specific SINRs computed for G and a quantized wideband normalized residual error norm. The wideband normalized residual error norm is now computed using the per-subband G In either one of the above two cases the computation of the wideband normalized residual error norm can be done as follows. The user can first determine a normalized residual error norm for each subband using either the wideband {circumflex over (V)} or the corresponding per-subband G , respectively. The computation of the wideband normalized residual error norm can be done using the computed subband specific normalized residual error norm (NREN) and one of the following The user can set the wideband MIEN to be equal to the average of its per-subband NRENs The user can set the wideband NREN to be equal to the best or smallest NREN among its per-subband NRENs The user can set the wideband NREN to be equal to the worst or largest NREN among its per-subband NRENs Alternatively, using the sub-band specific SINRs computed for G , the user can determine the M subbands which offer the M largest rates (where the rates arc determined using the corresponding per subband SINRs). It then computes a normalized residual error norm for each one of these M subbands using either the wideband {circumflex over (V)} or the corresponding per-subband G , respectively. A wideband NREN can be determined using these M NRENs using any one of the three methods described above. Note that the value of M is configured by the eNB and conveyed to the user in a slow semi-static mariner and can be user-specific. Notice that the computed wideband NREN may he quantized. As noted previously the user can instead report the difference of the NREN and another scalar quantity (such as CQI) which is also reported. It can instead report the ratio. The eNB may of course be aware of the reporting method being adopted. A useful observation is that a relatively large value of the NREN means that a significant portion of the channel energy remains in the orthogonal complement of the corresponding reported precoder. This implies that significant interference can potentially he caused to such a user if it is co-scheduled with one or more other users. Thus, it is sensible to not co-schedule such a user with other users and instead ensure that any RB allocated to such a user is not assigned to any other user. This observation can be leveraged by letting the user compare the computed NREN with a threshold. If the NREN is smaller than the threshold, it can be quantized and reported. Otherwise, if the NREN is larger than the threshold, a special value can be reported to the eNB instead of the quantized NREN, which will convey to the eNB that there is a "high possibility of co-scheduling interference" to the user on the one or more subbands covered by that NREN. The threshold is configured by the eNB and conveyed to the user in a slow semi-static manner and can be user-specific. -US-00013 TABLE C6 Simulation Parameters: Heterogeneous network with low power RRHs within the macro-cell coverage Parameter Assumption Deployment scenario Scenario 3: Heterogeneous network with low power RRHs within the macrocell coverage - 1 cell with 2 low-power nodes (LPNs) ITU UMa for Macro, UMi for low power node Duplex method and bandwidth FDD: 10 MHz for downlink Cell layout Hex grid 19 sites, 3 cells/site Antenna Height Macro: 25 m; LPN: 10 m Number of users per sector Config4b: 30 Network synchronization Synchronized UE noise figure 9 dB Minimum Distance Macro - RRH/Hotzone: >75 m Macro - UE: >35 m RRH/Hotzone - RRH/Hotzone: >40 m RRH/Hotzone - UE: >10 m Handover margin 1 dB Indoor-outdoor modeling 100% of users are dropped outdoor Antenna configuration (eNB) 4 TX co-pol. ant., 0.5-λ spacing for both Macro Cell and LPN Antenna configuration (user) 2 RX co-pol. ant., 0.5-λ spacing Antenna pattern For macro eNB: 3D, tilt 12 degree. For low-power node: 2D Downlink transmission scheme SU-MIMO: Each user can have rank 1 or 2 MU-MIMO: Max 2 users/RB; Each user can have rank 1 Codebook Rel. 8 codebook Downlink scheduler PF in time and frequency Scheduling granularity: 5 RBs Feedback assumptions 5 ms periodicity and 4 ms delay; Sub-band CQI and PMI feedback without errors. Sub-band granularity: 5 RBs Downlink HARQ scheme Chase Combining Downlink receiver type LMMSE Channel estimation error NA Feedback channel error NA Control channel and reference 3 OFDM symbols for control; signal overhead Used TBS tables in TS 36.213 -US-00014 TABLE C7 Spectral efficiency of SU-MIMO/MU-MIMO in Heterogenous Networks; For MU-MIMO Rank-1 codebook restriction is imposed on all users and enhanced feedback is obtained from all users. MU-MIMO/SU-MIMO Average Cell SE 5% Cell-edge SU-MIMO Overall 2.8621 0.078 SU-MIMO Macro-cell 2.2025 0.0622 SU-MIMO LPN-RRH 3.1919 0.0904 MU-MIMO Overall 3.1526 (10.15%, 5.59%) 0.0813 MU-MIMO Macro-cell 2.5322 (14.97%, 8.54%) 0.0721 MU-MIMO LPN-RRH 3.4628 (8.49%, 4.91%) 0.1036 Relative percentage gains are over SU-MIMO and MU-MIMO without enhanced feedback, respectively. Patent applications by Guosen Yue, Plainsboro, NJ US Patent applications by Meilong Jiang, Plainsboro, NJ US Patent applications by Mohammad A. Khojastepour, Lawrenceville, NJ US Patent applications by Narayan Prasad, Wyncote, PA US Patent applications by Sampath Rangarajan, Bridgewater, NJ US Patent applications by NEC Laboratories America, Inc. Patent applications in class DIAGNOSTIC TESTING (OTHER THAN SYNCHRONIZATION) Patent applications in all subclasses DIAGNOSTIC TESTING (OTHER THAN SYNCHRONIZATION) User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20120275313","timestamp":"2014-04-17T16:49:55Z","content_type":null,"content_length":"167563","record_id":"<urn:uuid:4a6e5f03-99cc-4da1-8085-1d92a795ebbe>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
Question-:Consider A Fluid (of Density ρ) Inincompressible,... | Chegg.com question-:Consider a fluid (of density ρ) inincompressible, laminar flow in a plane narrow slit of lengthL and width W formed by two flat parallel wallsthat are a distance 2B apart. End effects may be neglectedbecause B << W << L. Thefluid flows under the influence of both a pressure differenceΔp and gravity. Fluid flow inplane narrow slit. a) Using a differential shell momentum balance, determineexpressions for the steady-state shear stress distribution and thevelocity profile for a Newtonian fluid (of viscosityμ). b) Obtain expressions for the maximum velocity, average velocityand the mass flow rate for slit flow.
{"url":"http://www.chegg.com/homework-help/questions-and-answers/question-consider-fluid-density-inincompressible-laminar-flow-plane-narrow-slit-lengthl-wi-q75948","timestamp":"2014-04-18T01:10:54Z","content_type":null,"content_length":"21085","record_id":"<urn:uuid:7f2e4938-dd98-4c8a-8df5-294ee3fe27aa>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Sufficient conditions for oscillatory behaviour of a first order neutral difference equation with oscillating coefficients. (English) Zbl 1224.39018 Summary: We obtain sufficient conditions so that every solution of neutral functional difference equation ${\Delta }\left({y}_{n}-{p}_{n}{y}_{\tau \left(n\right)}\right)+{q}_{n}G\left({y}_{\sigma \left(n\right)}\right)={f}_{n}$ oscillates or tends to zero as $n\to \infty$. Here, ${\Delta }$ is the forward difference operator given by ${\Delta }{x}_{n}={x}_{n+1}-{x}_{n}$, and ${p}_{n}$, ${q}_{n}$, ${f}_{n}$ are the terms of oscillating infinite sequences; $\left\{{\tau }_{n}\right\}$ and $\left\{{\sigma }_{n}\right\}$ are non-decreasing sequences, which are less than $n$ and approaches $\infty$ as $n$ approaches $\ infty$. This paper generalizes and improves some recent results. 39A21 Oscillation theory (difference equations) 39A10 Additive difference equations 39A12 Discrete version of topics in analysis 39A22 Growth, boundedness, comparison of solutions (difference equations) 34K40 Neutral functional-differential equations 34K11 Oscillation theory of functional-differential equations
{"url":"http://zbmath.org/?q=an:1224.39018","timestamp":"2014-04-19T07:11:35Z","content_type":null,"content_length":"23541","record_id":"<urn:uuid:b7f58ba2-f987-4f67-972e-e5bea9d7abcc>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology Script Help [Archive] - MacRumors Forums Apr 20, 2006, 05:58 PM I'm currently a college student taking a Cryptology course just making the jump from Windows to Mac, and I was looking for a way to automate some of the repetitive tasks necessary for RSA encryption. I have some Applescript experience, but I cannot manage to get anything to work. Any help would be incredibly well received. (Mathematical Explanation) The cryptology programs I need to automate are all executables and are used through terminal. RSA encryption uses a very simple equation: a^k mod n [where "a" is your message converted to numeric form (a=01, b=02,...), "k" is your public exponent (a number anyone can see) and "n" is a public modulus ("mod" is a function producing a quotient's remainder: 12 mod 5=2, 13 mod 5=3...)]. Encryption is attained through a multiplicative inverse ("d") of you public exponent, which is secret: thus a decryption formula of z^d mod n [where "z" is the encrypted numerical plaintext, "d" is the multiplicative inverse of "e," and "n" is the same modulus as above]. [more info: http://world.std.com/~franl/crypto/rsa-guts.html] So, I know that Applescript can easily perform this type of math in its own interface, but I would much rather be able to create a script that would automatically 1) open a saved terminal session 2) open a certain textedit document 3) copy the first line of that textedit document 4) paste that line of text into terminal without executing the line 5) copy the second line of the textedit document 6) paste that line into terminal with a preceding space without executing the command 7) copy and paste the third line into terminal with a preceding space 8) execute the command line 9) close the textedit application leaving terminal as the active window. My main problem is that I cannot get textedit to copy anything to the clipboard, I can only get it to copy blocks of text within a document with the "duplicate to" command. Also, terminal refuses to accept any pasted data (from other programs) without executing the line. Again, any help with any of the steps listed above would be greatly appreciated.
{"url":"http://forums.macrumors.com/archive/index.php/t-195232.html","timestamp":"2014-04-20T16:34:41Z","content_type":null,"content_length":"9396","record_id":"<urn:uuid:48dc6457-62f1-4ab3-94d9-10c5bfc359f0>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
A discount rate is the percentage by which the value of a cash flow in a discounted cash flow (DCF) valuation is reduced for each time period by which it is removed from the present. The estimation of a suitable discount rate is often the most difficult and uncertain part of a DCF. This is only worsened by the fact that the final result is very sensitive to the choice of discount rate — a small change in the discount rate causes a large change in the value. For listed securities the discount rate is most often calculated using CAPM. The availability of both data to calculate betas and of services that provide beta estimates make this convenient. Cash flows other than listed securities For unlisted securities and for other types of streams of cash flows it becomes a little more difficult. If listed securities exist that are similar in terms of undiversifiable risk, then these can be used as benchmarks to estimate what the appropriate discount rate should be. A comparatively simple approach is to find a pure play company in as similar a business as possible and calculate its WACC. This may be the appropriate discount rate, or it may need further adjustment. If further adjustments are needed it is usually best to work from the WACC, using the CAPM, to calculate what the beta would be given only equity funding, and adjust the beta. This is correct because of capital structure irrelevance. Sometimes it is possible to make simple adjustments. For example, if the cash flows face a similar (undiversifiable component of) revenue volatility to the benchmark, but a different level of operational gearing, simply multiply the beta by the ratio of (1 + fixed costs as a percentage of revenues) for the cash flows being evaluated to the same for the benchmark. In many cases it will be necessary to use detailed modelling to estimate the difference in the sensitivity to undiversifiable elements. In practice this means modelling the relationship between economic growth (the economy is the main undiversifiable risk) and both sets of cash flows. It may be simpler to use the market as the benchmark, in which case the ratio is the beta. In many, if not most, cases in developed countries, it is possible to find a good enough pure play comparator and avoid the complex approach. A last resort approach is to simply use what appears to be a sensible risk premium over the market or the risk free rate. In all cases, especially the last, it is useful to calculate a DCF with several different discount rates, so that the sensitivity of the final result to this assumption can be clearly seen.
{"url":"http://moneyterms.co.uk/discount-rate/","timestamp":"2014-04-21T07:09:32Z","content_type":null,"content_length":"7349","record_id":"<urn:uuid:415f3174-1534-475b-8669-2d3dcf894fe8>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Volodymyr Mazorchuk more recent photo 1 more recent photo 2 Professor of Mathematics, Department of Mathematics, University of Uppsala, Box 480, SE-751 06 Uppsala, SWEDEN office: 14237 phone: +46-(0)18-471-3284 fax: +46-(0)18-471-3201 e-mail: mazor [at] math [dot] uu [dot] se Short Curriculum Vitae: Date of Birth: 10 May 1972 1988-1993: student, Kyiv Taras Shevchenko University; 1993-1996: graduate student, Kyiv Taras Shevchenko University; 1996: Ph.D. in Mathematics (Candidate of Physical and Mathematical Sciences), Kyiv Taras Shevchenko University; 1996-2000: Assistant Professor in Algebra, Department of Mechanics and Mathematics, Kyiv Taras Shevchenko University; 1998-1999: Alexander von Humboldt Research Fellow in Bielefeld University, Germany. 2000: Habilitation in Mathematics (Doctor of Physical and Mathematical Sciences), Kyiv Taras Shevchenko University; 2000-2001: Junior Associate of the ICTP, Trieste, Italy; 2000-2001: Research Assistant, Department of Mathematics, Chalmers University of Technology and Göteborg University; 2001-2007: Senior Lecturer, Department of Mathematics, University of Uppsala. 2002: Docent in Mathematics, University of Uppsala. 2007-2008: Reader in Pure Mathematics, Department of Mathematics, University of Glasgow 2007-now: Professor in Mathematics, Department of Mathematics, University of Uppsala Research Interests: Representation theory of associative algebras and Lie algebras, transformation semigroups. Link to my teaching page at Uppsala University "Essén Lectures" (Uppsala, May 12-16, 2014) "Introductory Workshop" (Uppsala, February 26-28, 2015), connected to the Representation theory program at the Institute Mittag-Leffler. "Lie Algebras and Applications" (Uppsala, September 6-8, 2012). "Semigroups and Applications" (Uppsala, August 30 - September 1, 2012). "Algebraic representation theory confrence" (Uppsala, September 1-3, 2011). "Algebra and Representation Theory Workshop" (Uppsala, December 12, 2008). "Algebraic Methods in Functional Analysis" (Gothenburg, June 15-17, 2007), a mini-workshop within the joint research project "Representation theory of algebras and applications", supported by STINT. "Categorification in Algebra and Topology" (Uppsala, September 7-10, 2006), conference. "Algebraic versus analytic representations" (Kyiv, December 8-10, 2005), a mini-workshop within the joint research project "Representation theory of algebras and applications", supported by STINT. "Tame and Wild Workshop" (Uppsala, November 26-28, 2004), a mini-workshop within the joint research project "Representation theory of algebras and applications", supported by STINT. "Representation Theory and its Applications" (Uppsala, June 22-27, 2004) a satellite conference to the "Fourth European Congress of Mathematics". "C^*-algebras, Lie algebras and related topics" (Kyiv, December 4-6, 2003), a mini-workshop within the joint research project "Representation theory of algebras and applications", supported by STINT. Since December 2007 I am an Editor of the journal "Algebra and Discrete Mathematics" Since March 2010 I am a Subject Editor in the area of Algebra of "Glasgow Mathematical Journal". Since July 2011 I am an Editor of the journal "Arkiv för Matematik". Since January 2012 I am an Editor of the "Journal of Algebra". Texts of talks with OHP: -- Category O for classical Lie superalgebras, Köln Algebra seminar, Köln University, Köln, Germany, July 16, 2013, pdf -- Finitary 2-categories and their 2-representations, 17 NWDR Workshop, Wuppertal, Germany, July 12, 2013, pdf -- Simple supermodules for classical Lie superalgebras, Workshop on ``Cohomology in Lie Theory'', Oxford, UK, June 24-28, 2013, pdf -- Higher representation theory, Third international symposium on ``Groups, algebras and related topics'', Beijing, P. R. China, June 10-16, 2013, pdf -- Simple supermodules for classical Lie superalgebras, Workshop on ``Super representation theory'', Taipei, Taiwan, May 10-12, 2013, pdf -- Linear representations of semigroups from 2-categories, Workshop on ``Semigroup representations'', Edinburgh, UK, April 10-12, 2013, pdf. -- 2-categories, 2-representations and their applications, LMS Northern regional meeting and workshop on ``Triangulations and mutations'', Newcastle, UK, March 18-22, 2013, pdf. -- Koszul duality between generalized Takiff Lie algebras and superalgebras, Lie superalgebras, Rome, Italy, December 10-14, 2012, pdf. -- Endomorphisms of cell 2-representations, Gradings and decompositionnumbers, Stuttgart, Germany, September 24-28, 2012, pdf. -- Category O for quanum groups, Mathematical physics and developmentsin algebra, Special session of 6 ECM, Krakow, Poland, July 3-6, 2012, pdf -- 2-representations of finitary 2-categories, Category theoretic methods in representationtheory, Ottawa, Canada, October 14-16, 2011, pdf. -- Higher representation theory and categorification, "Algčbres de Hecke Algčbres de CherednikAlgčbres amassées et Théorie des Représentations", IESC Cargese, September 12-23, 2011, lecture_1, lecture_2, lecture_3, lecture_4, lecture_5. -- Combinatorial categorification of sl_k-knot invariants, LieGrits Workshop, Mathematical Institute, University of Oxford, January 3-9, 2008, pdf. -- Schur-Weyl dualities for symmetric inverse semigroups, Group Embeddings: Geometry and representations, BIRS, Banff, Canada, September 17-21, 2007, pdf. -- Quivers, representation, roots and Lie algebras, Uppsala Math. Colloquim, September 7, 2007, pdf. -- Algebraic Categorification, ICRA XII, Torun, August 20-24, 2007, pdf. -- Categorification, Kostant's problem and generalized Verma modules, Algebraic and Geometric Lie Theory (AGLT), Ĺrhus, June 25-30, 2007, pdf. -- Categorification of the representation theory of the symmetric group, Perspectives in Auslander-Reiten theory, Trondheim, May 10-12, 2007, pdf. -- Category O as a source for categorification, Categorification in Algebra and Topology, Uppsala, September 7-10, 2006, pdf. -- Serre functors, category O and symmetric algebras, Conference on Representation Theory and related Topics, International Center for Theoretical Physics (ICTP), Trieste, Italy: January 23-28, 2006, -- Serre functors and symmetric algebras, Algebraic versus analytic representations, Workshop, Kyiv, December 8-10, 2005, pdf. -- Interaction of Ringel and Koszul dualities for quasi-hereditary algebras, International Asia Link Conference on Algebras and Representations, Beijing, May 2005, pdf. -- A twisted approach to Kostant's problem, Algebra Seminar, Ĺrhus University, Ĺrhus, Denmark, May 2005, pdf. -- A generalization of the identity functor, Representation theory and its Applications, Uppsala, Sweden, June 2004, pdf. -- On finitistic dimension of stratified algebras, Pure Maths Colloquium, Leicester Unievrsity, Leicester, U.K., January 2004, pdf. -- Finitistic dimension of properly stratified algebras, C^*-algebras, Lie algebras and related topics, Mini-workshop, Kyiv, Ukraine, December 2003, pdf. -- Twisting, completing and approximating category O, IVth International algebraic confrence, Lviv, Ukraine, August 2003, pdf. -- Stratified algebras arising in Lie theory, ICRA X, Specialized Workshop, Toronto, Canada, August 2002, pdf. -- New properties and applications of Gelfand-Zetlin modules, ICRA X, Toronto, Canada, August 2002, pdf. -- Combinatorics of partial bijections, Docent lecture, Uppsala, Sweden, June 2002, pdf. -- Structure of generalized Verma modules, "Lie and Jordan algebras", Sao Paulo, Brazil, May 2002, pdf. -- Abstract version of Enright's completion, First AMS-SMF meeting, Lyon, France, July 2001, pdf. -- Twisted generalized Weyl algebras, First DMV-BMS meeting, Liege, Belgium, June 2001, pdf. Addition: pdf. -- Gelfand-Zetlin modules, MPI Oberseminar, Bonn, Germany, May 2001, pdf. Addition: pdf. BOOK: Classical Finite Transformation Semigroups, An Introduction Series: Algebra and Applications , Vol. 9 Ganyushkin, Olexandr, Mazorchuk, Volodymyr 2008, Approx. 340 p. 4 illus., Hardcover, ISBN: 978-1-84800-280-7. Product flyer. BOOK: Lectures on sl_2-modules , Imperial College Press, 2009, ISBN: 978-1-84816-517-5 1-84816-517-X. Product flyer. List of typos: pdf. BOOK: Lectures on algebraic categorification , The QGM Master Class Series European Mathematical Society Publishing House, ISBN 978-3-03719-108-8 Recent preprints: -- Some homological properties of category O, III, coauthor K.Coulembier, Preprint arxiv:1404.3401, pdf -- Classification of simple weight modules over the 1-spatial ageing algebra, coauthors R.Lu and K.Zhao, Preprint arxiv:1403.5691, pdf -- Category O for the Schrödinger algebra, coauthors B.Dubstky, R.Lu and K.Zhao, Preprint arxiv:1402.0334, pdf -- Primitive ideals, twisting functors and start action for classical Lie superalgebras, coauthor K.Coulembier, Preprint arxiv:1401.3231, pdf -- On simple modules over conformal Galilei algebras, coauthors R.Lu and K.Zhao, Preprint arxiv:1310.6284, to appear in J. Pure Appl. Algebra, pdf -- Morita theory for finitary 2-categories, coauthor V.Miemietz, Preprint arxiv:1304.4698, to appear in Quantum Topol. pdf -- Parabolic category O for classical Lie superalgebras, to appear in the Proceedings of the conference "Lie Superalgebras", Rome 2102, pdf -- Categoification of the Catalan monoid, coauthor A.-L. Grensing, Preprint arxiv:1211.2597, to appear in Semigroup Forum, pdf -- Simple Virasoro modules induced from codimension one subalgebras of the positive part, coauthor E.Wiesner, Preprint arxiv:1209.1691, to appear in Proc. AMS, pdf -- Endomorphisms of cell 2-representations, coauthor V.Miemietz, Preprint arxiv:1207.6236, pdf -- Weight modules over infinite dimensional Weyl algebras, coauthors V.Futorny and D.Grantcharov, Preprint arxiv:1207.5780, to appear in Proc. AMS, pdf -- Simple Virasoro modules which are locally finite over a positive part, coauthor K.Zhao, Preprint arxiv:1205.5937, to appear in Selecta Math., pdf -- On multisemigroups, coauthor G.Kudryavtseva, Preprint arxiv:1203.6224, pdf -- Additive versus abelian 2-representations of fiat 2-categories, coauthor V.Miemietz, Preprint arxiv:1112.4949, to appear in Moscow Math. J., pdf -- Category O for quantum groups, coauthor H.H.Andersen , Preprint arxiv:1105.5500, to appear in JEMS, pdf Old preprints: -- A counter example to Slater's conjecture on basic gaps, coauthor R.Twarock, Preprint 2002:35, Uppsala University, pdf. -- The full finite inverse symmetric semigroup IS_n, coauthor O.Ganyushkin, Preprint 2001:37, Göteborg University, pdf. More papers are available here. Some mathematical problems I can not solve can be found here. -- Theory of Semigroups, course notes in ukrainian, coauthor O.Ganyushkin, pdf. -- Jordan normal form, solution guide in ukrainian, pdf. -- Linear Algebra, matrices and determinants, solution guide in ukrainian coauthors S.Ovsienko and N.Golovashchuk, doc.gz. Teaching in Uppsala In the world of mathematics: -- The journal homepage is here. -- The problems from "Our competition" are available here. My Erdös number is 2 via Svante Janson.
{"url":"http://www2.math.uu.se/~mazor/","timestamp":"2014-04-18T08:24:55Z","content_type":null,"content_length":"48521","record_id":"<urn:uuid:28b7ef91-d29d-4ec4-ab8b-987d91f0a7e5>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Getting the Job Done Date: 12/17/96 at 14:59:02 From: Jerrad Evans Subject: Solving a problem Our teacher gave us a worksheet with a question on it that I don't seem to be able to solve. I've tried to make some equations out of it, but there doesn't seem to be one (at least nothing that I can see): Two typists share a job. The second typist begins working one hour after the first. Three hours after the first typist has begun working, only 11/20 of the job has been completed. When the job is finished, it turns out that each typist has done exactly half of the work. How many hours would it take each typist, working alone, to complete the job? Could you help? Thanks! Jerrad Evans Date: 12/17/96 at 18:58:39 From: Doctor Wilkinson Subject: Re: Solving a problem What you're really being asked is how fast each typist types. That is, to answer the question "how long would this typist take to finish this job working alone" you need to know how much of this job the typist would get done in an hour. For example, if he or she takes 4 hours to finish the job, then in 1 hour he or she finishes 1/4 of the job. These rates are easier to work with than the times. In most problems involving time, you may want to move back and forth between looking at rates and looking at periods of time. After that introduction, let's let: a = amount of job first typist can do in an hour b = amount of job second typist can do in an hour Another number we don't know is T, the number of hours the typists took to finish the job. So we've got three unknowns, and we're going to need three equations. Do we have three pieces of information we can get equations out of? We have the curious piece of information that the first typist started an hour ahead of the second typist. What this means is that after t hours, where t is at least 1, the amount that has been done by the first typist is (at), and the amount that has been done by the second typist is b(t-1), because the second typist has worked an hour less than the first. The next thing we know is that after three hours, 11/20 of the job has been done. At that time the first typist has done an amount 3a and the second typist has done an amount 2b, so the total is 3a + 2b = 11/20 That's one equation. Two more to go! We're also given that when the job is finished, each typist has done exactly half of it. So this involves our unknown time T, the time to finish the job. The first typist has done half of it, so aT = 1/2 and the second typist has also done half of it, so b(T-1) = 1/2. And there we have the three equations. -Doctor Wilkinson, The Math Forum Check out our web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/57402.html","timestamp":"2014-04-19T17:54:47Z","content_type":null,"content_length":"7684","record_id":"<urn:uuid:f26417f4-67a8-4ea2-a433-9d3039bad188>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
Philosophy of Reality Hacking Decoding Me Recursive languages or formal systems or automata are generally decidable; the set of such languages, formal systems, or automata are generally undecidable. In my work, I am looking for the logical description or model of a universal Turing machine physically encoded, and I believe I know what the physical encoding is. The characteristic of a universal Turing machine is the acceptance of every input and the transformation of state for every input as output; physically, this corresponds pretty neatly to the characteristics of a blackhole which absorbs all matter and energy and transforms state for everything absorbed. A universal Turing machine can simulate any other machine if supplied with the equivalent encoding of that Turing machine. If we correspond this behavior to blackholes then we get an interesting potential consequence; feeding a blackhole a human being gives it the ability to simulate a human being and likewise for fundamental particles. This correspondence works the other way too. Theoretically, blackholes (totally absorbative) are joined to white holes (totally emitting) via a Einstein-Rosen bridge (wormhole aka bifrost), so this would seem to correspond to how the internal parts of a universal Turing machine connect with the external parts; the internal memory maps to the external memory via a negatively curved space. What does this tell us? For one, if we live in this kind of universe then we can move from one universal Turing machine to another via bridges. There’s additional complexity involved in transforming from one universal Turing machine to another and in escaping one universe for another, but it is in principle possible to do it. For two, it tells us it is plausible that we are some kind of Turing machine operating within a universal Turing machine. It tells us it is plausible that we might generate a table for programming ourselves and our environment. It tells us that if we live in this kind of universe then we don’t need to develop infinitely better machines because the smallest components of our universe are logical circuits corresponding to finite state machines and regular languages. All the mechanism are there waiting for us to figure out the programming codes to make them perform their functions. We need to develop the interfaces between our semi-classical first hand experience and the quantum mechanical Planck state transformations of the information physically encoding us. It is akin to the work which was done to convert special relativity and classical physical theory to quantum mechanics; special relativity tells us about the behavior of two-state systems with zero rest mass aka photons and gluons aka Qubits. Chromodynamics by virtue of its finite extent probably corresponds to regular physical languages and finite state machines; spin mechanics likely corresponds similarly, but we know that electromagnetics join with weak force mechanics to form electroweak dynamics, so we know that weak forces border on infinite extents. My inkling is that electromagnetics are countably infinite in extent whereas gravitation is uncountably infinite in extent. Which means that electromagnetics, weak, and strong forces correspond to regular, context-free, context-sensitive, and recursive physical languages whereas gravitation corresponds to recursively enumerable and analytical physical languages. My primary research at this time considers the possibility of contradiction tolerant universes and physical systems. I feel it is necessary to pursue contradiction tolerant alternative methods and theories in order to explicitly account for things such as hallucinations and dream logics. We tell stories and assign meaning and purposes to people and things in ways which are arguable not consistent. The very fact that we can identify and discuss contradictions ought to be a significant clue that contradictions do in fact “exist” in our universe in so far as we exist in our universe and our discourse about contradictions exists in our universe. This is oddly disputed by some otherwise very bright people; it is an untenable position for a materialist to have as it leads readily to a supermaterialistic dualism which places human consciousness outside of the domain of physics. While I will humor spiritual and religious arguments about the supernatural properties of consciousness, when it comes to physical theory my loyalties are with attempting to disprove supernatural and unphysical characterizations of human consciousness. Counter-intuitively to many, this means theorizing human consciousness totally as a materialistic process and creating reductionist and instrumentalist experiments based on that theory. The point of such scientific inquiry is almost never to confirm the theory and succeed but to fail while doing your damnedest to succeed. Failure is interesting theoretically speaking whereas success is not. From the purely materialistic view point, everything we experience has to have a material origin. My thoughts may seem ephemeral, but they are material just as energy and electrical charge are materially mediated. My thoughts exist as a mixture of molecules and electromagnetic signals communicated between the medium of my body and the environment within which my body resides. When we connect the material to the computational, what is material origins becomes information origins; my thoughts simplify somewhat to sequences or configurations of various kinds of states, so what I experience—even dreams and stories and hallucinations—is a direct experience of information materially encoded in the medium of me. In that sense, what I experience are programs. Mechanics simulated in the subspace of the machine that is me; mechanical interactivity.
{"url":"http://radicomp.tumblr.com/","timestamp":"2014-04-17T03:48:49Z","content_type":null,"content_length":"81080","record_id":"<urn:uuid:5425d9f2-edb4-41fe-8516-13325e5b9ef3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Learning a little Motivic Cohomology up vote 14 down vote favorite Simply because I find it interesting, I have spent some time studying motivic cohomology from the lectures by Mazza, Voevodsky and Weibel. However, I'm finding it hard to tell if the theory is something I could use for intuition or to prove interesting theorems. I was hoping somebody could give me a few examples if this is the case. An example I would consider for K-theory could be that Bloch (Ch. 5 in Duke Lectures) proves Roitman's theorem using in part K-theoretic techniques. $\textbf{Question:}$ For example, I've been told that one can, in a useful way, write down the Abel-Jacobi map using this theory. However I have no example to this effect. Does anyone know of an example where this is the case, or understand maybe why this is predicted to be the case? $\textbf{Question:}$ There are several conjectures that one can state in complete generality if one uses this theory. However, what one can prove if they learn motivic cohomology? Could one prove (perhaps in an "easier" way) some theorems that an arithmetic algebraic geometer (interpreted however you like) could find interesting? Are there number theoretic things that people study using these $\textit{Edit}$: I should clarify: Beilinson's conjecture on special values of L-functions of course aims to explain special values of various L-functions via motivic cohomology. Hence, it makes sense that in studying it people use the theory. I'm really interested if there are examples that are not of this form. (In the sense that the theory is used in work on conjectures not written in its language). Examples would be the question about the Abel-Jacobi map above, or in the case of K-theory Bloch's proof mentioned above. Milne's result on the the polynomials $P_{2r}$ appearing in the Weil conjectures (from the paper referenced by Andrew below) is also an example. Somehow I feel this can't be the only result of this kind ... There are some results that sound extremely interesting: For example, in http://arxiv.org/pdf/1309.4068.pdf Geisser and Schmidt construct a pairing between $\textrm{mod } m$ algebraic singular homology and $\textrm{mod } m$ tame etale cohomology group. This gives a kind of class field theory in a very general setting, and appears to formally resemble the topological situation a lot. However, it is completely unclear to me if I could ever use such a result as someone who doesn't intend to specialize in this subject. ag.algebraic-geometry reference-request examples motivic-cohomology add comment 1 Answer active oldest votes My understanding is that motivic cohomology has the ability to describe integral values of L-functions up to a constant (and specifically values of zeta functions up to a sign). An example would be Milne's paper "motivic cohomology and values of zeta functions" published in compositio in '88. So, the answer to your question in the bold is yes, but I am sorry I don't have a up vote 4 more detailed answer for you. down vote Hi Andrew, thanks for your comments and the reference - I wasn't aware of it. I agree, Motivic cohomology has the ability to describe special values of $L$-functions of smooth projective varieties (and more generally) as predicted by Beilinson. – LMN Oct 4 '13 at 0:18 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry reference-request examples motivic-cohomology or ask your own question.
{"url":"http://mathoverflow.net/questions/143643/learning-a-little-motivic-cohomology/143851","timestamp":"2014-04-19T09:50:02Z","content_type":null,"content_length":"54075","record_id":"<urn:uuid:8cfa6ac6-9330-4a01-8ad2-085d1f49fc1b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
Who Knows Excel? 09-16-2011, 02:13 PM #1 I am stumped, can anyone help me? I am doing an assignment that requires me to find the weighted average of a students four exams. I am asked to use absolute cell references. When I enter the formula for one student I get the correct answer according to the book. However I am asked next to use autofill to complete the other students grades and that's my problem. It does not work. I am in college and don't know what I am doing wrong since excel is very new to me. How do I find out what I am doing wrong? Can anyone help this student? I use excel a lot. I will try and help. I PM'd you We are in the same boat, I am in statistics right now and if I only knew how to work excel it would be sooooo much easier. Is that bridge near Lynchburg? Hope someone was able to help you...excel was very hard for me to learn about and I really haven't used it since I took an Office class. Good luck! You use a $ to do an absolute cell reference...ie $B$7. So this is more an Excel question than a math one, right? Are you trying to copy the formulas to another location, and that's what's not working? Without seeing it or more information, I'd guess you're having ABSolute issues. So this may be relevant. Or it may not be. :mrgreen: When you use absolutes, you're attempting to lock in both or one of a cell's column or row location. So if the formula in cell D10 is =$A$1, no matter where you copy or move it to on any other cell on your spreadsheet, it will always look for the value in cell A1. Now, if the formula was =$A1, and you copied if from cell10 to, say, cell D11, it would still be locked in to column A, but would now be looking at the value one down (since D11 is located one cell down from D10). Same goes for =A$1. If you copied that formula from cell D10 to cell D11, it would still look at the contents of cell A1. However, if you copied it to cell E10, then it would look at the value in cell B1. Now, you can change each formula manually. But that's not fun or efficient. When you set the initial formulas to figure out whatever it - totals/weighted avgs/%, - look at what would remain constant from student to student - would it be the info in a column or a row? And then lock that in with a $. When you hit F4 to make an cell ABS in your formula, it will do a double $x$1 first. Hit F4 again, and it locks only the row. Hit it again and it locks only the column. Once more, and it unlocks it all back to x1. It takes a little getting used to, but if you've figured out what the difference is between COPY and MOVE, then it should make sense. Good luck! This is the formula that I entered using absolute cell reference. Have I entered it wrong? Thanks for your help. can you post a picture of the data? And you need () around some of the formulas to break it up. This is the formula that I entered using absolute cell reference. Have I entered it wrong? Thanks for your help. The values that have the weights should have two $. So say that the one value (b17, c17, d17, e17) is the student's grade and the other value (c8, c9, c10, and c11) is the weight, the weight will be the same for each student. It should be: (BTW - I reversed the values in the first two so that they are consistent). Chemistry 303 First Semester Scores Posted 12/20/2010 Class Summary Exam Weight Median Maximum Minimum Exam 1 20% Exam 2 20% Exam 3 20% Final Exam 40% Overall 100% Student Scores Top Ten Overall Scores Student ID Exam 1 Exam 2 Exam 3 Final Exam Overall 390-120-2 84.0 80.0 83.0 72.0 390-267-4 98.0 92.0 91.0 99.0 390-299-8 54.0 56.0 51.0 65.0 390-354-3 98.0 95.0 90.0 94.0 390-423-5 83.0 83.0 74.0 77.0 390-433-8 52.0 63.0 58.0 53.0 390-452-0 97.0 98.0 93.0 91.0 390-485-7 87.0 77.0 83.0 87.0 390-648-6 94.0 91.0 92.0 97.0 390-699-6 74.0 75.0 50.0 64.0 391-260-8 96.0 84.0 95.0 96.0 391-273-8 73.0 75.0 78.0 74.0 391-315-1 89.0 89.0 73.0 82.0 391-373-1 99.0 94.0 85.0 93.0 391-383-6 92.0 93.0 96.0 80.0 391-500-8 81.0 88.0 78.0 88.0 391-642-7 72.0 80.0 83.0 86.0 I need to find the weighted average of the first students four exams and then use auto fill to complete the remainder. 09-16-2011, 02:22 PM #2 Super Member Join Date Dec 2009 White Mountains, AZ Blog Entries 09-16-2011, 02:28 PM #3 09-16-2011, 02:38 PM #4 09-16-2011, 02:39 PM #5 Super Member Join Date Apr 2007 Central CA - But otherwise, NOTW Blog Entries 09-16-2011, 02:43 PM #6 Join Date Nov 2010 09-16-2011, 02:53 PM #7 09-16-2011, 02:54 PM #8 Join Date Nov 2010 09-16-2011, 02:57 PM #9 09-16-2011, 02:57 PM #10
{"url":"http://www.quiltingboard.com/general-chit-chat-non-quilting-talk-f7/who-knows-excel-t153470.html","timestamp":"2014-04-17T13:11:38Z","content_type":null,"content_length":"70335","record_id":"<urn:uuid:13ebc576-010f-489f-b86b-69dbee7e96fc>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
Dynamic Typing Let us take a brief excursion and apply the skills and techniques learned so far to analysing the notion of “dynamic typing”. Two steps: • model a dynamic language (remember: syntax, statics, dynamics). PFPL 18. • consider correspondence to the typing approrach we’ve seen so far. PFPL 19. What is “dynamic typing”? 1. “safe” (for some value of the word “safe”). 2. just runs – syntax (all but) defines the language (minimal or no statics). 3. values cary their classification at runtime. Let’s model a dynamic language! A Theory of Dynamic Typing language: PCF with one type: dyn. Two classes of values: numbers and functions. zero and succ(d) are not values. Statics: trivial, just check for erroneous free variables. x1 ok, ..., xn ok |- d ok Dynamics: see book (rules 18.1 - 18.4). what is guaranteed by this language? Lemma: class checking If d val then 1. either d is_num n for some n, or d isnt_num. 2. either d is_fun x.d’, or d isnt_fun. Pf: shallow case analysis of the rules for d val. Theorem: Progress (=safety). If d ok, then either d val or d err, or there exists d’ such that d -> d’. Case d = succ(d’). Need to use Lemma 18.1. Meaning… execution is well defined! This is the essential differentiation between safe and unsafe. Even though errors can occur in Dynamic PCF, executation is fully defined. Extension: lists d ::= nil | cons(d1;d2) | ifnil(d; d0; x,y.d1) What should the dynamics be? Can we take the same approach as for numbers where we have a class of actual numbers? Doesn’t scale. So, need to weaken ifnil to delay error checking. We can now easily form nonsensical lists and ifnil won’t catch the error right away. cons(zero; cons(zero; \(x) x)) Reality check Dynamic language designers are not “lazy type theorists” – type theorists who didn’t want to bother building a proper type checker. They think about languages differently. Specifically, tagged values, or classes – no canonical forms lemma. So, we drop specialized “if” constructs in favor of a general conditional, predicates and deconstructors: d ::= cond(d; d0; d1) | zero?(d) | succ(d)? | nil?(d) | cons?(d) | pred(d) | car(d) | cdr(d) Conditional distinguishes between nil and anything else, where nil corresponds to “false”. Example: list append function. fix append is \(x) \(y) cond(x; cons(car(x); append(cdr(x))(y)); y) WARNING: discussions of static vs. dynamic typing are littered with strawman arguments (on both sides!). Unfortunately, few actually understand the underlying theory well enough to make an informed Disadvantages of dynamic typing: 1. Late debugging 2. Cost of tags and/or cost of “smart” compiler which can optimize to (somewhat) make up for the cost of the language design. Example: addition function (section 18.3) Some related reading: 1. Abada et al. Dynamic Typing in a Statically Typed Language 2. Henglein. Dynamic Typing: Syntax and Proof Theory
{"url":"http://www.cs.cmu.edu/~rwh/courses/typesys/notes/Dynamic-Typing.html","timestamp":"2014-04-19T03:34:01Z","content_type":null,"content_length":"4793","record_id":"<urn:uuid:7fb85dd2-73b9-4a3b-a433-52c835fe5c30>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
Prelab Activity B: Current Division: For The Circuit ... | Chegg.com Image text transcribed for accessibility: Prelab Activity B: Current Division: For the circuit shown in figure 2 assume R1 = 10 k Ohm, R2= 10 10 k Ohm and R3 = 20 k Ohm. Write the two loop equations. Using KCL. write I2 in terms of the currents I and I1. Substitute I2 from part (b) into pan (a), solve the two equations and find currents I, I1 and I2. Now, find the parallel combination of resistors R2 and R3 and find the current I using Ohm's law. Given the current I found in pan (d), find I1 and I2 using current division. Compare the currents obtained with those obtained in pan (c). Figure 2 Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/prelab-activity-b-current-division-circuit-shown-figure-2-assume-r1-10-k-ohm-r2-10-10-k-oh-q3557741","timestamp":"2014-04-19T00:47:23Z","content_type":null,"content_length":"20192","record_id":"<urn:uuid:de94ddc7-9427-4a41-a308-23e0e389c5a0>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
Poisson distribution February 2nd 2011, 04:03 AM #1 Junior Member Jun 2010 Poisson distribution A guy has a chicken that gives him 10 egg/month it is known that if the eggs are not sold within a month they are no longer in good conditions to be sold. If the number of people that wants eggs per month follows a poisson distribution with mean 8 and if the guy has a profit of 7 euros for selling an egg and loses 3 if he doesn't sell if whats the expected profit per month? How can I get the right answer? I multiplied the mean number of people that try to buy eggs by the profit of selling that amount and I got the right answer but I dont think thats a valid way of doing it or is it? A guy has a chicken that gives him 10 egg/month it is known that if the eggs are not sold within a month they are no longer in good conditions to be sold. If the number of people that wants eggs per month follows a poisson distribution with mean 8 and if the guy has a profit of 7 euros for selling an egg and loses 3 if he doesn't sell if whats the expected profit per month? How can I get the right answer? I multiplied the mean number of people that try to buy eggs by the profit of selling that amount and I got the right answer but I dont think thats a valid way of doing it or is it? The expected profit is (this probably can be simplified somewhat): $\displaystyle \left[\sum_{n=0}^{10} [7 \times n-3\times (10-n)] \times p(n,8)\right]+\left[7\times \sum_{n=11}^{\infty}p(n,8)\right]$ where $p(n,8)$ denotes the probability of $n$ from a Poission distribution with mean $8$. if you could repost it but this time with no code I would be very gratefull man since I cant see what's in there the way you wrote it February 2nd 2011, 04:25 AM #2 Grand Panjandrum Nov 2005 February 2nd 2011, 04:30 AM #3 Junior Member Jun 2010
{"url":"http://mathhelpforum.com/advanced-statistics/169999-poisson-distribution.html","timestamp":"2014-04-20T04:54:24Z","content_type":null,"content_length":"37053","record_id":"<urn:uuid:8c01d799-7df5-4002-b6af-a952491f96e8>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Double angle formulae February 18th 2010, 06:24 AM #1 Junior Member Feb 2010 Double angle formulae Can someone please show me the process of answering this question? By writing sin3X as sin(2X+X), show that sin3X=3sinX−4sin^3X I know that sin2X=2sinXcosX and the other trig identities, but I cannot understand how to substitute it in, in particular the answer follows as: sin3X = sin2XcosX + cos2XsinX - it is this step that I am struggling with! Can someone please show me the process of answering this question? By writing sin3X as sin(2X+X), show that sin3X=3sinX−4sin^3X I know that sin2X=2sinXcosX and the other trig identities, but I cannot understand how to substitute it in, in particular the answer follows as: sin3X = sin2XcosX + cos2XsinX - it is this step that I am struggling with! Use the hint given at the start of the question and recall that $\sin(A+B) = \sin A \cos B + \cos B \sin A$ $\sin(2X+X) = \sin (2x) \cos (x) + \cos (2x) \sin(x)$ February 18th 2010, 06:40 AM #2
{"url":"http://mathhelpforum.com/trigonometry/129448-double-angle-formulae.html","timestamp":"2014-04-21T09:23:50Z","content_type":null,"content_length":"33541","record_id":"<urn:uuid:c6a937be-8b84-4460-8c37-d0c75cc66be3>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
Rudolf E. Kalman From GHN Rudolf E. Kalman Rudolf Kalman was born in Budapest, Hungary on 19 May 1930. The son of an electrical engineer he decided to follow in his father's footsteps. He immigrated to the United States and obtained a Bachelor's and Master's degree in Electrical Engineering from M.I.T. in 1953, and 1954 respectively. He left M.I.T. and continued his studies at Columbia University where he received his ScD. in 1957 under the direction of Professor J. R. Ragazzini. His early interest in control systems was evident by his research at M.I.T. and especially at Columbia. His early research was based upon the notion of state variable representations, was mathematically advanced but motivated by practical problems. He also showed, even at these early years, a highly individual approach to research which has continued during the remainder of his brilliant career. From 1957 to 1958 Kalman was employed as a staff engineer at the IBM Research Laboratory in Poughkeepsie, N. Y. During that period of time he made important contributions to the design of linear sampled-data control systems using quadratic performance criteria, as well as in the use of Lyapunov theory for the analysis and design of control systems. He foresaw at that time the importance of the digital computer for large-scale systems. In 1958 Kalman joined the Research Institute for Advanced Study (RIAS) which was started by the late Solomon Lefschetz. He started as a research mathematician and was promoted later to Associate Director of Research. It was during that period of time (1958-1964) that he made some of his truly pioneering contributions to modern control theory. His lectures and publications during that time period are indicative of his tremendous creativity and his search for a unified theory of control. His research in fundamental systems concepts, such as controllability and observability, helped put on a solid theoretical basis some of the most important engineering systems structural aspects. He unified, in both the discrete-time and continuous-time case, the theory and design of linear systems with respect to quadratic criteria. He was instrumental in introducing the work of Caratheodory in optimal control theory, and clarifying the interrelations between Pontryagin's maximum principle and the Hamilton-Jacobi-Bellman equation, as well as variational calculus in general. His research not only stressed mathematical generality, but in addition it was guided by the use of the digital computer as an integral part of the design process and of the control system implementations. It was also during his stay at RIAS that Kalman developed what is perhaps his most well known contribution, the so-called "Kalman filter". He obtained results on the discrete-time (sampled data) version of this problem in late 1958, and early 1959. He blended earlier fundamental work in filtering by Wiener, Kolmogorov, Bode, Shannon, Pugachev and others with the modern state-space approach. His solution to the discrete-time problem naturally led him to the continuous-time version of the problem and in 1960-1961 he developed, in collaboration with R. S. Bucy, the continuous-time version of the "Kalman filter". The Kalman filter, and its later extensions to nonlinear problems, represents perhaps the most widely applied by-product of modern control theory. It has been used in space vehicle navigation and control (e.g. the Apollo vehicle), radar tracking algorithms for ABM applications, process control, and socioeconomic systems. Its applications popularity is due to the fact that the digital computer is effectively used in both the design phase as well as the implementation phase. From a theoretical point of view it brought under a common roof related concepts of filtering and control, and the duality between these two problems. In 1964 Kalman went to Stanford University where he was associated with the departments of Electrical Engineering, Mechanics, and Operations Research. During that period his research efforts shifted, toward the fundamental issues associated with realization theory, and algebraic system theory. Once more he opened up new research avenues in a new and basic area, and his contributions have helped shape up a new field of research in modern system theory. In 1971 Kalman was appointed graduate research professor at the University of Florida, Gainesville, Florida. He became director of the Center for Mathematical System Theory and his education and research activities involve the departments of electrical engineering, industrial engineering, and mathematics. He also acts as a scientific consultant to research centers in the Ecole des Mines de Paris, France. Kalman not only shaped the field of modern control theory, but he has been instrumental in promoting its wide usage. His magnetic personality and his numerous lectures in universities, conferences, and industry have attracted countless researchers who were greatly influenced by his ideas. He has acted as a catalytic force in international exchange of ideas. Kalman has published over fifty technical articles, and has delivered numerous lectures. In 1962 he was named as the Outstanding Young Scientist of the Year by the Maryland Academy of Sciences. He became Fellow of the IEEE in 1964. He is a member of many professional societies and serves on the editorial board of numerous journals. He is the co-author of the book Topic in Mathematical System Theory, McGraw-Hill, 1969. He received the IEEE Medal of Honor in 1974, "For pioneering modern methods in system theory, including concepts of controllability, observability, filtering, and algebraic
{"url":"http://www.ieeeghn.org/wiki6/index.php?title=Rudolf_E._Kalman&oldid=44183","timestamp":"2014-04-18T22:29:18Z","content_type":null,"content_length":"34324","record_id":"<urn:uuid:2c754f8e-05a8-4b7f-9951-5c3b66e9de16>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: In a 30-60-90 triangle, the shorter leg is opposite the _____ degree angle and the longer leg is opposite the ________ degree angle. • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/516ff61ee4b01bc709201dab","timestamp":"2014-04-17T13:07:38Z","content_type":null,"content_length":"63070","record_id":"<urn:uuid:044895b7-97c4-4eac-95e4-5d9a471fc4f6>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
A reference book for Schur's lemma up vote 0 down vote favorite We want to find the reference book for some version of the schur's lemma which covers the following result Let A be an assoiative algebra over {\mathbb C} with countable basis, then any central element acts on any simple A-module as a scalar. rt.representation-theory reference-request Why is it relevant that the algebra has a countable basis? – André Henriques Jun 25 '12 at 18:58 2 @André: Take $A = M = \mathbb{C}(x)$. Every element is central, but only a few act as scalars! – Evan Jenkins Jun 25 '12 at 19:32 I'd bet it must be also in Dixmier's Universal eneveloping algebras. – Vít Tuček Jun 27 '12 at 12:52 add comment 4 Answers active oldest votes This is (an immediate consequence of) Lemma 2.1.3(b) in Chriss-Ginzburg. up vote 4 down vote accepted add comment Doc, if you want to be anal with your references, you should quote Amitsur, A. S. Algebras over infinite fields. Proc. Amer. Math. Soc. 7 (1956), 35–48. up vote 3 down Otherwise, this is a well-known fact and you can just refer to it as "Amitsur's Trick" or "Noncommutative Nullstellensatz". Chapter 9 of vote McConnell-Robson-Noncommutative-Noetherian-Rings is devoted entirely to this property and its finer variations. add comment From Wikipedia: • David S. Dummit, Richard M. Foote. Abstract Algebra. 2nd ed., pg. 337. • Lam, Tsit-Yuen (2001), A First Course in Noncommutative Rings, Berlin, New York: Springer-Verlag, ISBN 978-0-387-95325-0 From Google Books: up vote 2 down • William Fulton, Joe Harris. Representation Theory. vote • William Arveson. An Invitation to C*-Algebras. I hope that helps you a bit. P.s: I wanted to post this in a comment, but the comment button isn't available for some reason. Thanks, i only have the book of Lam in hand. However i don't find it yet. – r_l Jun 25 '12 at 19:11 Johan, just so you know, you need at least 50 reputation in order to leave comments on posts that are not your own. I imagine you will get there pretty soon, though! – B R Jun 25 '12 at 19:27 @BR Aha! Well, You are right. I just passed that limit (-; – jmc Jun 25 '12 at 19:38 add comment This is also in Bourbaki's Algebra 8 (most recent edition), Section 3, number 2, Example, page 43. This reference is online if you have access to SpringerLink. up vote 2 down vote add comment Not the answer you're looking for? Browse other questions tagged rt.representation-theory reference-request or ask your own question.
{"url":"http://mathoverflow.net/questions/100618/a-reference-book-for-schurs-lemma/100738","timestamp":"2014-04-18T08:51:11Z","content_type":null,"content_length":"67410","record_id":"<urn:uuid:24f231ce-a75a-4f48-b545-281ba20ade08>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
Passaic Calculus Tutor Find a Passaic Calculus Tutor ...Currently, I am taking graduate courses in math in NYC. I am a patient and dynamic teacher and can not wait to help you with your tutoring needs! AyeletI played for a NCAA basketball team. 17 Subjects: including calculus, chemistry, reading, physics ...Integrals are applied to calculating volume as well as area. Derivatives and integrals of trigonometric, exponential, and logarithmic functions are studied, with an introduction to differential equations. Special integration techniques, including substitution, integration by parts and partial fractions are studied. 6 Subjects: including calculus, geometry, algebra 1, algebra 2 Hello! I'm Hannah. I graduated from Princeton University with an engineering degree in Computer Science and a minor in Theater and I love tutoring! 37 Subjects: including calculus, chemistry, French, physics Over the past year, I have tutored more than a dozen students at all academic levels (K-12 & undergrad). I enjoy watching my students learn and grow over time, as they begin to grasp new material and develop an affinity for learning. I strive to imbed a deeper understanding of most subjects than is... 38 Subjects: including calculus, Spanish, algebra 1, GRE ...I taught high school math (Algebra 1 through Calculus) for 8 years, and I am expert in all math concepts tested on the SAT exam. I taught high school math (Algebra 1 through Calculus) for 8 years and am certified in New York. I taught high school math (Algebra 1 through Calculus) for 8 years. 10 Subjects: including calculus, algebra 1, algebra 2, SAT math
{"url":"http://www.purplemath.com/passaic_nj_calculus_tutors.php","timestamp":"2014-04-18T13:56:38Z","content_type":null,"content_length":"23612","record_id":"<urn:uuid:98685402-a04b-4f8d-848d-8c12a5866a48>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
Cooperative Mechanisms in Cardiac Muscle (model 5)Model Structurereaction_diagrams Catherine Lloyd Bioengineering Institute, University of Auckland In cardiac muscle, steady-state force-Ca2+ (F-Ca) relations exhibit more cooperativity than that predicted by the single Ca2+ binding site on troponin. The exact mechanisms underlying this high cooperativity are unknown. In their 1999 paper, J. Jeremy Rice, Raimond L. Winslow and William C. Hunter present five potential models for force generation in cardiac muscle (see the figure below). These models were constructed by assuming different subsets of three possible cooperative mechanisms: Cooperative mechanism 1 is based on the theory that cross bridge formation between actin and myosin increases the affinity of troponin for Ca2+. Cooperative mechanism 2 assumes that the binding of a cross bridge increases the rate of formation of neighbouring cross bridges and that multiple cross bridges can actin activation even in the absence of Ca2+. Cooperative mechanism 2 simulates end-to-end interactions between adjacent troponin and tropomyosin. Comparison of putative cooperative mechanisms in cardiac muscle: length dependence and dynamic responses J. Jeremy Rice, Raimond L. Winslow and William C. Hunter, 1999, American Journal of Physiology , 276, H1734-H1754. (Full text and PDF versions of the article are available for Journal Members on the American Journal of Physiology website.) PubMed ID: 10330260 State diagrams for the five models of isometric force generation in cardiac muscle. T represents tropomyosin, TCa is Ca2+ bound tropomyosin, N0, N1, P0 and P1 are the non-permissive and permissive tropomyosin states. All the models are similar in that they are structured around a functional unit of troponin, tropomyosin and actin. Tropomyosin can exist in four states, two permissive or two non-permissive (referring to whether or not the actin sites are available for binding to myosin and hence cross bridge formation). Depending on the model, one or more cross bridges exist, and these are either weakly-bound (non-force generating) or strongly bound (force generating). The paper (cited below) tests the behaviours of the five models of force generation in cardiac myocytes. The first two models provide a baseline of performance for comparison. Models 3 to 5 are developed to incorporate more cooperative mechanisms. From the results of these simulations, which were compared to and consistent with experimental data, it is hypothesised that multiple mechanisms of cooperativity may coexist and contribute to the responses of cardiac muscle.
{"url":"http://models.cellml.org/workspace/rice_winslow_hunter_1999/@@rawfile/07a793391882207bef971527208ebe266904b6a7/rice_winslow_hunter_1999_e.cellml","timestamp":"2014-04-17T18:50:12Z","content_type":null,"content_length":"41699","record_id":"<urn:uuid:f141618f-4aa0-4722-9c3f-a889f72163b9>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Random-Approx 2013 Last week I attended the Random-Approx conference at Berkeley. I missed quite a few talks as I was also settling in my new office for the semester at the Simons Institute so I will just report on the three invited talks: Luca Trevisan gave a talk on spectral graph theory. He first went through the basics: the eigenvalues of the normalized Laplacian Another example is the result of Arora, Barak and Steurer which says that a small value for the One thing that I find fascinating is that as far as I understand we still have pretty much no idea of what ‘type’ of properties are encoded in the spectrum of a graph. Of course there is a good reason for this: computing the spectrum is computationally easy so this question is directly related to asking what properties of a graph are easy to compute, and this latter question is far far beyond our current knowledge. One example of a highly non-trivial property which is ‘hidden’ in the spectrum (and the corresponding eigenbasis) is a Szemeredi decomposition. Recall that the Szemeredi regularity lemma states roughly the following: fix a precision here together with a proof based on the spectrum of the graph. Santosh Vempala gave a talk on high-dimensional sampling algorithms. I already blogged about this since he gave the same talk at ICML, see here. Persi Diaconis gave a very nice board talk about how to use algorithms to prove theorems in probability theory. Of course the standard approach is rather to go the other way around and use probability to derive new algorithms! I discussed a somewhat similar topic in the early days of this blog when I suggested that one could use results in statistics to prove new theorems in probability theory (again one usually do the opposite), see here. In his talk Diaconis illustrated the power of algorithms as a proof method for probability statements with the following problem. Let Bell number for some function this paper is to prove an existence theorem for such an expression for ‘reasonable’ functions beautiful strategy of Stam. First one has to know the Dobinski’s formula: (Yup that’s right, it’s an amazing formula!) Now pick an integer P.S: In the last post I gave links to the videos for COLT 2013 and ICML 2013, in case you are interested here are the direct links to my talks in these events: - Bounded regret in stochastic multi-armed bandits (COLT), paper is here. - Multiple Identifications in Multi-Armed Bandits (ICML), unfortunately for this one the sound is not correctly synchronized so it’s probably better to just look at the paper directly. - Here is another talk I did during the summer on an older paper, it’s about 2 Responses to "Random-Approx 2013" This entry was posted in Random graphs, Theoretical Computer Science. Bookmark the permalink.
{"url":"https://blogs.princeton.edu/imabandit/2013/08/27/random-approx-2013/","timestamp":"2014-04-19T09:42:50Z","content_type":null,"content_length":"54321","record_id":"<urn:uuid:a8bcd262-5719-4805-bd88-80179a0972a5>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
Peculiar vertex-isoperimetric inequality on the discrete torus (and generalization) up vote 1 down vote favorite Consider a discrete even torus $G=(V,E)$, i.e. the graph on $\lbrace 0,1,\dots,n-1 \rbrace^2$, $n$ even, where two vertices are connected by an edge only if they differ by 1 in only one coordinate, modulo $n$. $G$ is a bipartite graph. Call $O$ and $E$ the two sets into which the vertex set $V$ is partitioned (consisting of the odd and even vertices, respectively). Given $A \subset V$, denote by $\partial A$ the vertex boundary of $A$, i.e. the set of all vertices in $V\setminus A$ whose graph distance from $A$ is exactly $1$. Question: is there any vertex-isoperimetric inequality (VIP) of the form $$\min_{|A|=m, A \subseteq O} |\partial A| \geq f(m),$$ i.e. where the minimum is taken only over the subsets of odd vertices of $V$? Bonus question: how much can be said in general about the same question, with $G$ being a bipartite regular graph? co.combinatorics graph-theory isoperimetric-problems isoperimetry 2 Is this homework? I vote to close. – Ori Gurel-Gurevich May 3 '13 at 20:23 If it is a homework, then poorly digested, since one can take $f(m)=1$. – Misha May 3 '13 at 23:28 add comment 1 Answer active oldest votes The case $n=2$ is the usual discrete cube. Thinking of this as the power set of $[n]$, Harper's theorem tells us that initial segments of the simplicial order (ordering by set size then lexicographically) minimise the vertex boundary. If we want to minimise over odd or even sets then the best possible result we could hope for is true: initial segments of the simplicial order restricted to the odd or even sets minimise vertex boundary in this new setting. This has been rediscovered numerous times independently; see the references in this paper. In particular, Harper's theorem tells us that Hamming balls minimise vertex boundary. The same result is true in tori of even "side length"; see Bollobás and Leader. Based on the $n=2$ case we would expect the intersection of Hamming balls with the even or odd sets to minimise the vertex boundary for your problem, and I'd be surprised if this wasn't true. up vote 3 down vote At least one proof of Harper's theorem (via codimension 1 compressions) goes through essentially unchanged to prove the restricted parity version. I'm also told that it can be deduced accepted directly from Harper's theorem itself. So if you want the result for general tori you could try to adapt Bollobás and Leader's proof, or try to deduce the result from the unrestricted I expect the question for general regular bipartite graphs is hard, because good isoperimetric results are only known for very special graphs like grids and cubes. Working on this problem I also bumped into a paper of O.Riordan "An Ordering on the Even Discrete Torus" (1998), which strengthens the result of Bollobás and Leader. I strongly believe that the restriction of this order to the set $O$ of odd nodes is the isoperimetric order which tackles my problem. However, I didn't succeed in proving this formally. I also don't see how it follows directly from Harper's theorem, neither how to make the proof work (the "usual compressions" don't seem to be enough), but probably I'm not familiar enough with these techniques, since it isn't my area of expertise. – Ale Zok May 14 '13 at 20:32 add comment Not the answer you're looking for? Browse other questions tagged co.combinatorics graph-theory isoperimetric-problems isoperimetry or ask your own question.
{"url":"http://mathoverflow.net/questions/129546/peculiar-vertex-isoperimetric-inequality-on-the-discrete-torus-and-generalizati/129588","timestamp":"2014-04-16T11:10:06Z","content_type":null,"content_length":"56272","record_id":"<urn:uuid:b36cadab-22e2-4a8a-a48f-85f129fa6ab8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
Growth of the reciprocal gamma function in the critical strip up vote 2 down vote favorite I was wondering if there are any results that studied the growth of $\left|\frac{1}{\Gamma(s)}\right|$ where $0 < \Re(s) < 1$ and as $\Im(s) \to \infty$? Any pointers to any results, papers, references will be highly appreciated. gamma-function complex-analysis add comment 1 Answer active oldest votes According to Gradshteyn-Ryzhik: Tables of integrals... 8.328.1, for fixed real $x$ and for $|y|\to\infty$ one has $$ |\Gamma(x+iy)|\sim\sqrt{2\pi}e^{-\frac\pi 2|y|}|y|^{x-\ up vote 5 down vote frac12}. $$ 1 And in general, Stirling formula (asymptotic expansion) holds as $|z|\to\infty$ uniformly with respect to $\arg z$ in every angle of the form $|\arg z|<\pi-\epsilon,\; \ epsilon>0$. – Alexandre Eremenko Apr 20 '13 at 13:27 anton,Alexandre Thanks! :) – Roupam Ghosh Apr 21 '13 at 2:37 add comment Not the answer you're looking for? Browse other questions tagged gamma-function complex-analysis or ask your own question.
{"url":"http://mathoverflow.net/questions/128162/growth-of-the-reciprocal-gamma-function-in-the-critical-strip/128164","timestamp":"2014-04-17T18:56:40Z","content_type":null,"content_length":"51779","record_id":"<urn:uuid:ac2f2cc8-6aa6-4c4b-a5e0-ff31002e33f5>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
How many words do you know? | Word Dynamo Printable Algebra Word Problems flash cards 41 words Created by Dictionary.com Print Cards Print List Print to added to to join a number to another number or total addition process of uniting two or more numbers into one sum, represented by the plus amount sum total of two or more quantities or sums, aggregate analogies form of reasoning in which one thing is inferred to be similar to another thing area size a surface takes up, typically measured in square units base bottom of a shape which could be a line or face of a solid by a factor of to multiply by a particular number combine like terms adding together terms whose variables and their exponents are the same combined indicates two or more numbers being added together complement set of all the elements of a universal set not included in a given set consecutive integers numbers that follow each other in order from smallest to largest decreased by to get smaller in size or number by a specific amount difference to subtract one value from another evaluation act of ascertaining or fixing the value or worth of even number whole numbers that can be divided by two evenly identify the variables act of determining the unknown values in an equation increased by to get larger by size or number integer one of the positive or negative numbers or zero least common multiple smallest number that is the multiple of two or more other numbers less decreased by, the indication of a minus symbol, not as many as less than indicates the placement of a minus symbol, shows the relationship between two numbers like terms terms whose variables and their exponents are the same minus decreased by, to subtract more than quantifier meaning greater in size or amount multiplied by number is added to itself a number of times out of divided by, in division this is divisor percentage number out of 100 or divided by 100, often expressed with percentage symbol product of result when two numbers are multiplied proportion part to whole comparison written as an equation quotient comparative value of two or more numbers rate time distance formula rt equals d ratio of comparative value of two or more numbers simplified to reduce the numerator or denominator in a fraction to the smallest number possible sum aggregate of two or more numbers term one part of an algebraic expression which may be a number, variable, or product of both together indicating addition, or the joining of two or more amounts total of sum or whole amount, the result of adding, total unit another name for one or a place value for one, the unit column is the ones column added to to join a number to another number or total addition process of uniting two or more numbers into one sum, represented by the plus amount sum total of two or more quantities or sums, aggregate analogies form of reasoning in which one thing is inferred to be similar to another thing area size a surface takes up, typically measured in square units base bottom of a shape which could be a line or face of a solid by a factor of to multiply by a particular number combine like terms adding together terms whose variables and their exponents are the same combined indicates two or more numbers being added together complement set of all the elements of a universal set not included in a given set consecutive integers numbers that follow each other in order from smallest to largest decreased by to get smaller in size or number by a specific amount difference to subtract one value from another evaluation act of ascertaining or fixing the value or worth of even number whole numbers that can be divided by two evenly identify the variables act of determining the unknown values in an equation increased by to get larger by size or number integer one of the positive or negative numbers or zero least common multiple smallest number that is the multiple of two or more other numbers less decreased by, the indication of a minus symbol, not as many as less than indicates the placement of a minus symbol, shows the relationship between two numbers like terms terms whose variables and their exponents are the same minus decreased by, to subtract more than quantifier meaning greater in size or amount multiplied by number is added to itself a number of times out of divided by, in division this is divisor percentage number out of 100 or divided by 100, often expressed with percentage symbol product of result when two numbers are multiplied proportion part to whole comparison written as an equation quotient comparative value of two or more numbers rate time distance formula rt equals d ratio of comparative value of two or more numbers simplified to reduce the numerator or denominator in a fraction to the smallest number possible sum aggregate of two or more numbers term one part of an algebraic expression which may be a number, variable, or product of both together indicating addition, or the joining of two or more amounts total of sum or whole amount, the result of adding, total unit another name for one or a place value for one, the unit column is the ones column
{"url":"http://dynamo.dictionary.com/42910/algebra-word-problems/print","timestamp":"2014-04-21T05:23:49Z","content_type":null,"content_length":"54508","record_id":"<urn:uuid:0aecd473-81f0-43e1-be0d-b45a53be14c4>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
Publications - Sparse RL in High Dimensions • Alexandra Carpentier and Rémi Munos. "Bandit Theory Meets Compressed Sensing for High-Dimensional Stochastic Linear Bandit". Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS-2012), April 2012. • Matthew Hoffman, Alessandro Lazaric, Mohammad Ghavamzadeh, & Rémi Munos. "Regularized Least Squares Temporal Difference Learning with Nested L2 and L1 Penalization". Ninth European Workshop on Reinforcement Learning (EWRL-2011), Athens, Greece, September 2011. • Mohammad Ghavamzadeh, Alessandro Lazaric, Rémi Munos, & Matthew Hoffman. "Finite-Sample Analysis of Lasso-TD". Proceedings of the Twenty-Eighth International Conference on Machine Learning (ICML-2011), pp. 1177-1184, Bellevue, WA, June 2011. • Aviv Tamar, Dotan Di Castro, and Ron Meir. "Integrating Partial Model Knowledge in Model Free Reinforcement Learning Algorithms". Proceedings of the Twenty-Eighth International Conference on Machine Learning (ICML-2011), Bellevue, WA, June 2011. • Mohammad Ghavamzadeh, Alessandro Lazaric, Odalric Maillard, & Rémi Munos. "LSTD with Random Projections". Accepted for Spotlight Presentation (73 out of 1219 submissions). Proceedings of the Twenty-Fourth Annual Conference on Advances in Neural Information Processing Systems (NIPS), pp. 721-729, 2010. • Odalric Maillard and Rémi Munos. "Scrambled Objects for Least-Squares Regression". Proceedings of the Twenty-Fourth Annual Conference on Advances in Neural Information Processing Systems (NIPS), pp. 1549-1557, 2010. • Dotan Di Castro and Shie Mannor. "Adaptive Bases for Reinforcement Learning". Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, pp. 312-327, 2010. • Dotan Di Castro and Shie Mannor. "Adaptive Bases for Q-learning". Proceedings of the Forty-Ninth IEEE Conference on Decision and Control (CDC), 2010. • Dotan Di Castro and Shie Mannor. "Tutor Learning Using Linear Constraints in Approximate Dynamic Programming". Proceedings of the Forty-Eighth Allerton Conference on Communication, Control, and Computing, 2010. • Odalric Maillard and Rémi Munos. "Compressed Least-Squares Regression". Proceedings of the Twenty-Third Annual Conference on Advances in Neural Information Processing Systems (NIPS-2009), pp. 1213-1221, 2009.
{"url":"https://sites.google.com/site/sparserl/publications","timestamp":"2014-04-20T19:07:15Z","content_type":null,"content_length":"21135","record_id":"<urn:uuid:4cf23800-8b4b-4a6f-ae0f-43e68986b122>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: plot 3 normal distribution on one graph Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: plot 3 normal distribution on one graph From Phil Clayton <philclayton@internode.on.net> To statalist@hsphsun2.harvard.edu Subject Re: st: plot 3 normal distribution on one graph Date Tue, 6 Nov 2012 17:52:23 +1100 The -#delimit- command can only be run from a do-file or ado-file. See -help delimit- So I would recommend trying the code from the do-file editor. It works for me. On 06/11/2012, at 5:24 PM, yannan shen <yannan2010@gmail.com> wrote: > Dear statalist, > I need to plot three normal distributions on one graph, I want them to > be over each other, not side by side. > The three distributions are: > y1~norm(1, 2) > y2~norm (3, 4) > y3 is a weighted average of y1 and y2, let's say y3=.5*y1+.5*y2 > I found the following code from previous statalist discussion that > looks very helpful: > /* Plot two normal distributions */ > #delimit ; > graph twoway (function y=normalden(x,1,2), range(-10 20) lw(medthick)) > (function y=normalden(x,5,3), range(-10 20) lw(medthick)), > title("Normal-Distribution comparison") > xtitle("Normal", size(medlarge)) ytitle("") > xlabel(-10(2)20) > xscale(lw(medthick)) yscale(lw(medthick)) > legend(off) > graphregion(fcolor(white)); > #delimit cr > /* Stata code ends */ > However, I copied and pasted the exact code into stata command window > and the first line returns an error message "Unknown #command" > Why??? I googled the error message but was not able to find a > solution. I cleared the program history, closed and reopned stata, and > tried on different computers, but was unable to find spot the problem. > Can someone please help me? I am using stata IC 12.0 > Thank you very much! > * > * For searches and help try: > * http://www.stata.com/help.cgi?search > * http://www.stata.com/support/faqs/resources/statalist-faq/ > * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/faqs/resources/statalist-faq/ * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2012-11/msg00158.html","timestamp":"2014-04-18T13:23:37Z","content_type":null,"content_length":"9667","record_id":"<urn:uuid:c27e36ae-deef-4ec4-b108-79d941026f40>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
Teaching Load Summary C. F. Niederriter - April 1997 For the past several years, the Dean's office and the Faculty Senate have struggled with defining teaching loads at Gustavus for the purposes of determining overload situations and staffing allocations. Several ad hoc committees have looked into the matter, without resolution. At least since the 1992-1993 school year, the Registrar has tabulated Faculty Load data by department, including number of sections (or contact hours) taught, full time load (as determined by the department), enrollment, and enrollment per faculty. All of the data is confused by the various ways that we calculate and/or discuss teaching load; courses, contact hours, and combinations of the two. This has been (and currently is) the source of much frustration for the committees discussing the issue. Some have suggested that things would be better if only we used semester hours instead of courses as a measure, but this ignores the inherent differences in the courses we teach and the evaluation tools that we use. The author has taken data provided by the registrar's office for both the Fall Semester of 1996 and the Spring Semester of 1997 and tabulated a number of parameters associated with teaching loads. The data provided included First Term Seminars (for the Fall) and Curriculum II (both semesters), independent studies, etc. However, for the purposes of calculation, courses that have arranged times were not counted, including such things as music lessons and independent study research. This was not done as a result of any prejudice, but due to the fact that the number of contact hours and the combined scores could not be measured for arranged courses. This obviously effects some departmental data more than others, particularly Music, some science departments, and other areas with a significant number of arranged courses. Figure 1: Average number of courses taught per faculty by department for Fall 1996. If one looks at the average number of courses taught in a semester by department, as shown in Figures 1 and 2, it is apparent that some departments are lower than the college average (2.8 for Fall '96 and 2.7 for Spring '97) and some are higher. Departments that teach a significant number of laboratory sections are generally lower because labs often carry little or no credit, although they take a substantial amount of faculty time. Music appears low on these plots only because the author could not properly account for music lessons. Some departments clearly are teaching more courses than average, as well. However, it should be noted that all but five departments are within one standard deviation of the college average (mean), three low and two high. Figure 2: Average number of courses taught per faculty by department for Spring 1997. For a number of years contact hours have been used by some departments, mainly in the sciences and primarily for simplicity in comparing to national standards. As one might expect, the departments that appeared low in number of courses, are higher on a graph of contact hours by department, as shown in figures 3 and 4. The inverse is also true to some extent, departments that are higher on the courses graphs are lower on the contact hour graphs. Again, it should be noted that all but 4 or 5 departments are within one standard deviation of the mean, 2 on the high side and 2 or 3 (Spring) on the low side. The disclaimer about music lessons applies here as well. Figure 3: Average number of contact hours per faculty member for Fall Semester. Figure 4: Average number of contact hours per faculty member for Spring Semester. Another important factor to take into consideration when discussing teaching load is the number of students in the courses taught. Plots of number of students per FTE are shown in figures 5 and 6. As we might expect, there are some departments who teach an above average number of students and some who teach fewer than the average. But only 5 departments are outside one standard deviation from the Figure 5: Average number of students per faculty for the Fall Semester. From the data presented so far, one should conclude that each of these measures of teaching load, number of courses, contact hours, or number of students, is inadequate. Clearly faculty in different departments teach in different ways, some meeting students more often, some less often. Some of the differences can be attributed to numbers of students, recommendations of outside agencies, etc. Whatever the reasons for the differences, they do exist as should a reasonable and equitable measuring technique. The ideal approach would be for all faculty to record all of the time spent on each of their courses so their teaching load could be tabulated. Since this hasn't been done, and if it were begun now would not yield useful information for at least a semester, it is necessary to attempt to model the situation. The best model would somehow take into account all of the different ways that we teach and all of the constraints involved (like number of students, etc). Figure 6: Average number of students per faculty for the Spring Semester. The model that I am suggesting is an attempt to take into account several of the important factors in teaching load, class time, preparation, and grading. Class time and preparation time are both associated with the number of contact hours but are not directly related to the number of students in the course. Grading load, however, is strongly linked to the number of students and the type of course. For these reasons, the combined score for a faculty member is calculated by combining contact hours and number of students: Combined Score = # of Students * X + # of Contact Hours * Y The multipliers, X and Y are somewhat arbitrary, and I have settled on the following values because I believe that they most closely approximate reality. X can be 0, ½, or 1 depending upon the type of class that is being taught. A normal class would use the value ½ (times the number of students) to approximate the grading load, while a writing course would use a value of 1. Seminars, or other courses where there is little or no grading involved, would use the multiplier of 0 for X. To tabulate preparation and class time, I suggest using values of Y like 1, 2, and 3. A value of 2 would be used for normal courses, assuming that it takes just as long to prepare for a class as to teach it. If a faculty member has multiple sections of the same course, one of these would be assigned a value of 2 for Y and the other(s) would be assigned a value of 1. The last value, 3, might be used for a course that the faculty member has never taught before and must spend more time preparing for (I haven't made use of it yet). Figure 7: Average Combined Scores for Fall Semester. The resulting Combined Scores are much more tightly grouped around the college average as can be seen in figures 7 and 8. There are still 5 or six departments outside one standard deviation from the mean, but the standard deviation is smaller in this case. Also, one of the departments that is low is Music, for which I have already stated that I do not have an appropriate way to count lessons. Based on the smaller spread, I would suggest Combined Score is a better way to discuss teaching load across campus. The multipliers could, perhaps, be fine tuned a bit, but the basic concept of including all of the major components of teaching in some way is better than using only one component. It should also be clear from this perspective that teaching loads are fairly even across campus. Figure 8: Average Combined Scores for Spring Semester. There are limitations to this study, notably that it focuses on departmental averages of teaching load. But one should be able to conclude from the information presented here that teaching load is fairly evenly distributed across departments, contrary to perceptions that might exist. There may be individuals who are working significantly more or less than the averages presented here, but not whole departments or divisions.
{"url":"http://physics.gac.edu/~chuck/teaching.htm","timestamp":"2014-04-19T22:05:43Z","content_type":null,"content_length":"9602","record_id":"<urn:uuid:5c07eca8-6bde-479f-a053-b16c09ea66fd>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
Help needed on IMO 1990 question March 26th 2009, 09:46 PM Help needed on IMO 1990 question (IRN 2) Let S be a set with 1990 elements. P is the set such that its elements are the ordered sequences of 100 elements of S. Knowing that any ordered element pair of S appears at most in one element of P. (If x = (…a…b…), then we call ordered pair (a, b) appeared in x.) Prove that P has at most 800 elements. March 27th 2009, 05:53 AM Hello muditg It would be helpful if we could see the exact wording of the original question, since this one, as you have punctuated it, does not have a very clear meaning. The words I have marked in red do not form a complete sentence, so I can only guess that this should perhaps be part of the definition of the set $P$. Although I have some thoughts which may be helpful, I can't see quite where the $800$ comes from. So I am open to corrections or further suggestions/information. My thoughts are these: The number of ordered pairs, $(a, b), a < b$, that can be formed from the elements of the set S is $\tfrac{1}{2}1990 \times 1989$. Each element $x \in P$ is an ordered sequence made up of $100$ elements of S. The number of ordered pairs $(a,b), a<b$, that can be made from $100$ elements of S is $\tfrac{1}{2}100\times 99$. So (if I am understanding the question correctly) this is the number of ordered pairs of $S$ that each element $x$ will 'consume', if each ordered pair can be used at most once in this way. So it would appear that a maximum value on the number of possible elements $x \in P$ is: $\frac{\tfrac{1}{2}1990 \times 1989}{\tfrac{1}{2}100 \times 99} = 399.8$ This appears to show that $|P| < 400$, not the $800$ of which the question speaks. Am I misinterpreting something? Is there something wrong with my reasoning? March 27th 2009, 09:14 AM I copied and pasted that question from IMO 1990 longlist verbatim (see attachment). However, the same question appears in a text on Combinatorics in a different, and I hope clear, language. It is question number 106 on the image of the page attached with this message. March 27th 2009, 09:20 AM Sorry, I pressed the submit button too early. Attachments are with this message. (See problem no. 33 in the longlist for the problem in its original wordings.) March 27th 2009, 10:14 AM Hello muditg Thanks for clarifying that you have stated the question exactly as it was set on the paper. I note that it was translated from the Chinese. Perhaps something has been lost in the translation! March 28th 2009, 02:55 AM It somehow looks correct to me (Thinking) An example I consider S having 6 elements and P as 3-ary S= {A1,A2,A3,A4,A5,A6} p={ (A1,A2,A3), (A1,A5,A6),.......} It means that Ai & Aj can be used only once in P I have made , (A1,A3,A4) won't be a member as A1 & A3 appeared earlier in (A1,A2,A3) If this was the question than we had to prove that no. of elements in p <= something (was 800 in question) Please correct me if this was not similar to the question (Happy) March 28th 2009, 06:36 AM (IRN 2) Let S be a set with 1990 elements. P is a set whose elements are ordered sequences of 100 elements of S. Given that any ordered element pair of S appears at most in one element of P (if x = (…a…b…), then we say that the ordered pair (a, b) appeared in x), prove that P has at most 800 elements. [I have edited the wording to try to clarify what I think the question is asking.] The number of ways of selecting an ordered pair from an ordered set of 100 elements is ${\textstyle{100\choose2}}=4950$. The number of ways of selecting an ordered pair from an (unordered) set of 1990 elements is $1990\times1989=3958110$. But $\tfrac{3958110}{4950} = 799.618\ldots$, so there can be at most 799 of the 100-element sets with all their ordered-pair subsets disjoint.
{"url":"http://mathhelpforum.com/discrete-math/80872-help-needed-imo-1990-question-print.html","timestamp":"2014-04-17T14:54:15Z","content_type":null,"content_length":"13704","record_id":"<urn:uuid:4a9facae-a9f3-4523-b676-376043a6bbb8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
I can't find a practice exam for Principles of Math 11 June 18th 2008, 03:16 PM #1 Jun 2008 I can't find a practice exam for Principles of Math 11 Hello, the exam is tomorrow afternoon and was hoping someone knew of a site with practice ones for Principles of Mathematics 11. I even tried google, there are Math 10 P and Math 12 P practices, but not one for Math 11 =\ Do you want to tell us what will be covered on the exam? If you do that, we can direct you to the appropriate website. June 21st 2008, 06:53 PM #2
{"url":"http://mathhelpforum.com/algebra/41934-i-can-t-find-practice-exam-principles-math-11-a.html","timestamp":"2014-04-18T13:49:32Z","content_type":null,"content_length":"31854","record_id":"<urn:uuid:47f69c5e-2a86-40ac-a6a3-0290b609e9aa>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
NYJM Abstract - 11-2 - Gábor Braun Gábor Braun Characterization of matrix types of ultramatricial algebras Published: January 31, 2005 Keywords: matrix type of a ring; dimension group; ultramtaricial algebra; automorphism group of a dimension group Subject: Primary 20K30, 16S50; Secondary 06F20, 19A49 For any equivalence relation ≡ on positive integers such that nk ≡ mk if and only if n ≡ m, there is an abelian group G such that the endomorphism ring of G^n and G^m are isomorphic if and only if n ≡ m. However, G^n and G^m are not isomorphic if n ≠ m. Author information Alfréd Rényi Institute of Mathematics, Hungarian Academy of Sciences, Budapest, Reáltanoda u 13-15, 1053, Hungary
{"url":"http://www.emis.de/journals/NYJM/j/2005/11-2.html","timestamp":"2014-04-18T08:10:04Z","content_type":null,"content_length":"8694","record_id":"<urn:uuid:0893f424-00f9-4d2f-b4c2-341434986667>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
Let n/d be in Q, m be a positive integer and let u = n/d mod m. Thus u is the image of a rational number modulo m. The rational reconstruction problem is; given u and m find n/d. A solution was first given by Wang in 1981. Wang's algorithm outputs n/d when m > 2 M^2 where M = max(|n|,d). Because of the wide application of this algorithm in computer algebra, several authors have investigated its practical efficiency and asymptotic time complexity. In this paper we present a new solution which is almost optimal in the following sense; with controllable high probability, our algorithm will output n/d when m is only a few bits longer than 2 |n| d. Further, our algorithm will fail with high probability when m < 2 |n| d. This means that in a modular algorithm where m is a product of primes, the modular algorithm will need about one prime more than the minimum necessary to reconstruct n/d; thus if |n| << d or d << |n| the new algorithm saves up to half the number of primes.
{"url":"http://www.risc.jku.at/conferences/issac2004/abstracts/2.html","timestamp":"2014-04-17T04:19:31Z","content_type":null,"content_length":"7022","record_id":"<urn:uuid:df95beda-ada4-4d62-b0ad-f180821fd07b>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
What if we flip a biased coin, with the probability of a head p and the probability of a tail q = 1 - p? The probability of a given sequence, e.g., 100010 ..., in which k heads appear in n flips is, by Eq. A.4, pqqqpq ..., or There are a total of 2^n possible sequences. Only some of these give k heads and n - k tails. Their number is where 0!, whenever it appears in the denominator, is understood to be 1. Since any one or another of these sequences will do, the probability that exactly k heads occur in n flips is, by Eq. A.2. This is the binomial distribution. The coefficient n things taken k and n - k at a time. You have seen it before in algebra in the binomial theorem: We can use the binomial theorem to show that the binomial distribution is normalized: As an example, let's work out the case of 4 flips of an unbiased coin. If p = q = 1/2, then p^kq^n-k = (1/2)^n = (1/2)^4 = 1/16 for all values of k, and the probabilities P(0;4,1/2), ..., P(4;4,1/2) are equal to the binomial coefficients we obtain the probabilities 1/16, 1/4, 3/8, 1/4, and 1/16, as before. The expectation value of k is To evaluate this, note that the k = 0 term is 0 and that k/k! = 1/(k - 1)!, so that Next, factor out np: Finally, change variables by substituting m = k - 1 and s = n - 1: The sum in this expression is the same as the one in Eq. A.20; only the labels have been changed. Thus, One can evaluate the expectation value of k^2 in a similar fashion by two successive changes in variables and show that The variance of k, Eq. A.14, is and its standard deviation is An example of the binomial distribution is given in Fig. A.4, which shows the theoretical distribution P(k;10,1/6). This is the probability of obtaining a given side k times in 10 throws of a die. Figure A.4. The binomial distribution for n = 10, p = 1/6. The mean value is 1.67, the standard deviation 1.18.
{"url":"http://ned.ipac.caltech.edu/level5/Berg/Berg3.html","timestamp":"2014-04-17T06:43:50Z","content_type":null,"content_length":"5901","record_id":"<urn:uuid:4101e728-5eb8-4d37-9f3d-818e24153ccd>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
Marblehead Prealgebra Tutor ...I have also worked for the past two years as a math/special education teacher at a treatment center in Natick. I have also worked as a tutor with low income youth through another learning center. My approach to teaching is to build a relationship with my students through trust and understanding their point of view. 15 Subjects: including prealgebra, reading, grammar, English ...Some of the aspects of the program that I would be happy to teach you include: making an outline, using styles to ensure formatting consistency, generating a table of contents, table of figures and table of tables, and creating a bibliography and storing your references in the program. I was a s... 9 Subjects: including prealgebra, geometry, algebra 1, algebra 2 I am a Middle School Math Teacher with a love of the subject and a strong desire to pass this love on to my students - both my classroom students and my tutoring students. I believe very strongly that it is not enough to just "get it" and pass a test. I believe that in order to truly do well in Math, each topic must really be understood. 6 Subjects: including prealgebra, algebra 1, algebra 2, SAT math ...When do you start to plan? What are the real choices? How do you access services? 45 Subjects: including prealgebra, chemistry, English, physics As a tutor in Math with over a decade of experience, and being the oldest brother of four younger siblings of my own, I have had extensive experience with relating to kids and young adults of all ages and abilities. Although I am not a fully certified teacher in Massachusetts, I have passed the MTE... 13 Subjects: including prealgebra, calculus, geometry, GRE
{"url":"http://www.purplemath.com/marblehead_prealgebra_tutors.php","timestamp":"2014-04-19T09:35:16Z","content_type":null,"content_length":"23932","record_id":"<urn:uuid:789f2e4c-9bde-4a4e-81c9-752f52252e3a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
The ample cone and ranks of Frobenius on cohomology up vote 6 down vote favorite Suppose that $X$ is a smooth projective algebraic variety over a perfect field $k$ of characteristic $p > 0$ (I'm also interested in the non-smooth case). Suppose that $D$ is a divisor or possibly a $\mathbb{Q}/\mathbb{R}$-divisor on $X$ that is ample (I'd also be happy to consider weaker notions of positivity, big, semi-ample, nef, etc). Now, consider the Frobenius map $$F : \mathcal{O}_X(-D) \to \mathcal{O}_X(-pD)$$ (or more generally to $\mathcal{O}_X(-p^eD)$ for $e \gg 0$). If $X$ is a $\mathbb{Q}$-divisor, then for this map, I mean $\mathcal{O}_X(\lfloor -D \rfloor) \to \mathcal{O}_X(\lfloor -p^eD \rfloor)$. Take cohomology and consider $$H^i(X, \mathcal{O}_X(-D))\to H^i(X, \mathcal{O}(-p^eD)) $$ for $i \neq 0$. I'm particularly interested in the top cohomology, (notice that for $e \gg 0$, the lower cohomologies are uninteresting by Serre vanishing as Donu pointed out). Question: Can I say anything about where $D$ lives in the nef/ample cone based on the dimension of the kernel of that map (ie, the rank of that map)? In particular, has this been written down somewhere? Even for surfaces? Question: More generally, do the set of classes in the ample cone which have a kernel of dimension $\geq n$ representative, (ie a "bad" (low rank in the above sense)) live in a region with any nice (connectivity/convexity?) properties. I'm somewhat aware of some of the various statements that can be made in this direction for curves (ie, Tango curves), but if someone has something interesting to say that also illuminates the higher dimensional picture, I'd be quite interested! Maybe I'm asking the wrong questions too, so I'd be happy to hear what the right questions are. EDIT: I should point out that for $D$ sufficiently ample Weil divisor, that the picture is very uninteresting (Serre vanishing applies). Karl, the picture is also uninteresting when $e\gg 0$ for the same reason. What sort of description were you hoping for in general? – Donu Arapura Apr 11 '11 at 15:56 ... but maybe the nonvanishing locus for $H^(X,\mathcal{O}(-D))$ is interesting to you. – Donu Arapura Apr 11 '11 at 15:58 2 $H^($ -> $H^i($ – Donu Arapura Apr 11 '11 at 15:59 Donu, you are right about it being uninteresting for $e \gg 0$ for $i < \dim X$, but it is still interesting for $i = \dim X$ (that's actually the situation I've been thinking most about). Let me add a comment to that effect. – Karl Schwede Apr 11 '11 at 16:11 I guess the region you describe should be empty sometimes. Consider the case where $X$ is Frobenius split, i.e., there is a projection $\F_*O_X\to O_X$ such that $O_X\to \F_*O_X\to O_X$ is the 1 identity. In this case it is not hard to see that for any line bundle $L$ on $X$ the map $ H^i(X,L)\to H^i(X,L\otimes F_*O_X)=H^i(X,L^p) $ is injective (in fact, this gives a proof of the Kodaira vanishing theorem). In particular, there is no kernel for any line bundle $L$. – J.C. Ottem Apr 12 '11 at 7:17 show 2 more comments Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged ag.algebraic-geometry or ask your own question.
{"url":"https://mathoverflow.net/questions/61300/the-ample-cone-and-ranks-of-frobenius-on-cohomology","timestamp":"2014-04-18T15:41:21Z","content_type":null,"content_length":"53866","record_id":"<urn:uuid:e377003d-89a6-4702-aada-11d26ef5bd3a>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
A First Day Statistics Activity I have the honor of again teaching our undergraduate statistics course in the School of Education, better known here as EDUC 4716 Basic Statistical Methods. Perhaps the most interesting thing about the course is that it's not required for any education programs, minors, or certificates. Instead, the course attracts students largely from the Department of Speech, Language, and Hearing Sciences (who don't need it to graduate, but do need it to apply to grad school or, more recently, to get certified) and sociology majors. So how does this course end up in the School of Ed? Probably due to the legacy we have in quantitative methods, thanks to people like Robert Linn Gene Glass Lorrie Shepard , and now faculty like Derek Briggs Greg Camilli . Somehow all of their hard work and success filters down and gives a relative stats-hack like me a chance to teach undergrads. Many of my students are upperclassmen and have spent much of their college experience avoiding math courses. In fact, on last year's FCQ (Faculty Course Questionnaire) my students' average rating for the item "Personal interest in this subject prior to enrollment" was a 1.8 out of 6 -- a response the university tells me is at the 0th percentile across campus . I like to think of this as a great opportunity in a "nowhere to go but up" kind of way, a chance for me to change the way students think of mathematics and see themselves as mathematical beings. Then again, it's hard to make big changes in only 15 class meetings of 2.5 hours each. If I'm going to make a difference, class has to get off to a solid start. My opening activity this year started with the preparation of four simple index cards with different distribution shapes: Four common distributions, clockwise from top left: normal, left skewed, right skewed, and normal. I have the benefit of a small class of 14 students. So I cut my graphs into a total of 14 pieces: 14 pieces for 14 students. Note on the bottom I've provided the hints A, B, C, and D. When class started, I mixed up the graph pieces and handed one to each student. Then I told the class to find the other people in class who had the graph pieces that aligned with theirs. Once they had a completed graph, form a group at one of the tables and discuss which of the following they thought their group's graph might describe: • People born each month of the year • Student GPAs at this university • Student heights at this university • Starting salaries of new graduates from this university It took my class less than 3-4 minutes to find their groups and then I gave them another 3-4 minutes to discuss what their graph shape might describe. As a class, I had each group share their ideas and then we discussed them. Not everybody agreed initially about which shape matched which description, which led into important comments about how we might think about unbiased sampling of students and imagining different scales and labels along the horizontal axes. So in less than 15 minutes I combined group-making, statistics, and active, student-centered problem solving into one activity. This activity also gets students thinking about distribution shapes, which I sometimes worry we ignore in the rush to calculate centers and spreads. If you're wondering how to adapt this for your classroom, I offer these suggestions: • If you have a few more students, cut more slices. • If you have twice as many students, consider making two of each distribution shape and scaling the x-axis to match one of 8 potential descriptions. (i.e., a normal distribution scaled for heights in inches could be distinguished from one scaled for SAT scores.) • If you want to use this for Algebra 1, you can make graphs that describe things like, "Toni walked to the bus stop at 2 mph, rode the bus at 30 mph to the bike shop, then rode a bike back home at 12 mph." Such an activity begins CPM's Algebra Connections and was the inspiration for my activity. • If you want to use this for Algebra 2 or higher, you can use graphs of functions that students will become familiar with (parabolas, cubics, hyperbolas, etc.). I don't think it's worth fretting over vocabulary at this point -- just give students an opportunity to think about how the functions behave and what phenomena they could possibly model.
{"url":"http://blog.mathed.net/2012/08/a-first-day-statistics-activity.html","timestamp":"2014-04-19T01:56:36Z","content_type":null,"content_length":"99957","record_id":"<urn:uuid:9a96d6e2-10a6-49ca-92eb-97b1e59f76e5>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
e M Dana Mosely provides immediate, easy to understand, online math help by offering math games, math worksheets, math problems, and online math lesson videos. Welcome to Cool Math Guy, the source for online math lessons. Here you can find a wealth of online math help resources including kids math games, math worksheets, math problems, and math help in the following online math help courses: (Click on the Math Lesson that best suits you) Online Math Help in: GEDonline.us - Online high school test prep and free GED resources.
{"url":"http://coolmathguy.com/?ref=km","timestamp":"2014-04-20T05:31:13Z","content_type":null,"content_length":"28421","record_id":"<urn:uuid:ffb54504-0e40-4247-8ed2-abd1dd07ebc9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
Agility Stacking for Threat Based DK Tanks. [Archive] - TankSpot 05-08-2009, 02:04 AM This is an Idea i've been rolling over in my head for quite some time... Not entirely based on the idea that it will maximize threat output, as I'm sure that Strength would be a much better option. However, After doing a bit of mathematics in my head I really started considerring Agility being a safe way to stack in stats wherein it stacks with several base stats that contribute heavily to Threat generation. AP isn't really effected so strongly here with Agility, mainly it would only be effective with an amount so high that it would gimp other stats, and it would only work from synergy with bladed armor. I haven't done an exact math equation yet, but an estimate is roughly 1 AP for 20 Agility with 5/5 bladed armor. Which I know is absolutely minimal as compared to the benefits that Strength would However, going by the passive Forceful Deflection that DK's have. 25% of our strength goes into Parry Rating. But the key here is Threat Generation. Providing that you spec correctly, Agility may grant a high benefit to this, considerring Critical hits will greatly contribute to spiked threat. I don't remember the exact number, but I believe its in the mid 40s for 1%Crit. Which isn't too high of a number considerring that Agility also grants Dodge Rating, Armor, and as I said a small trickle down amount of attack power. Wherein (in theory perhaps) it may not be a terrible idea to give it a try. More so than stacking a crazy amount of strength. Which of course will grant a much higher attack power and a small amount of Parry Rating. Agility will grant Armor, Crit, Dodge, and a small amount of AP. Unfortunately the number hasn't been determined yet, exactly how much Agility grants 1% dodge. (I may be mistaken.) But I'd love to hear some opinions on this... granted I know it's sort of an off the wall idea, but as I said, if the synergies worked correctly it may in fact be a good way to focus for a threat
{"url":"http://www.tankspot.com/archive/index.php/t-49839.html","timestamp":"2014-04-19T12:40:07Z","content_type":null,"content_length":"6544","record_id":"<urn:uuid:f16b3d83-a8c6-4db7-9aff-0d6ba9d00424>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: disjointness Replies: 9 Last Post: Dec 8, 1999 1:35 PM Messages: [ Previous | Next ] Posted: Dec 6, 1999 5:31 PM I'm having difficulty explaining why disjoint events are not independent. YMM states, "If A and B are disjoint, then the fact that A occurs means B cannot occur. So disjoint events are not independent." I'm ok with the first part but I'm having difficulty coming up with an example to show that disjoint events are not Al Reif wrote <Disjoint events cannot occur simultaneously. When rolling a die, you <could roll a 3 [Event A] or roll a 4 [Event B]. But you cannot roll a 3 and a 4 at the same time with one die. In contrast, rolling a 3 <[EventA] and having blonde hair [Event B] are not disjoint--they can happen at the same time. But isn't rolling a 3 or a 4 on a die independent? Ruth Carver Germantown Academy The Advanced Placement Statistics List To UNSUBSCRIBE send a message to majordomo@etc.bc.ca containing: unsubscribe apstat-l <email address used to subscribe> Discussion archives are at Problems with the list or your subscription? mailto://jswift@sd70.bc.ca
{"url":"http://mathforum.org/kb/thread.jspa?threadID=197523&messageID=728006","timestamp":"2014-04-20T19:08:20Z","content_type":null,"content_length":"27542","record_id":"<urn:uuid:bae010b7-fb9d-4a41-968e-9feafe1ac2cd>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
Marblehead Prealgebra Tutor ...I have also worked for the past two years as a math/special education teacher at a treatment center in Natick. I have also worked as a tutor with low income youth through another learning center. My approach to teaching is to build a relationship with my students through trust and understanding their point of view. 15 Subjects: including prealgebra, reading, grammar, English ...Some of the aspects of the program that I would be happy to teach you include: making an outline, using styles to ensure formatting consistency, generating a table of contents, table of figures and table of tables, and creating a bibliography and storing your references in the program. I was a s... 9 Subjects: including prealgebra, geometry, algebra 1, algebra 2 I am a Middle School Math Teacher with a love of the subject and a strong desire to pass this love on to my students - both my classroom students and my tutoring students. I believe very strongly that it is not enough to just "get it" and pass a test. I believe that in order to truly do well in Math, each topic must really be understood. 6 Subjects: including prealgebra, algebra 1, algebra 2, SAT math ...When do you start to plan? What are the real choices? How do you access services? 45 Subjects: including prealgebra, chemistry, English, physics As a tutor in Math with over a decade of experience, and being the oldest brother of four younger siblings of my own, I have had extensive experience with relating to kids and young adults of all ages and abilities. Although I am not a fully certified teacher in Massachusetts, I have passed the MTE... 13 Subjects: including prealgebra, calculus, geometry, GRE
{"url":"http://www.purplemath.com/marblehead_prealgebra_tutors.php","timestamp":"2014-04-19T09:35:16Z","content_type":null,"content_length":"23932","record_id":"<urn:uuid:789f2e4c-9bde-4a4e-81c9-752f52252e3a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Integrable Function October 16th 2011, 12:17 PM #1 Junior Member Sep 2011 Integrable Function Let (X,M, $\mu$) be a measure space and let $f\in L^{+}$ be integrable ( $\int$$f$$d\mu$< $\infty$). Show that for each $\epsilon$ > 0 there is a $\delta$ > 0 such that if A $\subset$ X and $\mu (A)$< $\delta$, then $\int$$f$< $\epsilon$ where the integral is calculate over the subset A. Show that the result may fail if the assumption on the integrability of f is dropped. Re: Integrable Function First, consider the case for which f is a simple function. October 16th 2011, 12:43 PM #2
{"url":"http://mathhelpforum.com/differential-geometry/190532-integrable-function.html","timestamp":"2014-04-19T09:42:47Z","content_type":null,"content_length":"34269","record_id":"<urn:uuid:e33aff51-97ce-4181-9035-9867c3920315>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Riverdale, GA ACT Tutor Find a Riverdale, GA ACT Tutor ...I have successfully tutored several high school and college students in both Biology and Animal Science (Zoology). Thus, I possess the patience, focus, scientific vocabulary and expertise to be an effective Zoology tutor. I am uniquely-qualified to tutor students for the MCAT. I received a B.A. degree in Chemistry and Mathematics. 57 Subjects: including ACT Math, reading, chemistry, English ...I can teach you my means of preparation and my test taking strategies so YOU can achieve the score you're after! I deeply believe in the transformational power of education. I left full time teaching to pursue my lifelong dream of writing, but through tutoring, I stay connected with young people and get to use my gift for teaching to help them reach their maximum potential. 22 Subjects: including ACT Math, English, reading, writing ...I do still study the topics to keep the information fresh in my head. Took the actual test when I considered joining the military. Made a 92 or 93 on the actual test. 29 Subjects: including ACT Math, chemistry, reading, physics I'm a full time mechanical engineering professor at Georgia Tech during the week and I tutor students on the weekend. I've been teaching for a few years and have received excellent reviews from my students, because I'm very good at keeping things interesting by using videos and demonstrations that ... 12 Subjects: including ACT Math, calculus, geometry, GRE ...I was a Science major at the University of Georgia, then decided I wanted to teach and received my teaching certificate. Recently, I finished my Master's degree in Education. I look forward to using my classroom skills and enthusiasm to assist students on an individual basis.Experience tutoring college level Biology. 11 Subjects: including ACT Math, chemistry, physics, biology Related Riverdale, GA Tutors Riverdale, GA Accounting Tutors Riverdale, GA ACT Tutors Riverdale, GA Algebra Tutors Riverdale, GA Algebra 2 Tutors Riverdale, GA Calculus Tutors Riverdale, GA Geometry Tutors Riverdale, GA Math Tutors Riverdale, GA Prealgebra Tutors Riverdale, GA Precalculus Tutors Riverdale, GA SAT Tutors Riverdale, GA SAT Math Tutors Riverdale, GA Science Tutors Riverdale, GA Statistics Tutors Riverdale, GA Trigonometry Tutors Nearby Cities With ACT Tutor College Park, GA ACT Tutors Conley ACT Tutors East Point, GA ACT Tutors Fayetteville, GA ACT Tutors Forest Park, GA ACT Tutors Hapeville, GA ACT Tutors Jonesboro, GA ACT Tutors Lake City, GA ACT Tutors Mableton ACT Tutors Morrow, GA ACT Tutors Norcross, GA ACT Tutors Peachtree City ACT Tutors Red Oak, GA ACT Tutors Tucker, GA ACT Tutors Union City, GA ACT Tutors
{"url":"http://www.purplemath.com/Riverdale_GA_ACT_tutors.php","timestamp":"2014-04-18T09:04:11Z","content_type":null,"content_length":"23834","record_id":"<urn:uuid:b82abeb7-e96e-44bb-a6ab-6daddb4628de>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
Cedar Hill, TX Trigonometry Tutor Find a Cedar Hill, TX Trigonometry Tutor I am an experienced, certified secondary math teacher and a mom of five. I have taught and tutored basic all levels of math, from elementary math skills through collage-level calculus for over ten years. I have experience in helping students identify educational "gaps" and working to fill those ga... 10 Subjects: including trigonometry, calculus, statistics, geometry ...This is where I taught students with learning disabilities how to pass the STAAR Test. I have taught Algebra I and Geometry, but am certified to teach any math in public schools. I am a Texas certified teacher in 4th grade through 12th grade mathematics. 10 Subjects: including trigonometry, geometry, statistics, algebra 1 ...I volunteer coached a team, so I have the ability to have the mindset of a coach and player. I know all the fundamentals and drills that could help make someone successful. I have been active my whole life. 10 Subjects: including trigonometry, reading, geometry, algebra 1 I am a recently retired (2013) high school math teacher with 30 years of classroom experience. I have taught all maths from 7th grade through AP Calculus. I like to focus on a constructivist style of teaching/learning which gets the student to a conceptual understanding of mathematical topics. 12 Subjects: including trigonometry, calculus, geometry, statistics ...I can give tuition on various electrical engineering courses such as signal processing, image processing, statistical estimation, mathematical analysis, machine learning and digital communication.I have taken discrete math course at graduate level. I am a PhD student in Electrical Engineering an... 15 Subjects: including trigonometry, calculus, statistics, geometry Related Cedar Hill, TX Tutors Cedar Hill, TX Accounting Tutors Cedar Hill, TX ACT Tutors Cedar Hill, TX Algebra Tutors Cedar Hill, TX Algebra 2 Tutors Cedar Hill, TX Calculus Tutors Cedar Hill, TX Geometry Tutors Cedar Hill, TX Math Tutors Cedar Hill, TX Prealgebra Tutors Cedar Hill, TX Precalculus Tutors Cedar Hill, TX SAT Tutors Cedar Hill, TX SAT Math Tutors Cedar Hill, TX Science Tutors Cedar Hill, TX Statistics Tutors Cedar Hill, TX Trigonometry Tutors Nearby Cities With trigonometry Tutor Arlington, TX trigonometry Tutors Balch Springs, TX trigonometry Tutors Dalworthington Gardens, TX trigonometry Tutors Desoto trigonometry Tutors Duncanville, TX trigonometry Tutors Farmers Branch, TX trigonometry Tutors Glenn Heights, TX trigonometry Tutors Highland Park, TX trigonometry Tutors Hurst, TX trigonometry Tutors Lancaster, TX trigonometry Tutors Mansfield, TX trigonometry Tutors Midlothian, TX trigonometry Tutors Ovilla, TX trigonometry Tutors Pantego, TX trigonometry Tutors Watauga, TX trigonometry Tutors
{"url":"http://www.purplemath.com/Cedar_Hill_TX_Trigonometry_tutors.php","timestamp":"2014-04-20T01:48:35Z","content_type":null,"content_length":"24398","record_id":"<urn:uuid:161caad2-c75f-45b4-9834-8e0aa82eb230>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
60F15 Strong theorems The Kallianpur-Robbins law describes the long term asymptotic behaviour of the distribution of the occupation measure of a Brownian motion in the plane. In this paper we show that this behaviour can be seen at every typical Brownian path by choosing either a random time or a random scale according to the logarithmic laws of order three. We also prove a ratio ergodic theorem for small scales outside an exceptional set of vanishing logarithmic density of order three.
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/collection/id/12069","timestamp":"2014-04-16T19:23:54Z","content_type":null,"content_length":"16190","record_id":"<urn:uuid:5dec7c12-b1eb-4b42-80da-89a8ffe8282d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability question that I cant work out September 26th 2011, 06:27 PM #1 Sep 2011 Multiple attempt percentages Hi all, I answered questions from set A and was correct 64% of the time. Every time I was incorrect I had to answer a different set of questions (set B) and of these I was correct 66% of the time. When I was incorrect with a question from set B I had to answer a question from set C of which I was correct 70% of the time. I was wondering what the probability of getting all three questions wrong is? How many questions would I get correct from set A,B and C before I got all three wrong (on average)? How many times would I expect to get all three answers wrong if I answered 1000 questions from set A? Thanks so much for your help! Last edited by tjcoll; September 26th 2011 at 07:30 PM. Re: Multiple attempt percentages So I've been going through all the treads trying to find something that may help me. Is it Binomial probability that I would need to use to find the answer? Sorry for my ignorance... math was never my strong point!! Re: Multiple attempt percentages Hi all, I answered questions from set A and was correct 64% of the time. Every time I was incorrect I had to answer a different set of questions (set B) and of these I was correct 66% of the time. When I was incorrect with a question from set B I had to answer a question from set C of which I was correct 70% of the time. I was wondering what the probability of getting all three questions wrong is? How many questions would I get correct from set A,B and C before I got all three wrong (on average)? How many times would I expect to get all three answers wrong if I answered 1000 questions from set A? Thanks so much for your help! The probability of getting all three wrong is (1-0.64)(1-0.66)(1-0.70) Re: Multiple attempt percentages Great, thanks!! September 26th 2011, 09:27 PM #2 Sep 2011 September 26th 2011, 10:33 PM #3 Grand Panjandrum Nov 2005 September 27th 2011, 12:35 PM #4 Sep 2011
{"url":"http://mathhelpforum.com/statistics/188941-probability-question-i-cant-work-out.html","timestamp":"2014-04-17T13:05:04Z","content_type":null,"content_length":"39115","record_id":"<urn:uuid:674bf7fa-193c-4b9d-b932-42b9455f856c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
GFD LabVII: Taylor Columns The T-P theorem demands that vertical columns of fluid move along contours of constant fluid depth. Suppose a rotating, homogeneous fluid flows over a bump on a bottom boundary, as shown here. Near the boundary, the flow must of course go around the bump. But the Taylor-Proudman theorem says that the flow must be the same at all heights: so, at all heights, the flow must be deflected as if the bump on the boundary extended all the way through the fluid! Thus, fluid columns act as if they were rigid columns and move along contours of constant fluid depth. We can demonstrate this behavior in the laboratory. We place a cylindrical tank of water on a rotating turntable. A few obstacles, none of which are taller than a small fraction of the depth of the water, are on the base of the tank. With f = 3 /s and h = 10 cm we wait for the fluid to settle down and come in to solid body rotation and then carefully drop a few crystals of dye in to the water. Each crystal leaves a vertical dye streak as it falls. Note the vertical `rigidity' of the fluid. We sprinkle black dots over the surface to mark the fluid and reduce f to 2.9 /s. Until a new equilibrium is established (the ``spin-down'' process takes several minutes, depending on rotation rate and water depth) the water will be moving relative to the tank. We should be able to see the dots being diverted around the obstacles in a vertically coherent way (as shown schematically below) as if the obstacles extended all the way through the water, thus creating stagnant ``Taylor columns'' above the obstacles. Here's a snapshot and a movie to show you what actually happens. It's a pretty tricky experiment - you have to practice hard to get it to work. Note that the black dots below are floating on the surface: the cylinder is submerged - see photograph above.
{"url":"http://paoc.mit.edu/labweb/lab6/gfd_6.htm","timestamp":"2014-04-20T20:56:14Z","content_type":null,"content_length":"4786","record_id":"<urn:uuid:256141e3-e26e-40f8-989a-582e17ed20dd>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Microsoft Mathematics 4.0 Microsoft Mathematics 4.0 download Microsoft Mathematics provides a set of mathematical tools that help students get school work done quickly and easily. Students can learn to solve equations step-by-step while gaining a better understanding of fundamental concepts in pre-algebra. Related software downloads GeoGebra portable is free and multi-platform dynamic mathematics software for all levels of education that joins geometry, algebra, tables, graphing, statistics and calculus in one easy-to-use package. Features includes Graphics, algebra and tables are connected and fully dynamic, Easy-to-use interface, yet many powerful features, authoring tool to create interactive learning materials .... Free download of GeoGebra Portable 5.0 Beta Calculator Prompter is a math expression calculator. You can evaluate expressions like sin(cos(tan(pi)))#2%10^3 Calculator Prompter has a built-in error recognition system that helps you get correct results: - Paste operator here; - Paste number here; - Missing '('; - Missing ')'; - Unknown symbol ... etc. With Calculator Prompter you can .... Free download of Calculator Prompter 2.7 SimplexNumerica is an object-oriented numerical data analyzer, plot and presentation program. SimplexNumerica is proving to be extremely popular among scientists. Ergonomic programming using the newest Windows programming guidelines with toolbars, context dialogs and interactive diagrams providing easy handling with difficult numeric mathematics. SimplexNumerica is best suited for publication type graphics, analysis .... Free download of SimplexNumerica 9.2.9.4 Math tool for high school math, middle school math teaching and studying. Function graphing and analyzing: 2D, 2.5D function graphs and animations, extrema, root, tangent, limit,derivative, integral, inverse; sequence of number: arithmetic progression, geometric progression; analytic geometry: vector, line, circle, ellipse, hyperbola and parabola; solid geometry: spatial line, prism, pyramid, .... Free download of Math Studio 2.8.1 ScienCalc is a convenient and powerful scientific calculator. ScienCalc calculates mathematical expression. It supports the common arithmetic operations (+, -, *, /) and parentheses. The program contains high-performance arithmetic, trigonometric, hyperbolic and transcendental calculation routines. All the function routines therein map directly to Intel 80387 FPU floating-point machine instructions. Find values .... Free download of Scientific Calculator - ScienCalc 1.3.9 EqPlot plots 2D graphs from complex equations. The application comprises algebraic, trigonometric, hyperbolic and transcendental functions. EqPlot can be used to verify the results of nonlinear regression analysis program. Graphically Review Equations: EqPlot gives engineers and researchers the power to graphically review equations, by putting a large number of equations at .... Free download of EqPlot 1.3.9 A handy, fast, reliable, precise tool if you need to find symbolic and numerical Taylor polynomials of standard functions. Taylor Calculator Real 36 is programmed in C#. All calculations are done in double floating data type. The calculator calculates partial sums of Taylor series of standard functions (including hyperbolic). Calculation history .... Free download of Taylor Calculator Real 36 What is Yorick? Yorick is an interpreted programming language for scientific simulations or calculations, postprocessing or steering large simulation codes, interactive scientific graphics, and reading, writing, or translating large files of numbers. Yorick includes an interactive graphics package, and a binary file package capable of translating to and from the .... Free download of Yorick for Windows 2.1.05 This software utility can plot regular or parametric functions, in Cartesian or polar coordinate systems, and is capable to evaluate the roots, minimum and maximum points as well as the first derivative and the integral value of regular functions. Easy to use, ergonomic and intuitive interface, large graphs are only a .... Free download of WinDraw 1.0 Lite version converts several units of length. Plus version converts length, weight and capacity measures. By typing a number into box provided will instantly display the results without the user having to search through a confusing menu of choices. Great for mathematical problems, science or travel. Many different uses for this .... Free download of Breaktru Quick Conversion 10.1
{"url":"http://www.downloadtyphoon.com/microsoft-mathematics/downloadinugkiwy","timestamp":"2014-04-18T18:35:49Z","content_type":null,"content_length":"35836","record_id":"<urn:uuid:07640475-c15a-4a45-8d64-0b0267c093d4>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
Riverdale, GA ACT Tutor Find a Riverdale, GA ACT Tutor ...I have successfully tutored several high school and college students in both Biology and Animal Science (Zoology). Thus, I possess the patience, focus, scientific vocabulary and expertise to be an effective Zoology tutor. I am uniquely-qualified to tutor students for the MCAT. I received a B.A. degree in Chemistry and Mathematics. 57 Subjects: including ACT Math, reading, chemistry, English ...I can teach you my means of preparation and my test taking strategies so YOU can achieve the score you're after! I deeply believe in the transformational power of education. I left full time teaching to pursue my lifelong dream of writing, but through tutoring, I stay connected with young people and get to use my gift for teaching to help them reach their maximum potential. 22 Subjects: including ACT Math, English, reading, writing ...I do still study the topics to keep the information fresh in my head. Took the actual test when I considered joining the military. Made a 92 or 93 on the actual test. 29 Subjects: including ACT Math, chemistry, reading, physics I'm a full time mechanical engineering professor at Georgia Tech during the week and I tutor students on the weekend. I've been teaching for a few years and have received excellent reviews from my students, because I'm very good at keeping things interesting by using videos and demonstrations that ... 12 Subjects: including ACT Math, calculus, geometry, GRE ...I was a Science major at the University of Georgia, then decided I wanted to teach and received my teaching certificate. Recently, I finished my Master's degree in Education. I look forward to using my classroom skills and enthusiasm to assist students on an individual basis.Experience tutoring college level Biology. 11 Subjects: including ACT Math, chemistry, physics, biology Related Riverdale, GA Tutors Riverdale, GA Accounting Tutors Riverdale, GA ACT Tutors Riverdale, GA Algebra Tutors Riverdale, GA Algebra 2 Tutors Riverdale, GA Calculus Tutors Riverdale, GA Geometry Tutors Riverdale, GA Math Tutors Riverdale, GA Prealgebra Tutors Riverdale, GA Precalculus Tutors Riverdale, GA SAT Tutors Riverdale, GA SAT Math Tutors Riverdale, GA Science Tutors Riverdale, GA Statistics Tutors Riverdale, GA Trigonometry Tutors Nearby Cities With ACT Tutor College Park, GA ACT Tutors Conley ACT Tutors East Point, GA ACT Tutors Fayetteville, GA ACT Tutors Forest Park, GA ACT Tutors Hapeville, GA ACT Tutors Jonesboro, GA ACT Tutors Lake City, GA ACT Tutors Mableton ACT Tutors Morrow, GA ACT Tutors Norcross, GA ACT Tutors Peachtree City ACT Tutors Red Oak, GA ACT Tutors Tucker, GA ACT Tutors Union City, GA ACT Tutors
{"url":"http://www.purplemath.com/Riverdale_GA_ACT_tutors.php","timestamp":"2014-04-18T09:04:11Z","content_type":null,"content_length":"23834","record_id":"<urn:uuid:b82abeb7-e96e-44bb-a6ab-6daddb4628de>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
14.3 Extremal Values on a Surface in Three Dimensions Home | 18.013A | Chapter 14 Tools Glossary Index Up Previous Next 14.3 Extremal Values on a Surface in Three Dimensions A surface in three dimensions is determined by one equation, which again we write as $G = 0$ . Suppose, again that we wish to find extrema of $F$ on this surface. This time $∇ &LongRightArrow; F$ can have no non-vanishing component in the plane tangent to the surface an extreme point, exactly as in the previous case. This means that $∇ &LongRightArrow; F$ and $∇ &LongRightArrow; G$ must again point in the same direction. We can observe that this implies that the cross product $∇ &LongRightArrow; F × ∇ &LongRightArrow; G$ must be 0, and this vector equation gives us two independent component equations that we can solve along with $G = 0$ to find the extrema. Also we can apply the Lagrange Multipliers approach exactly as before. This time there are three components to all the vectors, so that the statement $∇ &LongRightArrow; F = c ∇ &LongRightArrow; G$ supplies us with three equations which, along with $G = 0$ , is enough to determine $c$ and the coordinates of the extrema. Again you must identify maxima and minima and distinguish merely local extrema from global ones at each extreme point. When the surface is defined parametrically, you can reduce the problem by substitution to a two dimensional problem in the two parameters of the surface, with no restriction on the parameters. Finding critical points then involves solving the equations gotten by setting the partial derivatives of $F$ with respect to the parameters to 0. The two dimensional Newton's Method of the last chapter can be used to do this for a numerical example. 14.3 Suppose we want to maximize $x y z − x$ subject to the condition $2 x 2 + 4 y 2 + 3 z 2 = 6$. Write equations for obeyed by $x , y$ and $z$ at critical points obtained by the cross product 14.4 Write the equations for this same problem implied by the Lagrange Multipliers approach. 14.5 Suppose that on the surface defined by the parametric representation $x = cos ⁡ u sin ⁡ v y = 2 sin ⁡ u sin ⁡ v z = 3 cos ⁡ v$ for $0 < u < 2 π , 0 < v < π$ . We want to find the critical points of $F , F = x 4 − 2 y 2 z 2$ . Find equations for same.
{"url":"http://ocw.mit.edu/ans7870/18/18.013a/textbook/MathML/chapter14/section03.xhtml","timestamp":"2014-04-18T09:08:20Z","content_type":null,"content_length":"9784","record_id":"<urn:uuid:f18dc74c-a420-4886-ba9b-3c05dcdccdf9>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Harmonic series From Encyclopedia of Mathematics The series of numbers $$\sum_{k=1}^{\infty}\frac{1}{k}.$$ Each term of the harmonic series (beginning with the second) is the harmonic mean of its two contiguous terms (hence the name harmonic series). The harmonic series is divergent (G. Leibniz, 1673), and its partial sums $$S_n = \sum_{k=1}^n\frac{1}{k}$$ increase as $\ln n$ (L. Euler, 1740). There exists a constant $\gamma>0$, known as the Euler constant, such that $S_n = \ln n + \gamma + \varepsilon_n$, where $\lim\limits_{n\to\infty}\varepsilon_n = 0$. The series $$\sum_{k=1}^{\infty}\frac{1}{k^{\alpha}}$$ is called the generalized harmonic series; it is convergent for $\alpha>1$ and divergent for $\alpha\leq1$. For a proof of the expression for $S_n$ see, e.g., [a1], Thm. 422. Note that the series $\sum 1/p$ extended over all prime numbers $p$ diverges also; see, e.g., [a1], Thm. 427, for an expression of its partial sums. Generalized harmonic series are often used to test whether a given series is convergent or divergent by estimating in terms of $1/n^{\alpha}$ the order of the terms of the given series; see Series. [a1] G.H. Hardy, E.M. Wright, "An introduction to the theory of numbers" , Oxford Univ. Press (1979) How to Cite This Entry: Harmonic series. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Harmonic_series&oldid=29150 This article was adapted from an original article by L.D. Kudryavtsev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
{"url":"http://www.encyclopediaofmath.org/index.php/Harmonic_series","timestamp":"2014-04-20T08:21:49Z","content_type":null,"content_length":"20236","record_id":"<urn:uuid:7c2d787a-5454-4ea9-9f25-dea98d54587d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
inverse by mod May 13th 2008, 01:44 PM #1 Jan 2008 inverse by mod You have intercepted a long sequence of numbers and noticed that the most frequently occurring numbers were 13 and 7, in this order. You know that the ciphertext was obtained with an affine enciphering transformation with modulus 35, on single letter message blocks of the ordinary English alphabet A - Z and the letter b for “space”. Find the deciphering transformation. I have done so far: Deciphering transformation of the form: g(y) = cx + d mod 35 Space will be the most frequently occuring, with e the second most frequently occuring. So b or 26 ---> 13 e or 4 ----> 7 26 = g(13) = 13c + d 4 = g(7) = 7c + d So 6c = 22 mod 35 But I have no idea where to go from here?! Could anyone please explain? Thanks in advance! You have intercepted a long sequence of numbers and noticed that the most frequently occurring numbers were 13 and 7, in this order. You know that the ciphertext was obtained with an affine enciphering transformation with modulus 35, on single letter message blocks of the ordinary English alphabet A - Z and the letter b for “space”. Find the deciphering transformation. I have done so far: Deciphering transformation of the form: g(y) = cx + d mod 35 Space will be the most frequently occuring, with e the second most frequently occuring. So b or 26 ---> 13 e or 4 ----> 7 26 = g(13) = 13c + d 4 = g(7) = 7c + d So 6c = 22 mod 35 But I have no idea where to go from here?! Could anyone please explain? Thanks in advance! Multiply both sides of the equation by 6. Then you have 36c = 132 mod 35 or c = 27 mod 35. Thank you so much for the reply icemanfan, I understand what you have done... except how do you know to multiply by 6? In this example it's fairly easy to see that mod 35, 6 is its own inverse, because when you multiply 6 times 6 you get 36 which is congruent to 1. If that number had been 2, for instance, you would find the inverse of 2 mod 35 which is 18, and multiply by that. Sometimes finding an inverse isn't as easy, but it was in this case. Thanks icemanfan, I just have one more past exam question I'm a little stuck on... Explain why, h(x) = 7x + 5 mod 35 is not a suitable enciphering transformation. This is worth 6 marks, yet the only reason I can think of is because 35 is divisble by 3, and hence when rearranged to y - 5 = 7x + mod 35 it is not possible to find a number x that gives 1 mod 35...could anyone please explain what else I have missed out/need to say? Thanks in advance! Explain why, h(x) = 7x + 5 mod 35 is not a suitable enciphering transformation. This is worth 6 marks, yet the only reason I can think of is because 35 is divisble by 3, and hence when rearranged to y - 5 = 7x + mod 35 it is not possible to find a number x that gives 1 mod 35...could anyone please explain what else I have missed out/need to say? Thanks in advance! I think I have an explanation.... but I am not sure. The utility of such a cipher is that 7x+5 should map to as many different numbers as possible for different x.That is we need to have a unique 7x+5 mod 35 for each x. But from this thread, we know that in any collection of more than 6 integers, we will have some two whose difference will be divisible by 5. Lets call it a and b. So $5|(a-b) \Rightarrow 35|7 (a-b) \Rightarrow 7a = 7b\, mod \,35 \Rightarrow h(a) = 7a + 5 = 7b + 5 = h(b)\, mod \,35$. So this means we can have a bijection for h, for at most 5 values of x. Thus only 5 characters can be coded successfully using this cipher. Hence it is not a good cipher, by virtue of low Last edited by Isomorphism; May 14th 2008 at 10:35 AM. May 13th 2008, 01:47 PM #2 MHF Contributor Apr 2008 May 13th 2008, 01:54 PM #3 Jan 2008 May 13th 2008, 02:09 PM #4 MHF Contributor Apr 2008 May 14th 2008, 12:43 AM #5 Jan 2008 May 14th 2008, 10:18 AM #6
{"url":"http://mathhelpforum.com/number-theory/38245-inverse-mod.html","timestamp":"2014-04-18T18:46:42Z","content_type":null,"content_length":"46586","record_id":"<urn:uuid:ce55b4f4-9502-4242-ae14-fb73c9645dce>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by saim Total # Posts: 8 (x+y)^2= x^2 + y^2 + 2xy so add the values 12^2=x^2 + y^2 + 100 x^2+y^2=44 well i think you got the question wrong anyway the tenth term is 1/128 the first term is 4 the "r" is 0.5 surface area of cone=pi(radius)(s) + pi(radius)^2 "s" is the slanting edge of cone s^2=height^2 + radius^2 s=(100+49)^1/2 s=12.2 so surface area=pi(7)(12.2) +pi(49) =134.44(pi) Math 9 lets call your drink cost x your sons y and daughters Z so x+y+z=12.9 and y=x+2.2 and z=x-0.9 so x+x+2.2+x-0.9=12.9 solve for x you get 3x=11.6 x=3.867 first made = 60*8=480 second made= 80*6=480 together= 960 it will take = (5976*10^21)/75=7968*10^19 humans it will take (1.14^n)*6.8x10^9=7968x10^19 n comes out to bw 26.4 Part a 1) 877.2 2) 886.794 3) 901.07 4) 922.2 5) 953.56 6) 982.45 tell the taste of these types of water well water mineral water fresh water tap water earated water?
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=saim","timestamp":"2014-04-17T21:33:15Z","content_type":null,"content_length":"7145","record_id":"<urn:uuid:5b60a28e-c994-4eee-af94-a22d5c0df862>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] python (against java) advocacy for scientific projects [SciPy-user] python (against java) advocacy for scientific projects Ravi lists_ravi@lavabit.... Mon Jan 19 13:58:19 CST 2009 The advice from Mr. Molden is well-argued, but he does gloss over a few of the difficulties. These serious problems are also present in Matlab & Java for the most part. 1. The python packaging system is junk. Matlab & Java get around this problem by not really having a packing system (leading to even worse confusion). PyPi & setuptools are painful (search enthought-dev & ipython-dev lists for the last year, especially for posts from Fernando Perez & Gael Varoquaux, for more 2. Installation/compilation of C/C++ extensions/wrappers: Matlab's cohesive toolbox shines here; their method is clearly documented and works reasonably well across all the platforms they support (at least on Solaris, HPUX, Linux & Windows, the plaforms I work with). Java extensions are, IMHO, reasonably straightforward to maintain, but python distutils takes everything to a whole new level of nightmare. For distutils difficulties, simply search the archives for this mailing list (especially those from David Cournepeau). 3. The lack of a real JIT compiler is a serious issue if the use cases involve more than linear algebra and differential equation solvers. In many such cases, for-loops and/or while-loops are the only reasonable solutions, both of which, very often, execute much faster under Matlab or Java. Some operations are simply not vectorizable if you wish to have maintainable code, e.g., large groups of interacting state machines. 4. Both Java & Matlab have very well-thought out IDEs. As I don't use IDEs myself, I cannot comment on their ease of use, but my colleagues who do work with them find them extremely useful. Neither eclipse-pydev nor eric3 is anywhere close to the Matlab IDE workspace. Java has several very nice IDEs but none of them are as useful as the Matlab IDE. A related issue is the lack of a decent debugger; pydb+ipython is the best one I have come across for python, but they are nowhere near Matlab/Java offerings. In spite of the issues highlighted above, Python is still the best choice, beacuse of the large library and because of the well-designed language specification. (Cpython's shortcomings are well-known and will eventually be addressed by PyPy and the like; in some computation-intensive cases, even IronPython beats out cpython, go figure.) Mr. Molden has provided a very good summary of the Python workflow but there is one issue that keeps rearing its ugly head on the numpy/scipy lists over & over again: On Monday 19 January 2009 11:14:28 Sturla Molden wrote: > 9. If the bottleneck cannot be solved by libraries or changing > algorithm, re-write these parts in Fortran 95. Compile with f2py to get > a Python callable extension module. Real scientists do not use C++ (if > we need OOP, we have Python.) I completely agree with the first part of the point above. (Use Fortran95 or many of the other languages which have very good numerical performance to speed up bottlenecks). However, the last part is merely ugly prejudice. Like python, Fortran, and other languages, C++ does have its place in scientific computing. Here's one example which, in my experience, is completely impossible to do in python, Matlab, Java or even C: The bottleneck in one our simulations is a fixed point FFT computation followed by a modified gradient search. Try implementing serious fixed-point computation with, say, 13-bit numbers, some of which are optimally expressed in log-normal form and the others in the standard form, on python/Matlab/Java/C. You will end up with either unmaintainable code or unusably slow code. C++ templates & a little bit of metaprogramming make prototyping the algorithm easy (because you can use doubles to verify data flow) while simultaneously making it easy to enhance the prototype quickly into fixed point code (simply by replacing types and running some automated tests to find appropriate bit-widths). In our case, we needed to optimize the radix of the underlying FFTs as well because of some high throughput Admittedly, the problem considered above is pretty difficult & pretty specialized, but the beauty of C++ or even of PL/1 is that it makes certain difficult problems tractable: problems which are practically impossible to solve with python/Java/Matlab/C. Leave your programming language prejudices at home when you consider afresh the optimal solutions to your problem. More information about the SciPy-user mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2009-January/019454.html","timestamp":"2014-04-20T21:10:36Z","content_type":null,"content_length":"7247","record_id":"<urn:uuid:a5f9dcfd-730e-4aed-9713-0b63c0ba57ca>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
Force exerted by two charges 1. The problem statement, all variables and given/known data Two point charges are placed on the x-axis as follows: one positive charge, q[1], is located to the right of the origin at x= x[1], and a second positive charge, q[2], is located to the left of the origin at x= x[2]. What is the total force (magnitude and direction) exerted by these two charges on a negative point charge, q[3], that is placed at the origin? Use [tex]\epsilon_{0}[/tex] for the permittivity of free space. Take positive forces to be along the positive x-axis. Do not use unit vectors. 2. Relevant equations Coulomb's Law: F = [tex]\frac{1}{4\pi\epsilon_{0}}\frac{\left|q_{1}q_{2}\right|}{r^{2}}[/tex] 3. The attempt at a solution MasteringPhysics keeps giving me a "Check your signs error". Yet, as far as I can tell, I should be subtracting the force which is going left / the negative direction (ie, q[2]) from the force going to the right (ie, q[1]). Any hints?
{"url":"http://www.physicsforums.com/showthread.php?t=374501","timestamp":"2014-04-20T01:00:11Z","content_type":null,"content_length":"26157","record_id":"<urn:uuid:02bea3d0-c78f-433f-94d4-a3e56310a8b4>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Fade remover [Date Prev][Date Next][Thread Prev][Thread Next] - [Date Index][Thread Index][Author Index] Re: Fade remover >Am I correct in assuming that most of the Reed-Solomon and Vitribi encoding >could be done with lookup tables if necessary? I thought Pretty much correct. A Reed-Solomon encoder looks a lot like a CRC generator. Each element in the shift register is a group of bits called a "symbol" rather than a single bit. You can define a RS code with symbols of any size, but 8-bit symbols are especially popular because most computers have 8-bit bytes. You also typically have two lookup tables (each 256 bytes long for 8-bit symbols) used to convert symbols between "polynomial" and "index" form, though there are ways to do these conversions on the fly to save memory at the expense of a little speed. Re "Viterbi encoding", strictly speaking there is no such thing. The "Viterbi algorithm" is one of several algorithms for decoding a certain class of codes known as "convolutional" codes. So it's arguably more correct to refer to a "Viterbi-decoded convolutional Regardless of the decoding algorithm, all convolutional codes are generated in much the same way. You shift the data bits into a shift register with predetermined taps feeding networks of XOR gates. The outputs of each XOR network is sent sequentially on the channel. If there are two XOR networks, then two distinct encoded bits are sent on the channel for each input data bit and you have a "rate 1/2" code. If there are three XOR networks, you have a "rate 1/3" code. That's all there is to convolutional coding. It's one of the most widely used FEC codes used on spacecraft precisely because it's so easy to implement on the sending end. It's the *receiver* that has to do the hard work of decoding it, e.g., with the Viterbi algorithm, but that's all on the ground where lots of computing power is available. My web page has C and C++ code to encode and decode Reed-Solomon codes. I also have code to generate convolutional codes and decode them with either the Viterbi or Fano algorithms. See Via the amsat-bb mailing list at AMSAT.ORG courtesy of AMSAT-NA. To unsubscribe, send "unsubscribe amsat-bb" to Majordomo@amsat.org
{"url":"http://www.amsat.org/amsat/archive/amsat-bb/200102/msg00294.html","timestamp":"2014-04-16T04:12:43Z","content_type":null,"content_length":"5351","record_id":"<urn:uuid:17b117b7-61fb-487b-8970-466a49925af2>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Modeling for the Life Sciences After the last glaciation, oak trees re-colonized Europe beginning in havens in Spain , Italy and the Balkans and spreading into northern Europe . The oak trees spread at a rate of 50 to 500 meters per year, very fast considering that acorns fall very near the trunks of the trees. How do we explain this rapid rate of re-colonization? This is one of several biological mysteries explored in Mathematical Modeling for the Life Sciences. The author first considers a deterministic diffusion model and uses standard reaction diffusion equations to model the expansion of the oak trees. However, the rate predicted by this model is too low. Using a spatial branching process model, however, the author can account for the observed spreading rate and provide a convincing biological explanation. Animals, particularly jays, carry a minority of acorns away, and the carrying distance varies from a hundred meters to several kilometers. Furthermore, the jays tend to bury the acorns separately in favorable vegetation transition zones. The stochastic branching model mixes two distributions for the spreading of acorns: the majority fall near the parent tree, but a minority fall according to a long-range distribution with a higher probability for large deviations. The whole discussion of this example occupies about three pages of text, and this is typical of the book. The outlook is “panoramic”, the focus is on real issues in the life sciences, and the discussion of each application is usually quite brief. The author does a good job in balancing mathematical rigor and biological interest. Theorems are presented, but often without proof or elaboration. Detailed calculations are presented when the author judges that necessary to explain the underlying biological issue. The author considers several other fascinating applications in the life sciences. These include pest control of the spruce budworm, the apparently chaotic population dynamics of the coleopter Tribolium, game theory for the interaction of hawks and doves, domestication of pearl millet, the Polymerase Chain Reaction (PCR) for DNA replication, and mapping of the Qualitative Trait Locus in genetics. The mathematics associated with these applications runs from discrete and continuous dynamical systems to Markov chains and diffusion, branching processes, and maximum likelihood This is a book best suited to advanced undergraduates or beginning graduate students. The prerequisites include some familiarity with ordinary and partial differential equations, probability and statistics. The Dominated Convergence Theorem is invoked at one point, but a good background in advanced calculus is generally sufficient in most places. There are appendices on ordinary differential equations, evolution equations, probability and statistics, but these are very brief summaries. This book was originally published in French in 2000. The English edition is quite readable, although there are occasional odd choices of words as well as a sentence here and there that doesn’t quite make sense. This would be a good choice for the main text or for supplemental reading in a course on mathematical applications to biology, particularly with appropriate instructor support to expand and amplify the abbreviated discussions. Bill Satzer (wjsatzer@mmm.com) is a senior intellectual property scientist at 3M Company, having previously been a lab manager at 3M for composites and electromagnetic materials. His training is in dynamical systems and particularly celestial mechanics; his current interests are broadly in applied mathematics and the teaching of mathematics.
{"url":"http://www.maa.org/publications/maa-reviews/mathematical-modeling-for-the-life-sciences?device=mobile","timestamp":"2014-04-17T14:00:56Z","content_type":null,"content_length":"28124","record_id":"<urn:uuid:5ebf26e3-3d83-477e-b946-c314b192b7c5>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Equivalence partitioning and Boundary Value Analysis Equivalence partitioning and Boundary Value Analysis Equivalence partitioning: Equivalence Partitioning determines the number of test cases for a given scenario. Equivalence partitioning is a black box testing technique with the following goal: 1.To reduce the number of test cases to a necessary minimum. 2.To select the right test cases to cover all possible scenarios. EP is applied to the inputs of a tested component. The equivalence partitions are usually derived from the specification of the component's behavior. An input has certain ranges which are valid and other ranges which are invalid. This may be best explained at the following example of a function which has the pass parameter "month" of a date. The valid range for the month is 1 to 12, standing for January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges. The first invalid partition would be <= 0 and the second invalid partition would be >= 13. . ... -2 -1 0 1 .............. 12 13 14 15 ..... invalid partition 1 valid partition invalid partition 2 The testing theory related to equivalence partitioning says that only one test case of each partition is needed to evaluate the behavior of the program for the related partition. In other words it is sufficient to select one test case out of each partition to check the behavior of the program. To use more or even all test cases of a partition will not find new faults in the program. The values within one partition are considered to be "equivalent". Thus the number of test cases can be reduced considerably. Equivalence partitioning is no standalone method to determine test cases. It has to be supplemented by boundary value analysis. Having determined the partitions of possible inputs the method of boundary value analysis has to be applied to select the most effective test cases out of these partitions. Boundary Value Analysis : Boundary Value Analysis determines the effectiveness of test cases for a given scenario. To set up boundary value analysis test cases the tester first has to determine which boundaries are at the interface of a software component. This has to be done by applying the equivalence partitioning technique. Boundary value analysis and equivalence partitioning are inevitably linked together. For the example of the month a date would have the following partitions: . .. -2 -1 0 1 .............. 12 13 14 15 ..... --invalid partition 1 valid partition invalid partition 2 By applying boundary value analysis we can select a test case at each side of the boundary between two partitions . In the above example this would be 0 and 1 for the lower boundary as well as 12 and 13 for the upper boundary. Each of these pairs consists of a "clean" and a "dirty" test case. A "clean" test case should give a valid operation result of the program. A "dirty" test case should lead to a correct and specified input error treatment such as the limiting of values, the usage of a substitute value, or in case of a program with a user interface, it has to lead to warning and request to enter correct data. The boundary value analysis can have 6 test cases: n, n-1, and n+1 for the upper limit; and n, n-1, and n+1 for the lower limit. Note: As per the ISTQB standard the BVA will be " lower boundary -1between the boundary upper boundary +1", so boundary value analysis can have 4 test cases i.e. n & n+1 for the upper limit and n & n-1 for the lower limit. An input field takes the year of birth between 1900 and 2004 The boundary values for testing this field are A. 0,1900,2004,2005 B. 1900, 2004 C. 1899,1900,2004,2005 D. 1899, 1900, 1901,2003,2004,2005 Practically the Correct answer would be D, but as per ISTQB standards the answer would be C. Happy Testing, Javed Nehal
{"url":"http://www.iotap.com/Blog/tabid/673/entryid/133/Equivalence-partitioning-and-Boundary-Value-Analysis.aspx","timestamp":"2014-04-16T21:56:24Z","content_type":null,"content_length":"80820","record_id":"<urn:uuid:e005d24b-ac64-4526-abb1-8b090e27b069>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
12.8.4 Conclusion Next: 13 Data Parallel C Up: Hierarchical Tree-Structures as Previous: 12.8.3 Tree as Grid Grid-based particle simulation algorithms continue to provide an effective technique for studying systems of pointlike particles in addition to continuum systems. These methods are a useful alternative to grid-less simulations which cannot incorporate fluid interactions or complicated boundary conditions as easily or effectively. While the approach is quite different, the tree-structure and enhanced accuracy criterion which are the bases of multipole methods are equally applicable as the fundamental structure of an adaptive refinement mesh algorithm. The two techniques complement each other well and can provide a useful environment both for studying mixed particle-continuum systems and for comparing results even when a mesh is not necessitated by the physically interesting aspects of the modelled system. The hierarchical structure naturally occurs in problems which demonstrate locality such as systems governed by Poisson's Equation. Implementations for parallel, distributed-memory computers gain direct benefit from the locality. Because both the grid-based and particle-based methods form the same hierarchical structure, common data partitioning can be employed. A hybrid simulation using both techniques implicitly has the information for both components-particle and fluid-at hand on the local processor node, simplifying the software development and increasing the efficiency of computing such systems. Considerations such as the efficiency of a deep, grid-based hierarchy with few or even one particle per grid cell need to be explored. Current particle-based algorithm research comparing computational accuracy against grid resolution (i.e., one can utilize lower computational accuracy with a finer grid or less refinement with higher computational accuracy), will strongly influence this result. Also, the error created by interpolating the particles onto a grid and then solving the discrete equation must be addressed when comparing gridless and grid-based methods. Guy Robinson Wed Mar 1 10:19:35 EST 1995
{"url":"http://www.netlib.org/utk/lsi/pcwLSI/text/node307.html","timestamp":"2014-04-21T02:03:33Z","content_type":null,"content_length":"3612","record_id":"<urn:uuid:c8bd2fd3-c7d2-4ec4-b027-19c62c5eb763>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Approximately 15% of peole are left-handed. If two people are selected at random, what is the probability of the following events? P(At least one is right-handed)? Is this correct? 1-P(none are RH) so..1-P(LH and LH) 1-(.85)(.85) =1-.7225 =.2775...is this correct? • 10 months ago • 10 months ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/51b76d9ae4b00fef3d0a9549","timestamp":"2014-04-16T22:28:08Z","content_type":null,"content_length":"54311","record_id":"<urn:uuid:e6b6c489-16d4-4ab0-a75f-a8b6439edeb8>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
Renesse, “Scalable and secure resource location Results 1 - 10 of 19 - ACM Transaction on Computer Systems , 2003 "... The growing interest in peer-to-peer applications has underlined the importance of scalability in modern distributed systems. Not surprisingly, much research effort has been invested in gossip-based broadcast protocols. These trade the traditional strong reliability guarantees against very good “sca ..." Cited by 241 (33 self) Add to MetaCart The growing interest in peer-to-peer applications has underlined the importance of scalability in modern distributed systems. Not surprisingly, much research effort has been invested in gossip-based broadcast protocols. These trade the traditional strong reliability guarantees against very good “scalability” properties. Scalability is in that context usually expressed in terms of throughput and delivery latency, but there is only little work on how to reduce the overhead of membership management at large scale. This paper presents Lightweight Probabilistic Broadcast (lpbcast), a novel gossip-based broadcast algorithm which preserves the inherent throughput scalability of traditional gossip-based algorithms and adds a notion of membership management scalability: every process only knows a random subset of fixed size of the processes in the system. We formally analyze our broadcast algorithm in terms of scalability with respect to the size of individual views, and compare the analytical results both with simulations and concrete measurements. - IEEE TRANSACTIONS ON INFORMATION THEORY , 2006 "... Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join a ..." Cited by 208 (5 self) Add to MetaCart Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of “gossip ” algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model. - IEEE TRANSACTIONS ON COMPUTERS , 2003 "... Gossip-based protocols for group communication have attractive scalability and reliability properties. The probabilistic gossip schemes studied so far typically assume that each group member has full knowledge of the global membership and chooses gossip targets uniformly at random. The requirement ..." Cited by 167 (21 self) Add to MetaCart Gossip-based protocols for group communication have attractive scalability and reliability properties. The probabilistic gossip schemes studied so far typically assume that each group member has full knowledge of the global membership and chooses gossip targets uniformly at random. The requirement of global knowledge impairs their applicability to very large-scale groups. In this paper, we present SCAMP (Scalable Membership protocol), a novel peer-to-peer membership protocol which operates in a fully decentralized manner and provides each member with a partial view of the group membership. Our protocol is self-organizing in the sense that the size of partial views naturally converges to the value required to support a gossip algorithm reliably. This value is a function of the group size, but is achieved without any node knowing the group size. We propose additional mechanisms to achieve balanced view sizes even with highly unbalanced subscription patterns. We present the design, theoretical analysis, and a detailed evaluation of the basic protocol and its refinements. Simulation results show that the reliability guarantees provided by SCAMP are comparable to previous schemes based on global knowledge. The scale of the experiments attests to the scalability of the protocol. "... Ahtruct- Motivated by applications to sensor, peer-topeer and ad hoc networks, we study distributed asynchronous algorithms, also known as gossip algorithms, for computation and information exchange in an arbitrarily connected network of nodes. Nodes in such networks operate under limited computatio ..." Cited by 158 (14 self) Add to MetaCart Ahtruct- Motivated by applications to sensor, peer-topeer and ad hoc networks, we study distributed asynchronous algorithms, also known as gossip algorithms, for computation and information exchange in an arbitrarily connected network of nodes. Nodes in such networks operate under limited computational, communication and energy resources. These constraints naturally give rise to "gossip " algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for arbitrary network, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic mairix characterizing the algorithm. Using recent results of Boyd, Diaconis and Xiao , 2001 "... The dynamic behavior of a network in which information is changing continuously over time requires robust and efficient mechanisms for keeping nodes updated about new information. Gossip protocols are mechanisms for this task in which nodes communicate with one another according to some underlying d ..." Cited by 141 (7 self) Add to MetaCart The dynamic behavior of a network in which information is changing continuously over time requires robust and efficient mechanisms for keeping nodes updated about new information. Gossip protocols are mechanisms for this task in which nodes communicate with one another according to some underlying deterministic or randomized algorithm, exchanging information in each communication step. In a variety of contexts, the use of randomization to propagate information has been found to provide better reliability and scalability than more regimented deterministic approaches. In many settings, such as a cluster of distributed computing hosts, new information is generated at individual nodes, and is most “interesting ” to nodes that are nearby. Thus, we propose distance-based propagation bounds as a performance measure for gossip mechanisms: a node at distance d from the origin of a new piece of information should be able to learn about this information with a delay that grows slowly with d, and is independent of the size of the network. For nodes arranged with uniform density in Euclidean space, we present natural gossip mechanisms, called spatial gossip, that satisfy such a guarantee: new information is spread to , 2003 "... As Peer-to-Peer (P2P) networks become popular, there is an emerging need to collect a variety of statistical summary information about the participating nodes. The P2P networks of today lack mechanisms to compute even such basic aggregates as MIN, MAX, SUM, COUNT or AVG. In this paper, we define and ..." Cited by 65 (4 self) Add to MetaCart As Peer-to-Peer (P2P) networks become popular, there is an emerging need to collect a variety of statistical summary information about the participating nodes. The P2P networks of today lack mechanisms to compute even such basic aggregates as MIN, MAX, SUM, COUNT or AVG. In this paper, we define and study the NODEAGGREGATION problem that is concerned with aggregating data stored at nodes in the network. We present generic schemes that can be used to compute any of the basic aggregation functions accurately and robustly. Our schemes can be used as building blocks for tools to collect statistics on network topology, user behavior and other node characteristics. This is a STUDENT paper intended as a REGULAR presentation. I. , 2002 "... Monitoring wide, hostile areas requires disseminating data between fixed, disconnected clusters of sensor nodes. It is not always possible to install long-range radios in order to cover the whole area. We propose to leverage the movement of mobile individuals, equipped with smart-tags, to disseminat ..." Cited by 56 (5 self) Add to MetaCart Monitoring wide, hostile areas requires disseminating data between fixed, disconnected clusters of sensor nodes. It is not always possible to install long-range radios in order to cover the whole area. We propose to leverage the movement of mobile individuals, equipped with smart-tags, to disseminate data across disconnected static nodes spread across a wide area. Static nodes and mobile smart-tags exchange data when they are in the vicinity of each other; smart-tags disseminate data as they move around. In this paper, we propose an algorithm for update propagation and a model for smart-tag based data dissemination. We use simulation to study the characteristics of the model we propose. Finally, we present an implementation based on bluetooth smart-tags. , 2002 "... In recent years, gossip-based algorithms have gained prominence as a methodology for designing robust and scalable communication schemes in large distributed systems. The premise underlying distributed gossip is very simple: in each time step, each node v in the system selects some other node w as a ..." Cited by 55 (3 self) Add to MetaCart In recent years, gossip-based algorithms have gained prominence as a methodology for designing robust and scalable communication schemes in large distributed systems. The premise underlying distributed gossip is very simple: in each time step, each node v in the system selects some other node w as a communication partner — generally by a simple randomized rule — and exchanges information with w; over a period of time, information spreads through the system in an “epidemic fashion”. A fundamental issue which is not well understood is the following: how does the underlying low-level gossip mechanism — the means by which communication partners are chosen — affect one’s ability to design efficient high-level gossip-based protocols? We establish one of the first concrete results addressing this question, by showing a fundamental limitation on the power of the commonly used uniform gossip mechanism for solving nearest-resource location problems. In contrast, very efficient protocols for this problem can be designed using a non-uniform spatial gossip mechanism, as established in earlier work with Alan Demers. We go on to consider the design of protocols for more complex problems, providing an efficient distributed gossipbased protocol for a set of nodes in Euclidean space to construct an approximate minimum spanning tree. Here too, we establish a contrasting limitation on the power of uniform gossip for solving this problem. Finally, we investigate gossip-based packet routing as a primitive that underpins the communication patterns in many protocols, and as a way to understand the capabilities of different gossip mechanisms at a general level. , 2004 "... Motivated by applications to sensor and ad hoc networks, we study distributed algorithms for passing information and for computing averages in an arbitrarily connected network of nodes. Our work draws upon and contributes to a growing body of literature in three areas: (i) Distributed averaging algo ..." Cited by 12 (0 self) Add to MetaCart Motivated by applications to sensor and ad hoc networks, we study distributed algorithms for passing information and for computing averages in an arbitrarily connected network of nodes. Our work draws upon and contributes to a growing body of literature in three areas: (i) Distributed averaging algorithms, as formulated in Kempe, Dobra and Gehrke (2003), (ii) geometric random graph models for large networks of sensors, as put forth in Gupta and Kumar (2000), and (iii) the fastest mixing Markov chain on a graph, as studied recently in Boyd, Diaconis and Xiao (2003). For distributed - ACM Mobile Computing and Communications Review , 2002 "... With the rising popularity of network-based applications and the potential use of mobile ad hoc networks in civilian life, an efficient resource discovery service is needed in such networks for quickly locating resource providers. In addition, to improve user experience, QoS awareness is also cru ..." Cited by 11 (1 self) Add to MetaCart With the rising popularity of network-based applications and the potential use of mobile ad hoc networks in civilian life, an efficient resource discovery service is needed in such networks for quickly locating resource providers. In addition, to improve user experience, QoS awareness is also crucial. In this paper, we identify the challenges when basic resource discovery techniques for the Internet are used in mobile ad hoc networks. We then propose a framework that provides a unified solution to the discovery of resources and QoS-aware selection of resource providers. The key entities of this framework are a set of self-organized discovery agents. These agents manage the directory information of resources using hash indexing. They also dynamically partition the network into domains and collect intra- and inter-domain QoS information to select appropriate providers.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=167969","timestamp":"2014-04-16T13:39:48Z","content_type":null,"content_length":"41531","record_id":"<urn:uuid:1c0aed1b-635b-4e0a-9cb7-0ae47575e1ce>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Point Charges: direction and magnitude 1. The problem statement, all variables and given/known data Four point charges are located at the corners of a square with sides of length a. Two of the charges are + q, and two are - q. 1)Find the direction of the net electric force exerted on a charge + Q, located at the center of the square, for the following arrangement of charge: the charges alternate in sign \left( { + q, - q, + q, - q} \right) as you go around the square. 2)Find the magnitude of the net electric force exerted on a charge + Q, located at the center of the square, for the following arrangement of charge: the charges alternate in sign \left( { + q, - q, + q, - q} \right) as you go around the square. 3)Find the magnitude of the net electric force exerted on a charge + Q, located at the center of the square, for the following arrangement of charge: the two positive charges are on the top corners, and the two negative charges are on the bottom corners. Express your answer in terms of the variables q, Q, a, and appropriate constants. 4) Find the direction of the net electric force exerted on a charge + Q, located at the center of the square, for the following arrangement of charge: the two positive charges are on the top corners, and the two negative charges are on the bottom corners. 2. Relevant equations F = k (qQ/r^2) ? 3. The attempt at a solution For question 1 I think it is magnitude = 0 because of the pull directions (b/c the signs alternate). For 4, I think the pull will be downward b/c the Q+ will be attracted to the neg charges. For 2 and 3 I'm not exactly sure what they are asking, so can somebody help me with my comprehension of what they are asking? Thanks!
{"url":"http://www.physicsforums.com/showthread.php?t=372855","timestamp":"2014-04-21T01:59:38Z","content_type":null,"content_length":"41158","record_id":"<urn:uuid:c6deef50-4d19-4c59-ab2d-89ac56c374ea>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - sci.math.research Discussion: sci.math.research A moderated newsgroup that focuses on research-level mathematics. To learn more about sci.math.research, including how to post an article, what to post, and other matters pertaining pertaining to the newsgroup, please read its charter online. Due to persistent technical errors, the Math Forum has disabled our posting mechanism to this moderated group. Please post to this group via a newsreader.
{"url":"http://mathforum.org/kb/forum.jspa?forumID=253&start=45","timestamp":"2014-04-19T12:50:38Z","content_type":null,"content_length":"38926","record_id":"<urn:uuid:9fe302bf-f8e3-4944-a147-18f603776f25>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Xah's Math Blog Archive 2010-06 〜 2010-10 Tron lightcycles TRON Light Cycle Optimal Strategy Added a new section to bottom of: The Problems of Traditional Math Notation. Mathematician Marijke Van Gans died (1955 〜 2009) Gravity simulator (requires Adobe Flash). Source www.nowykurier.com See also: Great Math Programs. Torus Autologlyph For more info and where to buy, see: Geometric Pattern on Sphere and Torus. Detexify is a tool that lets you draw a math symbol and it shows you the code for LaTeX. The tool is created by Daniel Kirsch. At http://detexify.kirelabs.org/classify.html. See also: Math Symbols in Unicode. If you are a emacs user, you can set your emacs up so that any frequently used symbols can be entered by a single shortcut key, or a abbreviation. See: Emacs and Unicode Tips. Thanks to Tim Tran for Donation. Please subscribe, and YOU, donate! Or, please link from your blog. Thank you for YOUR support! Ian Stewart has a new book out. • Professor Stewart's Cabinet of Mathematical Curiosities By Ian Stewart. amazon Read 1/3 of his Flatterland amazon in ≈2002. See also: FLATLAND: A Romance of Many Dimensions, and Flatland: A Introduction (by Xah Lee) for many subsequent books and films on Flatland. It is one of my favorite book, say, in top 5, of all books in my life. Thanks to R Michael Underwood for the tip. Mathematicians Richard Palais and Hermann Karcher, have released a new version of their math visualization software, the 3DXM. The main change is that it now has button-like interface in place of menus, where each button is a icon of the surface or math subject. This makes it much more attractive, and easier to use. Check it out. 3DXM screenshot. Download at 3DXM. Note: the new version is for Mac only. For Windows or Linux users, there's always the Java version at the same download location. Though, the Java version has only some 50% of surfaces or other math A fantastic java applet to draw Voronoi diagram interactively. Very nice. Voro Glide @ www.pi6.fernuni-hagen.de… See also: Great Math Programs. Math Prizes and Nobel Ignobility. (commentary) My friend, Richard Palais, co-authored with his son Robert Palais a new book Differential Equations, Mechanics, and Computation. I've been hired to help them update the site. The result is this: ode-math.com. Half of the book is free in PDF files. Also, lots of java applets and animation files are coming. You can buy the book at Amazon: amazon. However, for some reason, Amazon doesn't have extra copies. img src Celtic Knots, Truchet tiles, Combinatorial Patterns. Discovered that there are quite a lot articles in remembrance of Martin Gardner. See bottom of: Martin Gardner (1914 〜 2010). blog comments powered by
{"url":"http://xahlee.info/math/blog_past_2010-06.html","timestamp":"2014-04-20T12:06:15Z","content_type":null,"content_length":"13401","record_id":"<urn:uuid:c224fe49-ea92-4274-960e-351d67d835ff>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
The joy of IIB Matrix Models Posted by Urs Schreiber As readers of sci.physics.research might remember, a while ago I had learned about the IIB Matrix Model and had fallen in love with it. Unfortunately at that time I was busy with other things and didn’t find the time to absorb the technical details. Two events now made me have a second look at the literature on this model. One is, maybe surprisingly, my encounter with Pohlmeyer invariants. The other is the review T. Azuma, Matrix models and the gravitational interaction which appeared recently. What do Pohlmeyer inavariants have to do with proposals for nonperturbative string theory? There is at least one intriguing technical similarity: Pohlmeyer invariants are classical gauge invariant observables of the bosonic string which map any configuration of the string (at constant worldsheet time) to the number obtained by picking any constant $U\left(N\to \infty \right)$ gauge connection on target space and evaluating its Wilson line around the loop formed by the string at the given worldsheet time. In the paper K. Pohlmeyer & K.-H. Rehren, The invariant charges of the Nambu-Goto Theory: Their Geometric Origin and Their Completeness it is shown that from the knowledge of the values of all these Wilson lines one can reconstruct the form of the surface swept out by the string. What does this have to do with the IIB Matrix Model, though? For completeness let me recall that the IIB Matrix model is obtained either by a complete dimensional reduction of 10d SYM or of a matrix regularization of the Green-Schwarz IIB superstring. Either way one is left with the simple action (1)$S=-\frac{1}{{g}^{2}}\mathrm{Tr}\left(\frac{1}{4}\left[{A}_{\mu },{A}_{u }\right]\left[{A}^{\mu },{A}^{u }\right]+\frac{1}{2}\overline{\Psi }{\Gamma }^{\mu }\left[{A}_{\mu },\Psi \right]\right)\ Here $A$ and $\Psi$ are $N×N$ Hermitian matrices and $\Psi$ is furthermore a Majorana-Weyl spinor in ten dimensions. All these objects are constant, i.e. do not depend on any coordinate parameters - there are none in this model. There are many possible routes to rederive known string theory from this action. Let me just list a few important papers. The best starting point to read about the IIB Matrix Model is probably H. Aoki, S. Iso, H. Kawai, Y. Kitazawa, A. Tsuchiya, T. Tada, IIB Matrix Model. This is based in part on M. Fukuma, H. Kawai, Y. Kitazawa, A. Tsuchiya, String Field Theoy from IIB Matrix Model, where intriguing hints are given, that Wilson loops in this model of constant gauge connections satisfy the equations of motion of closed string field theory. I’ll have to say more about this below. Here I just note that this way of reobtaining strings from the IIB model is complementary to realizing that the action $S$ above is the matrix regularization of the $\mathrm{GS}$ action. In fact the authors argue that one way one arrives at F-strings, while the other way one arrives at D-strings. This is incidentally the point where the idea behind the Pohlmeyer invariants reappears in the IIB Matrix Model: In both cases Wilson loops of large-$N$ constant connections around a loop describe physical configurations of a string which is identified with this loop! Of course this is nothing but an aspect of string/gauge duality, somehow, but it is a particularly nice one, I think. In order to see how the IIB model fits into the very big picture the paper A. Connes, M. Douglas, A. Schwarz, Noncommutative Geometry and Matrix Theory: Compactification on Tori is probably indispensable. Therein it is discussed how the compactified IIB Matrix Model is the same as the BFSS Matrix Model at finite temperature! T. Azumo has more interesting references in his thesis paper. One of them looks like a valuable review text, apparently private notes by S. Shinohara, but unfortunately (for me!) this postscript is written in Japanese! :-) I want to say more about the IIB Model soon. Today my aim is to get started by trying to work out the central steps involved in the proof that Wilson loops of the IIB Matrix Model satisfy equations of motion of string field theory. I am motivated by the fact that the respective derivation in the above mentioned papers involves some rather messy looking formulas which unfortunately may obscure the absolutely beautiful mechanisms that are involved. These are what I want to work out. So the goal is to derive from the action (1)$S\left(A,\Psi \right)=-\frac{1}{{g}^{2}}\mathrm{Tr}\left(\frac{1}{4}\left[{A}_{\mu },{A}_{u }\right]\left[{A}^{\mu },{A}^{u }\right]+\frac{1}{2}\overline{\Psi }{\Gamma }^{\mu }\left[{A}_{\mu },\ Psi \right]\right)$ equations of motion for the expectation values (2)$⟨F\left(A,\Psi \right)⟩=\int 𝒟A\phantom{\rule{thinmathspace}{0ex}}𝒟\Psi \phantom{\rule{thinmathspace}{0ex}}F\left(A,\Psi \right)\mathrm{exp}\left(-S\left(A,\Psi \right)\right)$ of observables $F\left(A,\Psi \right)$ and to show that for $F$ a Wilson line of $A$ around an abstract loop (in the beginning there is no spacetime in which this loop is embedded in this model, this spacetime arises as a derived concept!) these equations of motion describe propagation of relativistic strings as well as their splitting and joining interactions. I’ll closely follow the papers by Kawai, Tsuchiya et al. but my goal shall be to focus on the cruciual steps that illuminate how strings and their propagation and interaction arises from calculation with matrices only. To do so, my first stel shall be to ignore the fermionic contribution and concentrate on the bosonic part of the model, i.e. to consider the action (3)$S\left(A\right)=-\frac{1}{{g}^{2}}\mathrm{Tr}\frac{1}{4}\left[{A}_{\mu },{A}_{u }\right]\left[{A}^{\mu },{A}^{u }\right]\phantom{\rule{thinmathspace}{0ex}}.$ In order to get anything like a Wilson loop from this action consider the abstract circle ${\mathrm{S}}^{1}$ and any function (4)$k:{\mathrm{S}}^{1}\to {R}^{\left(9,1\right)}$ (5)$\sigma ↦{k}^{\mu }\left(\sigma \right)$ on this circle. To any such function $k$ we may associate an observable $w\left(k\right)$ on the space of large constant matrices ${A}_{\mu }$ by writing (6)$v\left(k\right):=𝒫\mathrm{exp}\left({\int }_{{\mathrm{S}}^{1}}d\sigma \phantom{\rule{thinmathspace}{0ex}}{k}^{\mu }\left(\sigma \right){A}_{\mu }\right)$ As for now the ${k}^{\mu }\left(\sigma \right)$ have no physical interpretation and are just auxiliary functions that label all kinds of observables in the model. But we will see that $k$ acquires the meaning of the momentum density on a string’s worldsheet - momentum, that is, in a space which is nothing but the Fourier dual of the space of $k$ themselves. In order to make things a little simpler lets consider a regularized version of these Wilson loop obervables. Introduce a large integer (8)$M=\left[2\pi /ϵ\right]\phantom{\rule{thinmathspace}{0ex}},$ (9)${k}^{\mu }\left(nϵ\right)=:{k}_{n}^{\mu }$ and approximate (10)$v\left(k\right)\approx \prod _{n=1}^{M}\mathrm{exp}\left(iϵ\phantom{\rule{thinmathspace}{0ex}}{k}_{n}^{\mu }{A}_{\mu }\right)\phantom{\rule{thinmathspace}{0ex}}.$ The point is that this will allow us to conveniently identify matrix multiplication with differentiation with respect to $k$ by means of the formula (11)$-i\frac{\partial }{ϵ\partial {k}_{n}^{\mu }}{U}_{n}={A}_{\mu }{U}_{n}+𝒪\left(ϵ\right)\phantom{\rule{thinmathspace}{0ex}}.$ This way the matrices ${A}_{\mu }$ become related to the ‘spacetime’ associated with the ${k}^{\mu }$. Next we need to choose some sort of Schwinger-Dyson equation for the Matrix Model, re-express it but replacing matrix multiplication by differentiation and read off the sought-after equation of motion for the Wilson loops. It turns out that a useful Schwinger-Dyson equation to consider is (12)$0={k}_{M\mu }\int 𝒟A\phantom{\rule{thinmathspace}{0ex}}\frac{\partial }{\partial {A}_{\mu }^{\alpha }}\left\{\mathrm{Tr}\left({t}^{\alpha }v\left({k}^{1}\right)\right)w\left({k}^{2}\right)\cdots Here we write (13)${A}_{\mu }={A}_{\mu }^{\alpha }{t}^{\alpha }$ where ${t}^{\alpha }$ are the generators of U(N) normalized so as to satisfy (14)${t}_{\mathrm{ij}}^{\alpha }{t}_{\mathrm{ji}}^{\beta }={\delta }^{\alpha \beta }\phantom{\rule{thinmathspace}{0ex}}.$ Evaluating the above expression using the product rule gives us three different kinds of terms, depending on whether thr derivative acts on the action, the first Wilson loop or the remaining Wilson (15)$0={\underset{⏟}{{k}_{M\mu }\int 𝒟A\phantom{\rule{thinmathspace}{0ex}}\left(\frac{\partial }{\partial {A}_{\mu }^{\alpha }}{e}^{-S\left(A\right)}\right)\mathrm{Tr}\left({t}^{\alpha }v\left({k}^ {1}\right)\right)w\left({k}^{2}\right)\cdots w\left({k}^{l}\right)}}_{=\left(f\right)}+{\underset{⏟}{{k}_{M\mu }\int 𝒟A\phantom{\rule{thinmathspace}{0ex}}\left(\frac{\partial }{\partial {A}_{\mu }^{\ alpha }}\mathrm{Tr}\left({t}^{\alpha }v\left({k}^{1}\right)\right)\right)w\left({k}^{2}\right)\cdots w\left({k}^{l}\right){e}^{-S\left(A\right)}}}_{=\left(s\right)}+{\underset{⏟}{{k}_{M\mu }\sum _{b= 2}^{l}\int 𝒟A\phantom{\rule{thinmathspace}{0ex}}\left(\frac{\partial }{\partial {A}_{\mu }^{\alpha }}w\left({k}^{b}\right)\right)\mathrm{Tr}\left({t}^{\alpha }v\left({k}^{1}\right)\right)w\left({k}^ {2}\right)\cdots \stackrel{^}{w\left({k}^{b}\right)}\cdots w\left({k}^{l}\right){e}^{-S\left(A\right)}}}_{=\left(j\right)}\phantom{\rule{thinmathspace}{0ex}}.$ I have called these terms $\left(f\right)$, $\left(s\right)$ and $\left(j\right)$ because they will be seen to describe the free propagation, the splitting and the joining of the strings desribed by the various ${k}^{i}$. This works as follows: The $\left(f\right)$ term yields (16)$\left(f\right)={k}_{M\mu }\frac{1}{{g}^{2}}\int 𝒟A\phantom{\rule{thinmathspace}{0ex}}\mathrm{Tr}\left({t}^{\alpha }v\left({k}^{1}\right)\right)\mathrm{Tr}\left(\left[{t}^{\alpha },{A}_{u }\ right]\left[{A}^{\mu },{A}^{u }\right]\right)w\left({k}^{2}\right)\cdots w\left({k}^{l}\right){e}^{-S\left(A\right)}={k}_{M\mu }\frac{1}{{g}^{2}}\int 𝒟A\phantom{\rule{thinmathspace}{0ex}}\mathrm{Tr}\ left(\left[{A}_{u },\left[{A}^{\mu },{A}^{u }\right]\right]v\left({k}^{1}\right)\right)w\left({k}^{2}\right)\cdots w\left({k}^{l}\right){e}^{-S\left(A\right)}\approx \frac{i}{{g}^{2}ϵ}{\left(-i\frac {\partial }{ϵ\partial {k}_{M-1}}--i\frac{\partial }{ϵ\partial {k}_{1}}\right)}^{2}⟨w\left({k}^{1}\right)w\left({k}^{2}\right)\cdots w\left({k}^{l}\right)⟩\phantom{\rule{thinmathspace}{0ex}}.$ By replacing matrices by derivatives these can be taken out of the integral and become differential operators on the space of multi-loops. In fact, recalling that in the end ${k}^{\mu }$ will be identified with a momentum density on the string define the operator (17)${x}^{\mu }\left(\sigma \right):=-i\frac{\delta }{\delta {k}^{\mu }\left(\sigma \right)}$ on the space of (expectation values of) Wilson loops. The the above can be rewritten as (18)$\left(f\right)=\frac{iϵ}{{g}^{2}}{x}^{\prime 2}\left(0\right)⟨w\left({k}^{1}\right)w\left({k}^{2}\right)\cdots w\left({k}^{l}\right)⟩\phantom{\rule{thinmathspace}{0ex}}.$ It’s nice how the big old matrix term condenses to this concise form, but this is not quite the free equation of motion for the string that we expected to see. No problem. The reason is that the splitting term $\left(s\right)$ contains a contributions which doesn’t describe any splitting at all and has hence to be included in the free piece $\left(f\right)$ One finds (19)$\left(s\right)={k}_{M\mu }iϵ\int 𝒟A\phantom{\rule{thinmathspace}{0ex}}\sum _{j=1}^{M}\mathrm{Tr}\left({t}^{\alpha }\left(\prod _{n=1}^{j}{U}_{n}\left({k}^{1}\right)\right){k}_{j}^{\mu }{t}^{\ alpha }\left(\prod _{n=j+1}^{M}{U}_{n}\left({k}^{1}\right)\right)\right)w\left({k}^{2}\right)\cdots w\left({k}^{l}\right){e}^{-S\left(A\right)}={k}_{M\mu }iϵ\int 𝒟A\phantom{\rule{thinmathspace}{0ex}} \sum _{j=1}^{M}{k}_{j}^{\mu }\mathrm{Tr}\left(\prod _{n=1}^{j}{U}_{n}\left({k}^{1}\right)\right)\mathrm{Tr}\left(\prod _{n=j+1}^{M}{U}_{n}\left({k}^{1}\right)\right)w\left({k}^{2}\right)\cdots w\left ({k}^{l}\right){e}^{-S\left(A\right)}={k}_{M\mu }iϵ\int 𝒟A\phantom{\rule{thinmathspace}{0ex}}\sum _{j=1}^{M-1}{k}_{j}^{\mu }\mathrm{Tr}\left(\prod _{n=1}^{j}{U}_{n}\left({k}^{1}\right)\right)\mathrm {Tr}\left(\prod _{n=j+1}^{M}{U}_{n}\left({k}^{1}\right)\right)w\left({k}^{2}\right)\cdots w\left({k}^{l}\right){e}^{-S\left(A\right)}+\mathrm{iN}ϵ\left({k}_{M}{\right)}^{2}⟨w\left({k}^{1}\right)w\ left({k}^{2}\right)\cdots w\left({k}^{l}\right)⟩\phantom{\rule{thinmathspace}{0ex}}.$ It is fun to see how the partial derivative with respect to ${A}_{\mu }$ splits the string associated with ${k}^{1}$ in half and how the contraction of the ${t}^{\alpha }$ glues the remaining open ends so as to form two new closed strings. However in one of these processes a piece of string of vanishing length is split off and produces not another string but a kinematical term proportional to $k\cdot k$. Taking this term together with previously found $\left(f\right)$ gives (20)$\left(f\right)+\left({s}^{\prime }\right)=i\left(Nϵ{\left(k\left(0\right)\right)}^{2}+\frac{ϵ}{{g}^{2}}{x}^{\prime 2}\left(0\right)\right)⟨w\left({k}^{1}\right)w\left({k}^{2}\right)\cdots w\left Now, with a little tweaking of $N,ϵ$ and $g$, taking careful limits, etc., this begins to look like the Hamiltonian constraint of the free superstring! (The limits are subtle and I think Aoki et al. dont have them under rigorous control, either. But they give a lot of arguments for why the advertised string field theory should be obtained in the correctly taken limit. Furthermore, my prefactors might contain errors. They are not precisely what I see in the literature, which, on the other hand, does not seem to be fully consistent, either. I need to check that.) Good, finally one can convince oneself that the last term, $\left(j\right)$, really does describe the joining of two strings. One gets: (21)$\left(j\right)={k}_{M\mu }iϵ\int 𝒟A\phantom{\rule{thinmathspace}{0ex}}\sum _{b=2}^{l}\sum _{j=1}^{M}\mathrm{Tr}\left({t}^{\alpha }v\left({k}^{1}\right)\right)\mathrm{Tr}\left(\left(\prod _{n=1}^ {j}{U}_{n}\left({k}^{b}\right)\right){k}_{j}^{\mu }{t}^{\alpha }\left(\prod _{n=j+1}^{M}{U}_{n}\left({k}^{b}\right)\right)\right)w\left({k}^{2}\right)\cdots \stackrel{^}{w\left({k}^{b}\right)}\cdots w\left({k}^{l}\right){e}^{-S}={k}_{M\mu }iϵ\int 𝒟A\phantom{\rule{thinmathspace}{0ex}}\sum _{b=2}^{l}\sum _{j=1}^{M}{k}_{j}^{\mu }\mathrm{Tr}\left(\left(\prod _{n=1}^{j}{U}_{n}\left({k}^{b}\right)\ right)v\left({k}^{1}\right)\left(\prod _{n=j+1}^{M}{U}_{n}\left({k}^{b}\right)\right)\right)w\left({k}^{2}\right)\cdots \stackrel{^}{w\left({k}^{b}\right)}\cdots w\left({k}^{l}\right){e}^{-S\left(A\ Here now the partial derivative with respect to ${A}_{\mu }$ cuts open one of the strings and the contraction over ${t}^{\alpha }$ glues the ends with the open ends of the first string. The claim is that the three terms together give proper closed string field theory in an appropriate limit $ϵ\to 0$,. $N\to \infty$. I very much enjoy the train of thoughts that leads to this result. Posted at February 20, 2004 8:08 PM UTC Re: The joy of IIB Matrix Models BTW, does anyone know of any attempts to study the IIB matrix model on nontrivial backgrounds like pp-waves? Is there any literature on that? I know that there is considerable activity in BFSS models on pp-wave backgrounds, but did anybody look at, say, the calculation which I sketched above for cases where the SYM theory is defined on a nontrivial background? Posted by: Urs Schreiber on March 5, 2004 8:30 PM | Permalink | PGP Sig | Reply to this The Deep Thought Project Everybody knows Douglas Adams’ famous story about the people who set up a computer to calculate the answer to simply everything. Interestingly, there are really people trying to do that, in a sense. The idea is that if the Matrix Models of string theory contain the full non-perturbative information about the theory, and if they can in principle be solved explicitly - well, then just set up a computer to solve them and see what happens! I was indirectly asked to provide some references regarding these attempts. Here they are: Yoshihisa Kitazawa, Yastoshi Takayama, Dan Tomino, Correlators of Matrix Models on Homogeneous Spaces (2004) Jun Nishimura, Toshiyuki Okubo, Fumihiko Sugino Testing the Gaussian expansion method in exactly solvable matrix models (2003) Jun Nishimura, Lattice Superstring and Noncommutative Geometry (2003) H. Kawai, S. Kawamoto, T. Kuroki, T. Matsuo, S. Shinohara, Mean Field Approximation of IIB Matrix Model and Emergence of Four Dimensional Space-Time (2002) H. Kawai, S. Kawamoto, T. Kuroki & S. Shinohara, Improved perturbation theory and four-dimensional space-time in IIB matrix model (2002) Werner Krauth, Hermann Nicolai, Matthias Staudacher, Monte Carlo Approach to M-Theory (1998) H. Aoki, S. Iso, H. Kawai, Y. Kitazawa, T. Tada, Space-Time Structures from IIB Matrix Model (1998) Posted by: Urs Schreiber on March 26, 2004 6:36 PM | Permalink | PGP Sig | Reply to this
{"url":"http://golem.ph.utexas.edu/string/archives/000314.html","timestamp":"2014-04-21T01:12:17Z","content_type":null,"content_length":"54133","record_id":"<urn:uuid:a3249d30-28fd-4e4c-99ec-c3a12599cedb>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Drexel University Mechanical Eng SLIDING MODE CONTROL OF A SUSPENDED PENDULUM Theory Simulations Real Time Control References Sliding mode control is an efficient tool to control complex high-order dynamic plants operating under uncertainty conditions due to its order reduction property and its low sensitivity to disturbances and plant parameter variations. Its robustness property comes with a price, which is high control activity. The principle of sliding mode control is that; states of the system to be controlled are first taken to a surface (sliding surface) in state space and then kept there with a shifting law based on the system states. Once sliding surface is reached the closed loop system has low sensitivity to matched and bounded disturbances, plant parameter variations [1]. For the theory of sliding mode control click here. Sliding mode control can be conveniently used for both non-linear systems and systems with parameter uncertainties due to its discontinuous controller term. That discontinuous control term is used to negate the effects of non-linearities and/or parameter Suspended pendulum is a 2^nd order non-linear system due to the sine term: The fact that this change is sinusoidal makes sliding mode control attractive to use as a controller for suspended pendulum systems, for sinusoidal functions are bounded. This can be extended to robotics and systems with moving linkages for that kind of systems have inertias that show sinusoidal characteristic. The parameters of the pendulum system that is controlled with sliding mode control is given below. The period of the suspended pendulum is about 4 seconds. The simplified form of the equation of motion is, The sliding mode control input is, The control signal has two parts; first part is state feedback control law and following that is the discontinuous term that overcomes the sine term. s represents the sliding surface. Due to the chattering phenomenon of the sliding mode control it is convenient to replace the sign function with a continuous approximation. Following is the pseudo sliding mode control law. The procedure to design the controller parameters is as follows: 1- Find the bound of the non-linearity or uncertainty and if possible decrease the magnitude of the bound by defining some part of the non-linearity or uncertainty with a linear combination of the 2- Either select a sliding surface and find state feedback parameters or design a state feedback controller that would impose a sliding surface. Note that these can be totally different approaches. Following are some options to deal with the non-linear term, sine. i- ρ can be selected as . ii- ρ can be selected as In this case the equation of the system would be . Note that this equation is the same as the equation of motion that is linearized around 0 degrees. However sliding mode control is not restricted to work around the equilibrium point because the difference between sin(θ) and θ is bounded and can be rejected with the discontinuous term of the sliding mode control law. iii- ρ can also be selected by treating the non-linear term as an uncertainty in the system parameter a[1] as in the below equation. Note that sin(θ)/θ is a bounded function. Nominal value can be used as system parameter and max and min values would determine the bound of the uncertainty. For further discussion see reference [1]. In this study ρ is taken as a[1] is 10.78 and η is chosen as 1.22. η is a small parameter that would make the resulting system equation insensitive to the non-linear term. The state feedback parameters are found by using pole placement and then corresponding sliding surface is obtained. The magnitude of the poles are Φ and m. They are selected as 2 rad/s equal to each other; the linear part of the system would resemble a critically damped second order system. Making the system critically damped has two significances: First is not to cross the sliding surface and second is to have real valued sliding surface. Choosing 2 rad/s as the poles of the two first order decaying systems mean that the pendulum would respond to a step input in 2*(4*(1/2s)) = 4 seconds, (time constants of the two first order systems are 1/(2s)). It should be kept in mind that this is an approximation. The system is a 2^nd order system with a discontinuous term. Finally δ, which approximates the discontinuous term of the sliding mode control law is taken as 0.1. At around 0.1 the chattering is not observed in the simulations performed. Integral action is going to be added to the sliding mode control soon!!! For questions about this tutorial please feel free to contact vefa@drexel.edu.
{"url":"http://www.pages.drexel.edu/~vn43/tutorials/sliding_mode_control/smcsp/smcsp.html","timestamp":"2014-04-20T10:47:32Z","content_type":null,"content_length":"12131","record_id":"<urn:uuid:2f8ebc1d-0677-446a-9621-107e58b4ff30>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
fisher exact for > 2x2 table Rob: Fisher's exact test is conceptually possible for any r x c contingency table problem and uses the observed multinomial table probability as the test statistic. Other tests for r x c contingency tables use a different test statistic (Chi-squared, likelihood ratio, Zelterman's). It is possible that the probabilities for any of these procedures may differ slightly for the same table configuration even if the probabilities for each test are calculated by enumerating all possible permutations (hypergeometric) under the null hypothesis. See Mielke and Berry 2007 (Permutation Methods: A distance function approach) Chps 6 and7. Mielke has provided efficient Fortran algorithms for enumerating the exact probabilities for 2x2, 3x2, 4x2, 5x2, 6x2 ,3x3,and even 2x2x2 tables for Fisher's exact and Chi-square statistics. I don't remember whether Cyrus Meta's algorithms for Fisher's exact can do more. But the important point to keep in mind is that it is possible to use different statistics for evaluating the same null hypothesis for r x c tables (Fisher's exact uses one form, Chi-square uses another, etc.) and the probabilities can be computed by exact enumeration of all permutations (what people expect Fisher's exact to do but also possible for Chi-square statistic) or by some approximation (asymptotic distribution, Monte Carlo resampling). The complete enumeration of test statistics under the null becomes computationally intractable for large dimension r x c problems whether using the observed table probability (like Fisher's exact) as a test statistic or other like Chi-square statistic. So in short, yes you can use Fisher's exact on your 4 x 2 problem, and the result might differ from using a Chi-square statistic even if you compute the P-value for the Chi-square test by complete enumeration. Note that the minimum expected cell size for the Chi-square test is related to whether the Chi-square distributional approximation (an asymptotic argument) for evaluating the Chi-square statistic will be reasonable and is irrelevant if you calculate your probabilities by exact enumeration of all permutations. Brian S. Cade, PhD U. S. Geological Survey Fort Collins Science Center 2150 Centre Ave., Bldg. C Fort Collins, CO 80526-8818 [hidden email] tel: 970 226-9326 viostorm < [hidden email] [hidden email] 04/29/2011 01:23 PM Re: [R] fisher exact for > 2x2 table Sent by: [hidden email] After I shared comments form the forum yesterday with the biostatistician indicated this: "Fisher's exact test is the non-parametric analog for the Chi-square test for 2x2 comparisons. A version (or extension) of the Fisher's Exact test, known as the Freeman-Halton test applies to comparisons for tables greater than 2x2. SAS can calculate both statistics using the following proc freq; tables a * b / fisher;" Do people here still stand by position fisher exact test can be used for contingency tables ? Sorry to both you all so much it is just important a paper I am writing and planning to submit soon. ( I have a 4x2 table but does not meet expected frequencies requirements for chi-squared.) I guess people here have suggested R implements, the following, which unfortunately are unavailable at least easily at my library but at least the titles indicates it is extending it to RxC Mehta CR, Patel NR. A network algorithm for performing Fisher's exact test in r c contingency tables. Journal of the American Statistical Association Mehta CR, Patel NR. Algorithm 643: FEXACT: A FORTRAN subroutine for exact test on unordered r x c contingency tables. ACM Transactions on Mathematical Software 1986;12:154-61. The only reason I ask again is he is exceptionally clear on this point. Thanks again, viostorm wrote: > Thank you all very kindly for your help. > -Rob > -------------------------------- > Robert Schutt III, MD, MCS > Resident - Department of Internal Medicine > University of Virginia, Charlottesville, Virginia viostorm wrote: > Thank you all very kindly for your help. > -Rob > -------------------------------- > Robert Schutt III, MD, MCS > Resident - Department of Internal Medicine > University of Virginia, Charlottesville, Virginia View this message in context: Sent from the R help mailing list archive at Nabble.com. [hidden email] mailing list PLEASE do read the posting guide and provide commented, minimal, self-contained, reproducible code. [[alternative HTML version deleted]] [hidden email] mailing list PLEASE do read the posting guide and provide commented, minimal, self-contained, reproducible code.
{"url":"http://r.789695.n4.nabble.com/fisher-exact-for-gt-2x2-table-td3481979.html","timestamp":"2014-04-16T04:10:59Z","content_type":null,"content_length":"155140","record_id":"<urn:uuid:8f797435-04f7-4d7b-ba50-f2d50a058039>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
Need someone to write Hashtable code... April 22nd, 2012, 03:06 PM #1 Junior Member Join Date Apr 2012 Thanked 0 Times in 0 Posts Here are the requirements for what I need done: Write a Java program that implements two hashing tables using different hashing functions. Insert the same data items into both, and evaluate your results as follows: 1. Use the Random class to generate a minimum of 40 seven-digit integers. You will need to access the individual digits of these integers…you may leave them as numeric or convert them to strings. (You may find it beneficial to be able to test repeatedly with the same data, so you might want to provide a way of saving the generated data to a file. You might give your user the option of using an existing file or generating a new set of numbers each time you run your program.) 2. Implement your hash tables with arrays. You may use two separate arrays or one multi-column array. 3. Using your generated data, insert each item into the first of your hash tables using an extraction method hashing function, perhaps the last two digits of each seven digit number (or string). 4. Use a division method hashing function for the second. Base your divisor on the number of items you generate. 5. Use linear probing for collisions. 6. When you hash tables are fully populated, find each of your original data items by computing the appropriate hash key and searching each table. 7. Produce a formatted report with the following items: a. How many comparisons were necessary to find each item in each table? b. What is the total number of comparisons necessary to find the items in each table? c. What is the average number of comparisons necessary to find an item in each table? 8. Generate multiple iterations of random numbers. Do you results with multiple sets show a trend indicating that one hash function is better than the other for this type of data? I will pay $50 for writing the program. Payment will be done with paypal. Thanks for any help Here are the requirements for what I need done: Write a Java program that implements two hashing tables using different hashing functions. Insert the same data items into both, and evaluate your results as follows: 1. Use the Random class to generate a minimum of 40 seven-digit integers. You will need to access the individual digits of these integers…you may leave them as numeric or convert them to strings. (You may find it beneficial to be able to test repeatedly with the same data, so you might want to provide a way of saving the generated data to a file. You might give your user the option of using an existing file or generating a new set of numbers each time you run your program.) 2. Implement your hash tables with arrays. You may use two separate arrays or one multi-column array. 3. Using your generated data, insert each item into the first of your hash tables using an extraction method hashing function, perhaps the last two digits of each seven digit number (or string). 4. Use a division method hashing function for the second. Base your divisor on the number of items you generate. 5. Use linear probing for collisions. 6. When you hash tables are fully populated, find each of your original data items by computing the appropriate hash key and searching each table. 7. Produce a formatted report with the following items: a. How many comparisons were necessary to find each item in each table? b. What is the total number of comparisons necessary to find the items in each table? c. What is the average number of comparisons necessary to find an item in each table? 8. Generate multiple iterations of random numbers. Do you results with multiple sets show a trend indicating that one hash function is better than the other for this type of data? I will pay $50 for writing the program. Payment will be done with paypal. Thanks for any help Please check your PM Replied to your private message. Presuming this is homework - these forums are not here to help you cheat - they are here to help you learn. I Highly recommend trying the problem, and posting questions where you are stuck. If not, and if you do receive a solution, you should realize this is academic dishonesty, and those contributing to this behavior are doing the same - and do get banned from forums for such behavior, but more importantly do get you kicked out of school (for good reason) Code Tags | Java Tutorials | SSCCE | Getting Help | What Not To Do April 23rd, 2012, 01:52 PM #2 Junior Member Join Date Aug 2011 My Mood Thanked 1 Time in 1 Post April 23rd, 2012, 08:47 PM #3 Junior Member Join Date Apr 2012 Thanked 0 Times in 0 Posts April 23rd, 2012, 08:58 PM #4 Super Moderator Join Date Oct 2009 Thanked 779 Times in 725 Posts Blog Entries
{"url":"http://www.javaprogrammingforums.com/paid-java-projects/15287-need-someone-write-hashtable-code.html","timestamp":"2014-04-18T18:58:00Z","content_type":null,"content_length":"60004","record_id":"<urn:uuid:edcfbd4e-d19b-4e61-be7c-ae069959910a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Go4Expert - View Single Post - The background view of Multicore Computing Technology Hello sir,,,,,,I like that,,,,,,,"We have an interesting viewpoint for the multicore architecture. Far far times ago, someone started to talk about it. That, to increase our computation scalability dominating the red eyes of the thermal dissipation and the size of the processors, but the speed of the light of the total computation process. Lets think, for a microprocessor running at 1Gz frequency. So, the basic cycle is 1ns (10^-9 sec) of the computation process. So, now the speed of the light is 3*10^8m/sec in an empty space. Some experiments already have shown us that the speed of light will be 1/3 of the space in the Silicon. So, the speed will be 10^8 meter/second in Silicon........".......
{"url":"http://www.go4expert.com/articles/background-view-multicore-computing-post76555/","timestamp":"2014-04-20T03:44:45Z","content_type":null,"content_length":"6331","record_id":"<urn:uuid:231af83d-fcb6-47f0-812d-7e3ec8954f34>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Lewisville, TX Statistics Tutor Find a Lewisville, TX Statistics Tutor ...I'd like to help you master your subject and keep or improve your current grades. I have learned how to work with and motivate many different personality and learning types.I tutor high school, introductory and first chemistry. I am NOT available to tutor organic chemistry or biochemistry. 4 Subjects: including statistics, chemistry, economics, vocabulary ...I have tutored and taught the English/Verbal sections of the ACT, SAT, GRE, and GMAT. I have tutored all aspects of it, ranging from mechanics, with focus on punctuation and sentence structure, to rhetoric, with focus on conciseness and paragraph structure. I cannot guarantee success, but many ... 41 Subjects: including statistics, chemistry, ASVAB, logic ...My success in the classroom was always based on the fact that if I can make the student look forward to class then they will succeed. I know there are several tutors out there that charge less than I do but ask if they are certified math teachers or are they just filling in until they can get an... 15 Subjects: including statistics, chemistry, calculus, geometry ...The course includes units: Operation of Rational Numbers,Proportions and Percent, Algebraic reasoning, Transformation and dilation, 2d and Pythagorean Theorem, 3D wrap and filling, Data Analysis, Probability, Solving one Variable equations. I am a Texas state certified teacher (math 4-12), I tea... 20 Subjects: including statistics, calculus, physics, biology My goal is to provide you good foundations and a good understanding on the subject that you are going to learn. In addition, I will teach you techniques that you can use in other learning activities too. I have two masters (Physics and MBA) and I passed the dissertation defense for a PhD in Statistics. 7 Subjects: including statistics, Italian, Microsoft Excel, SQL
{"url":"http://www.purplemath.com/Lewisville_TX_Statistics_tutors.php","timestamp":"2014-04-17T01:22:01Z","content_type":null,"content_length":"24313","record_id":"<urn:uuid:17772450-a090-4c0a-80c5-da09596c7ff3>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
A family of generators of minimal perfect hash functions - Information Processing Letters , 1992 "... A new algorithm for generating order preserving minimal perfect hash functions is presented. The algorithm is probabilistic, involving generation of random graphs. It uses expected linear time and requires a linear number words to represent the hash function, and thus is optimal up to constant facto ..." Cited by 42 (0 self) Add to MetaCart A new algorithm for generating order preserving minimal perfect hash functions is presented. The algorithm is probabilistic, involving generation of random graphs. It uses expected linear time and requires a linear number words to represent the hash function, and thus is optimal up to constant factors. It runs very fast in practice. Keywords: Data structures, probabilistic algorithms, analysis of algorithms, hashing, random graphs , 1994 "... An ordered minimal perfect hash table is one in which no collisions occur among a predefined set of keys, no space is unused and the data are placed in the table in order. A new method for creating ordered minimal perfect hash functions is presented. It creates hash functions with representation spa ..." Cited by 1 (0 self) Add to MetaCart An ordered minimal perfect hash table is one in which no collisions occur among a predefined set of keys, no space is unused and the data are placed in the table in order. A new method for creating ordered minimal perfect hash functions is presented. It creates hash functions with representation space requirements closer to the theoretical lower bound than previous methods. The method presented requires approximately 17% less space to represent generated hash functions and is easy to implement. However, a high time complexity makes it practical for small sets only (size ! 1000). Keywords: Data Structures, Hashing, Perfect Hashing 1 Introduction A hash table is a data structure in which a number of keyed items are stored. To access an item with a given key, a hash function is used. The hash function maps from the set of keys, to the set of locations of the table. If more than one key maps to a given location, a collision occurs, and some collision resolution policy must be followed. O...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3380897","timestamp":"2014-04-16T10:12:16Z","content_type":null,"content_length":"15172","record_id":"<urn:uuid:34bc7909-9992-44ce-a868-b4abc4d2d93f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Prizes MCA2013 This morning the winners of the Mathematical Congress of the Americas received their awards: the MCA prize, the Americas Prize and the Solomon Lefschetz medals The following prize winners of the Mathematical Congress of the Americas Award had been chosen by the Awards Committee. was born in Santiago de Chile in 1976. He got his PhD at ÉcoleNormaleSupérieurede Lyon. His mathematical work focuses on interactions between Group Theory and Dynamical Systems, with incursions in Geometry, Topology, and Probability. His first major result concerns Kazhdan group actions on the circle. was born in 1976 in Bogotá, Colombia. He got his Ph. D. in Berkeley.He is a model theorist whose contributionshave given common ground to the two major subareas of model theory: stability theory and o-minimality.He has been Chair of the Department of Mathematic at the University of Los Andes where he is a full professor. was born in México city in 1976. He got his PhD at the University of Paris. He has developed his career in the Centro de Investigación en Matemáticas CIMAT at Guanajuato since 2005 in the areas of Probability and Stochastic Processes. His major contributions areLévy processes, self-similar Markov processes, general Markov processes and regenerative sets. was born in São Paulo, Brazil in 1976. He obtained his PhD at the University of Texas at Austin, under Prof. Luis A. Caffarelli. His research concerns non-linear partial differential equations and their applications. Teixeira hasmade contributions to the theory of fully nonlinear equations, degenerate elliptic and parabolic PDEs, free boundary theory, and geometric measure analysis. was born in 1987 in Buenos Aires, Argentina. He received his Ph. D. from the University of Buenos Aires. Walsh works in ergodic theory and his major contributionshave focused on the limiting behavior of nonconventional ergodic averages. received his Ph. D. from the University of California at Berkeley in 1966. His career combines a strong interest in research in the mathematical area of algebraic geometry with a lifelong interest in undergraduate education and the promotion of mathematics in the developing world. Early in his carreer he spent two years as a Peace Corps volunteer in Peru and Chile. He has been awarded with a Laurea Honoris Causa by the University of Turin, in Italy and an HonorisCausa Doctorate from the University of Santiago in Chile. Professor Clemens was not able to join us today in this ceremony, but in his statement of acceptance he donates “the monies connected with this award to UMALCA, whose efforts in the less mathematically developed countries of the region have reached and continue to reach so many, to such great effect”. was founded in a meeting held in 1995, at IMPA, Brasil, where the mathematical societies from Argentina, Brasil, Chile, Colombia, Cuba, México, Perú, Uruguay and Venezuela decided to create a regional organization to foster cooperation and academic exchange between mathematicians. The Union received later mathematical societies from Ecuador, Paraguay, Bolivia and Costa Rica. Flagship activities of UMALCA are the Latin American Congress of Mathematicians and the Emalca schools organized in places and regions where mathematics is less developed. These schools receive academic and financial support from institutions in several Latin American countries and from the International Center for Pure and Applied Mathematics, CIMPA in France. The President of the Executive Committe of UMALCA Servet Martínez receives the Prize. was born in Buenos Aires, Argentina, in 1948. He completed his Ph. D. in mathematics at the Universidad de Buenos Aires. In 1973, he moved to the United States where he has held positions at the University of Minnesota, the University of Chicago, the Courant Institute, the Institute for Advanced Study and the University of Texas at Austin. Since his early work on free boundary problems his extraordinary talent and intuition began to show. Among his many important contributions we mention the study of fully nonlinear elliptic partial differential equations and his joint work with Kohn and Nirenberg on partial regularity of solutions of the incomprensible Navier-Stokes equation in 3 space dimensions. He is a member of the National Academy of Sciences, the American Academy of Arts and Sciences, the Academy of Medicine, Engineering and Science of Texas, the Association for Women in Mathematics. He has received many honorary distinctions and prizes. was born in Uberaba, in the Brazilian state of Minas Gerais in 1940. He obtained his Ph.D. degree at the University of Berkeley under the supervision of Steve Smale, a 1966 Fields Medalist. In his thesis, he proved that gradient-like dynamical systems in lower dimensions are stable. This remarkable result was later extended with Smale to all dimensions and they formulated the famous Stability Conjectures that proposed precise conditions for a dynamical system to be stable. In 1968, Palis returned to Rio de Janeiro to undertake a career at the Instituto de Matemática Pura e Aplicada (IMPA), an institution of which he would rapidly become part of the soul and a main driving force. His influence and leadership were fundamental in making IMPA one of the finest scientific centers in the developing world and a reference for excellence in mathematics in global terms. In the early 1980’s, Palis pioneered another major topic in dynamics, the theory of homoclinic tangencies. Later, Palis formulated a series of conjectures on the key mechanism underlying global instabilities which have been a central topic of research in the area in the last decade or so. The influence of Jacob Palis in the mathematics of the Latin America are deep and numerous; among others, the creation and continuous support of the work of UMALCA.
{"url":"http://www.mca2013.org/prizes.html","timestamp":"2014-04-21T07:49:22Z","content_type":null,"content_length":"51907","record_id":"<urn:uuid:d03e0b45-6444-4232-a622-16b17a8c5687>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
AC Circuits Terminology: Phase shift vs Phase Angle Good question war485, these terms are often mixed up, and yes they are different. We talked about AC circuits and phase shifts were discussed. ..... It's like a sinusoid so fine. So in alternating circuits the instantaneous voltage or current may be represented by V = V as a function of time. Now you are actually taking the cosine of an angle so (ωt) is really an angle, say θ. This angle is called the phase angle and it tells us "How far along the cycle the voltage wave is, relative the the voltage at zero, V 1) This applies to all waves, not just electrical ones 2) I have used cos since it is not zero at θ = zero) The times t and t correspond to two angles θ and θ [2] The difference between these angles is called phase difference. Now consider a second wave, counted from the same (arbitrary) zero point in time. There is no reason for this wave to peak at the same instant as the first, even if it is of the same frequency. If the second wave does have the same frequency, the difference in time and therefore θ = ωt , is the difference in angle is [itex]\varphi[/itex] = (θ - θ ) between the time of occurrence of the peak value for the second wave and the peak of the first. [itex]\varphi[/itex] is called the phase shift of the second wave relative to the first. Since θ = (θ + [itex]\varphi[/itex]) we can write V = V + [itex]\varphi[/itex]) To plot it on the same axes as the first wave. The phase shift may be positive or negative and this corresponds to a shift forwards or backwards along the horizontal axis. I have used cos rather than sin since it peaks at zero. We need to compare (positive) peaks since the waves may be sloping backwards or forwards where they cross zero. The peaks are the only values that occur exactly once in a cycle. Every other point in the cycle occurs more than once. Does this help?
{"url":"http://www.physicsforums.com/showthread.php?p=4167374","timestamp":"2014-04-17T21:29:07Z","content_type":null,"content_length":"32046","record_id":"<urn:uuid:c62b159e-1b70-4ab5-baf8-acefa1e5cb1f>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
triangular number Think of a rack of billiard balls, or Pascal's Triangle. You start with 1, add two to get 3, add three to get 6, add four to get 10, add five to get 15, and so forth, each time adding the next natural number to the previous sum. The numbers you produce using this algorithm are the triangular numbers. The nth triangular number is defined as the number of elements in a triangle with n rows, that is: n+(n-1)+(n-2)+...+2+1. Because of this, the nth triangular number can be quickly calculated for any even n: The nth triangular number = n+(n-1)+(n-2)+...+(n-n/2)+...+2+1 = n+(n-1)+1+(n-2)+2+...+(n-n/2) = n+n+n+...+n (repeated n/2 times) +(n-n/2) = n*(n/2)+n-(n/2) = n(n/2+1-1/2) = n(n/2+1/2) = n(n+1)/2 Similarly, for any The nth triangular number = n+(n-1)+(n-2)+...+(n-(n-1)/2)+(n-1)/2+...+2+1 = n+(n-1)+1+(n-2)+2+...+(n-((n-1)/2)+((n-1)/2)) = n+n+n+...+n (repeated 1+(n-1)/2 times) = n+n+n+...+n (repeated (n+1)/2 times) = n(n+1)/2 We can generalize that the nth triangular number equals n(n+1)/2 for any natural number n, and if we like, prove it using mathematical induction
{"url":"http://www.everything2.com/title/triangular+number","timestamp":"2014-04-21T07:05:17Z","content_type":null,"content_length":"44181","record_id":"<urn:uuid:a1de3b87-eb32-4807-89cf-e0477782e97e>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] interrupting large matrix operations David Cournapeau cournape@gmail.... Fri Sep 10 07:22:03 CDT 2010 On Fri, Sep 10, 2010 at 9:05 PM, Tom K. <tpk@kraussfamily.org> wrote: > OK, I know it's my problem if I try to form a 15000x15000 array and take the > cosine of each element, but the result is that my python session completely > hangs - that is, the operation is not interruptible. > t=np.arange(15360)/15.36e6 > t.shape=(-1,1) > X=np.cos(2*np.pi*750*(t-t.T)) > <hangs indefinitely> > I'd like to hit "control-c" and get out of this hung state. > What would it take to support this? It is difficult to support this in every case. The basic way to handle ctr+c is to regularly check whether the corresponding signal has been sent during computation. The problem is when to check this - too often, and it can significantly slow down the processing. For ufuncs, I am a bit surprised it is not done until the end of the processing, though. What happens exactly when you do Ctrl+C ? It may take a long time, but it should raise a keyboard interrupt at the end (or after the intermediate computation t-t.T which may take quite some time too). > (I'm running ancient numpy and python at work, so if this is already > supported in later versions, my apologies) What does ancient mean ? Could you give us the version (numpy.__version__) More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-September/052701.html","timestamp":"2014-04-16T16:44:17Z","content_type":null,"content_length":"4114","record_id":"<urn:uuid:ef09f57b-02a6-44d9-9a63-617b53b68922>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
illustrate the sum of interior angles theorem Illustrate That Interior Angles of a Triangle Sum to 180° You can illustrate the fact that the sum of the interior angles of a triangle is 180° by folding the triangle. 1. Cut out a triangle of any shape. Orient the triangle so the longest side is the base. 2. Fold the base onto itself so that the crease goes through the topmost vertex of the triangle. This crease will be the altitude of the triangle. 3. Fold the vertex down so it touches the base at the altitude. (This crease is a midsegment of the triangle). 4. Fold the other two vertices so that all three meet at the same point. 5. See that all angles form a linear angle and sum to 180°.
{"url":"http://www.cutoutfoldup.com/406-illustrate-that-interior-angles-of-a-triangle-sum-to-180-.php","timestamp":"2014-04-18T10:36:34Z","content_type":null,"content_length":"5236","record_id":"<urn:uuid:dc7bca7a-3cae-4b95-8233-f70a9b01367c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Class Forcing and Genericity: Predense sets vs Dense classes up vote 9 down vote favorite In short my question is: why do we use definable classes in the definition of genericity for class forcing, instead of predense sets. To elaborate, in Sy's book and indeed other sources on the subject of class forcing, $G$ is generic if it meets every definable, dense class of $M$, the ground model; a natural extension of the usual situation. Of course classes can get a bit tricky so one would naturally wonder whether one can use predense sets in their place: intuitively the answer should be "no," since otherwise we'd probably go ahead and do so. Indeed, Stanley outright says that this "definable genericity" is stronger than "internal genericity" [1]. Thus my question is really twofold: 1. [S:What is the problem with the standard argument that genericity defined for predense sets is the same::S] Let $\mathbb P$ be a definable class forcing. Let $D$ be a predense set of $M$, let $\tilde D$ be the class of extensions of $D$, which is definable ($\tilde D=\{p\in\mathbb P\mid \exists q\in D (p\leq q)\}$) and dense (as usual: if $p\in\mathbb P$ then there is a $q\in D$ compatible with $p$, hence an $r \in \tilde D$ extending $q$.) So by definable genericity, there is $r\in G\cap\ tilde D$, which must come from extending a $q\in D$, which is also in $G$ since it's a filter. I assume I'm doing something naughty with those proper classes, but I don't see where. So, with a little more thought it's obvious that it's the other direction that's the issue, i.e., if $G$ is internally generic, when is it definably generic. Thus I would like to know about this with respect to the pretameness condition that Sy defines: $\mathbb P$ is pretame iff given a $p$, any $M$-definable sequence of dense classes can be refined to a sequence (in $M$) of predense sets below some $q\leq p$. 2. Does class forcing work out OK if we use predense sets? Can the extension satisfy ZFC(-) (in the language with a predicate for the generic) 3. If not in general, are there useful conditions which make it work? 4. What are some examples of cases where the difference matters? I'm actually most interested in the situation of Prikry forcing over a model with an $M$-ultrafilter. [1] Stanley, M.C., 2003, Outer Models and Genericity. JSL. set-theory forcing I wonder... – Asaf Karagila Feb 3 at 17:16 add comment 1 Answer active oldest votes Let me point out that meeting all pre-dense sets is not generally the same as meeting all dense classes. Consider the forcing $\mathbb{P}$ that adds a generic function from $\text{Ord}$ to $V$. (One can use this forcing to force global choice — see Victoria Gitman's account.) That is, conditions are functions $p:\alpha\to V$ for some ordinal $\alpha$, and the order is extension of functions. If a filter $G\subset\mathbb{P}$ meets all dense classes, then it is easy to see that $\cup G$ is a surjective function total function from $\text{Ord}$ to $V$: for any set $x$, it is dense that the conditions have $x$ in their range, and for any ordinal $\alpha$, it is dense that the conditions are defined at $\alpha$. But that argument breaks down completely if one tries to use only predense sets: the reason is that the forcing $\mathbb{P}$ has no pre-dense sets at all! (except for sets that contain the up vote empty function) For any set $B$ of nontrivial conditions, there is a condition $p$ that is incompatible with every element of $B$. So it turns out that every filter vacuously meets all 9 down pre-dense sets, and the forcing technology would not be doing what we want. You might reply that this example is because we are using the partial order, rather than the Boolean algebra. But that reply has problems, because in a case like this, there is no way to formalize the Boolean completion without going to meta-classes, as the antichains are generally proper classes here. So we cannot so easily go to the Boolean completion of the forcing. One can transform $\mathbb{P}$ into forcing that also adds sets, for example, by using the generic filter to determine what happens next, for example, to code $G$ into the GCH pattern. add comment Not the answer you're looking for? Browse other questions tagged set-theory forcing or ask your own question.
{"url":"http://mathoverflow.net/questions/156557/class-forcing-and-genericity-predense-sets-vs-dense-classes","timestamp":"2014-04-20T03:42:34Z","content_type":null,"content_length":"55366","record_id":"<urn:uuid:7cd7a5dd-8bf7-4ade-98d6-b86f8484c216>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus Please Help Posted by Catherine on Tuesday, March 15, 2011 at 6:09pm. A) If x^2+y^3−xy^2=5, find dy/dx in terms of x and y. B) Using your answer for dy/dx, fill in the following table of approximate y-values of points on the curve near x=1 y=2. 0.96 ______ 0.98 ______ 1.02 ______ 1.04 ______ C) Finally, find the y-value for x=0.96 by substituting x=0.96 in the original equation and solving for y using a computer or calculator. y(0.96)= ________ D) How large (in magnitude) is the difference between your estimate for y(0.96) using dy/dx and your solution with a computer or calculator? • Calculus Please Help - bobpursley, Tuesday, March 15, 2011 at 6:12pm I am not going to do this for you. What are you stuck on, or dont understand? • Calculus Please Help - Catherine, Tuesday, March 15, 2011 at 6:27pm In the first part... I got (-2x+y^2)/(3y^2+x2y) but apparently that's wrong which will make everything else wrong.... I just need to know how to do that part right... • Calculus Please Help - Catherine, Tuesday, March 15, 2011 at 7:01pm I see what is wrong with my equation... now I have (-2x+y^2)/(3y^2-x2y) and for the second part I tried to plug 1 for x and all the other values for y and then in the third part I plugged 2 for y and 0.96 for x and I got them all wrong: my values are: 0.96 = -1.2765 0.98= -1.1285 1.02= -0.8875 1.04= -0.7885 and for part C) 0.255 and For part D) -1.5315 Related Questions Calculus (pleas help!!!) - A) If x^2+y^3−xy^2=5, find dy/dx in terms of x ... calculus - If x^{2} + y^3 - x y^2 = 5, find dy/dx in terms of x and y. Using ... Calculus - Use implicit differentiation to find dy/dx if cos xy = 2x^2 - 3y. I'm... Calculus - I am unsure of how to take the derivative of this equation. It may be... Calculus (repost) - Use implicit differentiation to find dy/dx if cos xy = 2x^2... Math, Calculus - help Please... Differential Equations Problems... please give ... Match, Calculus - help Please... Differential Equations Problems... please give ... Math - help Please... Differential Equations Problems... please give the ... Calculus - How do you us implicit differentiation to solve x( y + 2 )^5 = 8 x( y... calc - 1 + x = sin(xy^2) find dy/dx by implicit differentiation 0 + 1 = cos(xy^2...
{"url":"http://www.jiskha.com/display.cgi?id=1300226989","timestamp":"2014-04-16T10:38:18Z","content_type":null,"content_length":"9778","record_id":"<urn:uuid:ee2e9415-c510-4517-b8fd-b07028e92cd0>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiplying Complicated Polynomials - Concept When multiplying polynomials, sometimes we come across complicated polynomials that we can use substitution and multiplication to solve. When we have a trinomial where a binomial follows the (a-b) (a+b) format, simply subsitute the entire binomial in for one of the variables and simplify. Multiplying more complicated polynomials, so we know that when we multiply a minus b times a plus b we can foil this up. This a got distributed to both items over here, this negative b got distributed by the both items and we end up with another polynomial. So in doing that we know we end up with the a times the a which is a squared, distributing this a to the b and the a to the negative b those 2 are equal and opposite so those cancel out to nothing and we have the b times the negative b ending up with negative b squared. So the standard foil operations. If we want to multiply out these 2 polynomials things get a little bit more complicated, we could do the exact same approach as we did before which would be take this 3x multiply it by the 3x the 1 and the 3y, take the 1 add it by each of these 3 and the negative 3 add to each 3 and then combine like terms. That seems like a pretty long arduous process, so what I want to do is take a step back and look at this and see if there's anything that we can do to make our life easier okay. So what I see is there's a 3x plus 1 and a 3x plus 1 in both okay. So I'm just sort of group those off to the side and sort of distinguish them for the rest of the problem. We also then have a minus 3y and a plus 3y okay. So we have one item minus something else and that same item plus something else. There's actually a really close relationship between this problem and this one up here. So what we can do is if we say let a equal 3x plus 1 and let b equal 3y what we've actually done is turn this equation into a minus b and this equation into a plus b. We just did this calculation up here so we know that this is going to be a squared minus b squared okay. More specifically we know that a is a 3x plus 1 so this turns into 3x plus 1 squared and our b turns into 3y squared okay. Now let's say we're dealing with having to multiply our 3 terms we just have to square something a lot easier, so we just would have to foil this out becomes 9x squared plus we're going to have the 2 of the 1 times the 3x that becomes 6x plus 1 and 3y squared, square goes to both thing so this ends up minus 9 squared. So using a little bit of a shortcut started seeing some similarities in these 2 equations we can make a substitution and make our life a lot easier than if we had to take each of these elements and distribute it through. multiplying polynomials shortcuts
{"url":"https://www.brightstorm.com/math/algebra/polynomials/multiplying-complicated-polynomials/","timestamp":"2014-04-21T05:11:51Z","content_type":null,"content_length":"59120","record_id":"<urn:uuid:3abbc702-eef5-4f88-aaf0-3e7f3634ce96>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
Miami Gardens, FL Geometry Tutor Find a Miami Gardens, FL Geometry Tutor ...If you really want to know something, you have to figure it out, investigate, experiment, read, question, form hypotheses, test them, find the similarities and differences, connect the dots, notice patterns, and then take your acquired knowledge with a grain of salt. Be humble, especially when y... 61 Subjects: including geometry, English, Spanish, reading ...I have knowledge on how to successfully pass the end of year exam. With my knowledge, patience and dedication, success is a plus. Currently I am certified to teach Math K-12. 18 Subjects: including geometry, chemistry, biochemistry, cooking ...I have a proven track record of getting my student's A's. I firmly believe that with a proper structure and customized guideline every student has the ability to succeed in their class. My love for teaching stems from my passion to learn. 15 Subjects: including geometry, chemistry, calculus, physics ...I am uniquely qualified to tutor the GMAT because of the diversity of courses of study that I have pursued. I have joint majors in math and economics, graduate studies in business administration, and doctoral studies in philosophy and logic. I have been a business owner, and I have also taught ... 24 Subjects: including geometry, calculus, statistics, GRE ...The session begins at the scheduled time. 2. Sessions are at least one hour. 3. If you do not cancel a session within the given cancellation period you will still be charged for the session.I took intro to Advanced Math which is like Discrete Math on steroids. 13 Subjects: including geometry, calculus, algebra 1, algebra 2
{"url":"http://www.purplemath.com/miami_gardens_fl_geometry_tutors.php","timestamp":"2014-04-19T04:51:33Z","content_type":null,"content_length":"24185","record_id":"<urn:uuid:5015a478-9273-42c1-8d03-f6c83096eb5f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
enriched functor enriched functor Enriched category theory Could not include enriched category theory - contents Enriched functors are used in place of functors in enriched category theory: like functors they send objects to objects, but instead of mapping hom-sets to hom-sets they assign morphisms in the enriching category between hom-objects, while being compatible with composition and units in the obvious way. Given two categories $C, D$enriched in a monoidal category $V$, an enriched functor $F: C \to D$ consists of • A function $F_0: C_0 \to D_0$ between the underlying collections of objects; • A $(C_0 \times C_0)$-indexed collection of morphisms of $V$, $F_{x, y}: C(x, y) \to D(F_0x, F_0y)$ [where denotes the hom-object in ], compatible with the enriched identities and compositions of $C$ and $D$; • such that the following diagrams commute for all $a, b, c \in C_0$: □ respect for composition: $\array{ C(b,c) \otimes C(a,b) &\stackrel{\circ_{a,b,c}}{\to}& C(a,c) \\ \downarrow^{F_{b,c} \otimes F_{a,b}} && \downarrow^{F_{a,c}} \\ D(F_0(b), F_0(c)) \otimes D(F_0(a), F_0(b)) &\stackrel {\circ_{F_0(a),F_0(b), F_0(c)}}{\to}& D(F_0(a), F_0(c)) }$ □ respect for units: $\array{ && I \\ & {}^{j_a}\swarrow && \searrow^{j_{F_0(a)}} \\ C(a,a) &&\stackrel{F_{a,a}}{\to}&& D(F_0(a), F_0(a)) }$ The standard reference on enriched category theory is • Max Kelly, Basic Concepts of Enriched Category Theory (web) Revised on March 11, 2014 02:53:21 by Urs Schreiber
{"url":"http://www.ncatlab.org/nlab/show/enriched+functor","timestamp":"2014-04-21T04:33:18Z","content_type":null,"content_length":"26559","record_id":"<urn:uuid:4fef88a6-ab0b-4a27-a6e8-5c3ca6a452a4>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter 17: The Product Rule and Differentiating Vectors Home | 18.013A Tools Glossary Index Up Previous Next Chapter 17: The Product Rule and Differentiating Vectors The product rule for differentiation applies as well to vector derivatives. In fact it allows us to deduce rules for forming the divergence in non-rectangular coordinate systems. This can be accomplished by finding a vector pointing in each basis direction with 0 divergence. 17.1 Introduction 17.2 The Product Rule and the Divergence 17.3 The Divergence in Spherical Coordinates
{"url":"http://ocw.mit.edu/ans7870/18/18.013a/textbook/HTML/chapter17/contents.html","timestamp":"2014-04-17T16:18:24Z","content_type":null,"content_length":"2147","record_id":"<urn:uuid:a71d33b5-f17b-42d5-acee-d6b8fb9e1d90>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Vocabulary 1. in a right triangle, the side opposite the right angle 6. inequalities that have the same solution (2 Words) 7. the sum of the lengths of the sides 8. a solid formed by polygons 11. the distance around a circle 14. the shape of a circle 15. the distance from the center of a circle to any point on the circle 16. when you multiply a number by itself its called a (2 Words) 18. a fixed point, all points in a circle are the same distance from. 19. the ratio of the circumference and diameter of any circle 21. an equation that states that two ratios are equivalent 22. the number of square units occupied by the space inside the circle 23. a circle has 360 of these 24. a term that has a variable is the number part of the term
{"url":"http://www.armoredpenguin.com/crossword/Data/2013.04/1014/10141004.346.html","timestamp":"2014-04-17T10:06:16Z","content_type":null,"content_length":"106307","record_id":"<urn:uuid:d60411c2-fa0e-4f2c-be3f-c79b811b5c2e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00335-ip-10-147-4-33.ec2.internal.warc.gz"}