[Numpy-discussion] RE: default axis for numarray
Perry Greenfield perry at stsci.edu
Tue Jun 11 11:53:02 CDT 2002
<Eric Jones writes>:
<Konrad Hinsen writes>:
> > What needs to be improved in that area?
> Comparisons of complex numbers. But lets save that debate for later.
No, no, let's do it now. ;-) We, for one, would like to know
what should be done for numarray.
If I might be presumptuous enough to anticipate what Eric would
say, it is that complex comparisons should be allowed, and that
they use all the information in the complex number (real and imaginary)
so that they lead to consistent results in sorting.
But the purist argues that comparisons of complex numbers are
meaningless. Well, yes, but there are cases in code where you
don't wish such comparisons to cause an exception. And even
more importantly, there is at least one case which is practical.
It isn't all that uncommon to want to eliminate duplicate values
from arrays, and one would like to be able to do that for
complex values as well. A common technique is to sort the values
and then eliminate all identical adjacent values. A predictable
comparison rule would allow that to be easily implemented.
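The sort-then-scan technique reads, in modern Python terms, something like the following sketch (illustrative only, not part of the original message; it uses Python's lexicographic ordering of tuples to impose a total order on complex values):

```python
# A sketch of the technique described above: order complex values by
# real part first, then imaginary part, so that sorting is predictable
# and adjacent-duplicate elimination works.

def dedup_complex(values):
    ordered = sorted(values, key=lambda z: (z.real, z.imag))
    out = []
    for z in ordered:
        if not out or z != out[-1]:
            out.append(z)       # keep only the first of each run of equals
    return out

vals = [1 + 2j, 3 + 0j, 1 + 2j, 1 - 2j, 3 + 0j]
print(dedup_complex(vals))      # -> [(1-2j), (1+2j), (3+0j)]
```

Any consistent total order would do for this purpose; lexicographic on (real, imaginary) is simply the most obvious one.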
Eric, am I missing anything in this? It should be obvious that we
agree with his position, but I am wondering if there are any arguments
we have not heard yet that outweigh the advantages we see.
Increasing and Decreasing Functions
February 27th 2008, 12:05 PM #1
Junior Member
Feb 2008
Increasing and Decreasing Functions
Hi, I am stuck on this problem, the instructions indicate to find the intervals of increase and decrease for the given function.
h(u) = square root of 9-u^2 The square root is over the entire term.
We were told to differentiate first, and then to factor to find the zeros. When I try to differentiate I am getting 3-(1/2)X^-(3/2) This doesn't look right so I need some clarification as to
whether I am differentiating correctly.
BTW The final answer should be:
h(u) is increasing for -3< u < 0
h(u) is decreasing for 0< u < 3
$h(u) = \sqrt{9 - u^2}$.
From the chain rule: $h^{'}(u) = \frac{1}{2} (9 - u^2)^{-1/2} \times (-2u) = -\frac{u}{\sqrt{9 - u^2}}$
where I've used the index laws $\sqrt{a} = a^{1/2}$ and $a^{-1/2} = \frac{1}{a^{1/2}} = \frac{1}{\sqrt{a}}$.
Important note: h(u) is only defined for $-3 \leq u \leq 3$ .....
So was I correct then? 3-(1/2)U^-(1/2) is the derivative?
Will this factor?
Unfortunately not, no.
Are you familiar with the chain rule? Here it is: $\frac{d}{du}\,f(g(u)) = f'(g(u))\,g'(u)$.
This theorem should be memorized, if you haven't already. So, applying it here, we can see the benefits. Our initial function: $h(u) = \sqrt{9 - u^2}$.
Let $g(u)=9-u^2$
Can you finish it up?
Sorry, but I can't see how either of the answers you've come up with are the same as the correct answer (see my first reply) of
$h^{'}(u) = -\frac{u}{\sqrt{9 - u^2}}$.
The solution to $h^{'}(u) = 0$ is clearly u = 0, obtained by setting the numerator equal to zero. The factoring business you mentioned is completely irrelevant .......
1. $h^{'}(u) < 0$ when u > 0 AND $-3 < u < 3$ (recall the important note from my earlier reply and also look at where $h^{'}(u)$ is undefined), that is, $0 < u < 3$.
2. $h^{'}(u) > 0$ when yada yada
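As a quick numerical sanity check of that sign analysis (a sketch, not from the thread itself):

```python
# h(u) = sqrt(9 - u^2) should increase on (-3, 0) and decrease on (0, 3),
# matching h'(u) = -u / sqrt(9 - u^2) being positive, then negative.
import math

def h(u):
    return math.sqrt(9 - u**2)

eps = 1e-6
for u in (-2.5, -1.0, -0.1):        # increasing region
    assert h(u + eps) > h(u)
for u in (0.1, 1.0, 2.5):           # decreasing region
    assert h(u + eps) < h(u)
print("sign pattern confirmed")
```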
I figured it out with the chain rule finally, thanks guys.
Why We Should Switch To A Base-12 Counting System
Humans, for the most part, count in chunks of 10 — that's the foundation of the decimal system. Despite its near-universal adoption, however, it's a completely arbitrary numbering system that emerged
for one very simple reason: We have five fingers on each hand. But as many mathematicians like to point out, base-10 is not without its problems. The number 12, they argue, is where it's really at.
Here's why we should have adopted a base-12 counting system — and how we could still make it work.
Indeed, it's regrettable that we failed to evolve an ideal set of fingers to help us come up with a numbering system suitable for counting and calculating. Instead, with our 10 fingers, we are stuck
with the clunky decimal system.
Taking a closer look at base-10, we can see how frustratingly limited it really is. Ten has a paltry two factors (a factor being a divisor that produces a whole number), namely 5 and 2. Moreover,
these numbers are not very useful in and of themselves; 5 is a prime number that cannot divide any further, and 2 is a frustratingly small integer to work with.
Defenders of base-10 highlight its ability to allow for the moving of fraction points after multiplication or division — but that's not a trait exclusive to base-10. It's not ten-ness that allows for
this property. More accurately, it's a characteristic that belongs to all bases — a property of the place value notation we use for expressing numbers, along with a symbol for zero.
Interestingly, base-10 is not universal across human societies. The Mayans were known to use a base-20 system, and the Babylonians developed a system using sets of 60. Base-8 and base-16 (the
hexadecimal system) have also been used, mostly for computational reasons (quarters and eighths are simplified).
But these alternative sets are still not ideal for day-to-day, human applications. Base-20 is not great for finger counting; most of us wear shoes when we're doing math, and we can't move our toes
with much dexterity. Base-8 is simply too small, and base-16 and base-60 are too unwieldy.
Luckily, there's a base that sits in between these — a numbering system that has a plethora of characteristics that simply make it the best choice for counting and calculating.
Introducing the Dozenal System
Also called the duodecimal system, the "dozenal" system was initially popularized in the 17th century when mathematicians began to recognize the limitations of base-10.
Later, during the 1930s, F. Emerson Andrews published a book, New Numbers: How Acceptance of a Duodecimal Base Would Simplify Mathematics, in which he cogently argued for the change. He noticed that,
due to the myriad occurrences of 12 in many traditional units of weight and measures, many of the advantages claimed for the metric system could also be adopted by the dozenal system.
Indeed, examples of base-12 systems abound. A carpenter's ruler has 12 subdivisions, grocers deal in dozens and grosses (12 dozen equals a gross), pharmacists and jewelers use the 12 ounce pound, and
minters divide shillings into 12 pence. Even our timing and dating system depends on it; there are 12 months in the year, and our day is measured in 2 sets of 12. Additionally, in geometry, a circle
is replete with subsets and supersets of 12 when measured in degrees (a 360-degree circle consists of 30 sets of 12).
It's also obvious that someone in our history was thinking along these lines. It's the largest number with a single-morpheme name in English (i.e. the word "twelve"). After that, we hit thirteen,
fourteen, fifteen, and so on — derivatives of three, four and five. Clearly, it was natural to think in terms of dozens.
Three decades after Andrews's book, the brilliant mathematician A. C. Aitken made a similar case. Writing in The Listener in 1962, he noted:
The duodecimal tables are easy to master, easier than the decimal ones; and in elementary teaching they would be so much more interesting, since young children would find more fascinating things
to do with twelve rods or blocks than with ten. Anyone having these tables at command will do these calculations more than one-and-a-half times as fast in the duodecimal scale as in the decimal.
This is my experience; I am certain that even more so it would be the experience of others.
Since the time of Andrews and Aitken, the dozenal movement has garnered a number of enthusiastic supporters, including the advent of the Dozenal Society of America and the Dozenal Society of Great
Britain. The basic argument from these so-called dozenalists is that it makes mathematics easier to conceptualize and understand, especially for children and students. Here's why they're right.
It's All About the Factors
First and foremost, 12 is a highly composite number — the smallest number with exactly four divisors: 2, 3, 4, and 6 (six if you count 1 and 12). As noted, 10 has only two. Consequently, 12 is much
more practical when using fractions — it's easier to divide units of weights and measures into 12 parts, namely halves, thirds, and quarters.
Moreover, with base-12, we can use these three most common fractions without having to employ fractional notations. The numbers 6, 4, and 3 are all whole numbers. On the other hand, with base-10, we
have to deal with unwieldy decimals, ½ = 0.5, ¼ = 0.25, and worst of all, the highly problematic ⅓ = 0.333333333333333333333.
And similar to the base-16 hexadecimal system, the dozenal system is exceptionally friendly to computer science. The number 12 has two prime factors, 2 and 3. This means that the reciprocals of all
3-smooth numbers (numbers whose only prime factors are 2 and 3), such as 2, 3, 4, 6, 8, and 9, have a terminating representation in duodecimal (we'll get to counting in duodecimal in just a bit).
Twelve happens to be the smallest base with this feature, making it an extremely efficient number for computing fractions — an advantage it holds over the decimal, vigesimal, binary, octal, and
hexadecimal systems.
Interestingly, the dozenal system would also make it easier to tell time. Five minutes is a 12th of an hour, so instead of saying "five past one," we could say "one and a twelfth" hours. Ten past one
would be 1;2, a quarter past one 1;3, and so on (the symbol ";" is used as the fractional point).
But this would require a new clock. For it to work, both the hour hand and the minute hand would point to the precise time. In the conventional decimal clock, the minute hand awkwardly points to a
number that has to be multiplied by five.
Notation and Pronunciation
As you look at the clock graphic above, you're probably wondering what those funny symbols and words are. That's because, for base-12 to work, we need to add two new single-digit symbols for the
values ten and eleven (remember, these are digits, not alphabetic characters and not the two-digit strings "10" and "11"; in positional notation, "12" means one complete set of the base, hence the
1 in the first column, plus 2 additional units in the second column).
Recognizing the advantages of a base-12 system, Andrews designed a new notation to account for two new numbers. Instead of using "A" and "B" for 10 and 11 (as per the hexadecimal system), Andrews
suggested a script X (U+1D4B3) and E (U+2130), with 10 duodecimal representing 12 decimal. So the first 12 numbers would look like 1, 2, 3, 4, 5, 6, 7, 8, 9, X, E, 10.
Others have suggested that 10 could be written as "T" and the number eleven "E." Mathematician Isaac Pitman wanted to use a rotated "2" for ten and a reversed "3" for eleven (as per the clock above).
Other schemas use "*" for 10 and "#" for 11 (which is phone and computer keyboard friendly).
For fractions, the decimal 0.5 would be written in duodecimal as 0;6 (a half is six twelfths, just as it is five tenths in decimal).
If this is confusing, you can always use the dozenal/decimal calculator.
For numbers that go beyond 12, we would add a prefix to the value denoting the number of sets. So, for the numbers 13, 14, and 15, we'd write 11, 12, and 13. And for the numbers 22, 23, and 24, we'd
write 1X, 1E, and 20.
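The carrying scheme just described is easy to mechanize. Here is a minimal decimal-to-dozenal converter (a sketch; the symbol choices 'X' and 'E' follow the Andrews-style notation above):

```python
# Convert a non-negative decimal integer to dozenal, writing ten as 'X'
# and eleven as 'E'. Repeated division by 12 yields digits right-to-left.
DIGITS = "0123456789XE"

def to_dozenal(n):
    if n == 0:
        return "0"
    out = ""
    while n > 0:
        n, r = divmod(n, 12)
        out = DIGITS[r] + out   # remainders give digits right-to-left
    return out

print(to_dozenal(13), to_dozenal(22), to_dozenal(24))   # -> 11 1X 20
print(to_dozenal(358))                                   # -> 25X
```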
In terms of pronunciation, Donald P. Goodman, president of the Dozenal Society of America, says that X should be called "ten", E called "elv" and 10 pronounced "unqua." So, when counting, we'd say,
"...eight, nine, ten, elv, unqua."
Interestingly, in the 1973 episode "Little Twelvetoes" of the Schoolhouse Rock! television series, an alien child uses a base-12 system and pronounces the last three numbers "dek," "el" and "doh."
"Dek" was derived from the prefix "deca", while "el" was short for "eleven," and "doh" a shortening of "dozen." Many dozenalists have adopted this particular pronunciation system.
Now, to pronounce numbers greater than 12, like duodecimal 15, we would say doh-five, which is a compound of doh, which is twelve, and five. We can extend this for other numbers such as duodecimal
64, which would be pronounced as six-doh-four. If we were to reach and surpass the number EE (el-doh-el), we'd need a new word for the digits in the third column over.
The word for 144 decimal, or 100 dozenal, is "gros" (the 's' is silent). So, a three-digit dozenal number, such as 25X, would be pronounced as "two-gros-five-doh-dek." In decimal, this number
is 358.
Counting Fingers
Critics of the dozenal system say that it would undermine the benefits of finger counting.
But as dozenalists are happy to point out, each finger consists of three parts. So, starting with the index finger, and using the thumb as a pointer, we can immediately denote the first three digits
(working our way from the bottom to the top of the finger). Then, the middle finger can denote 4, 5, and 6, the ring finger 7, 8, and 9, and so on. Using this system, our two hands give us a total
of 24 numbers to work with. Some finger-counters work their way from left to right, designating the tips of their fingers 1, 2, 3, 4.
Even better, we can use our second hand to display the number of completed base 12's. Consequently, we can use our fingers to go up to 144 (12 x 12).
For example, if you place the thumb of one hand on the middle joint of that hand's middle finger (the count of five, here tallying five completed twelves, or 60 decimal), and do the same on the
other hand (five single units), the two hands together show 65 decimal.
Could We Ever Switch Over?
Unfortunately, converting to the dozenal system at this point would be exceptionally difficult, and over-the-top expensive. While the long-term benefits are obvious, it's probably not worth the
short-term pain. But living with a sub-optimal counting system from here to eternity seems sad.
That said, dozenalists like Donald Goodman say it's not completely impossible. He argues that converting the currency would be the first and most crucial step, followed by an organized education
campaign on the matter in the schools. (As an aside, and in regard to this last step, this is exactly how the metric system was popularized and taught in Canada; I vividly remember the day when,
as a child, our teacher came in and said, "Kids, from here on in, it's the metric system — no exceptions.")
Goodman is skeptical, however, that any one procedure could work everywhere, suggesting that it would have to be tailored to local circumstances.
"Most dozenalists believe that we should let dozenals speak for themselves," he told the Guardian. "As time goes on, and as more people learn about dozenals, more people will use them; after a while,
people won't want to use decimals anymore." No official, top-down change is really needed, he argues, except for things like money and legal recognition for dozenal measurement systems.
So, what do you think? Has the time come for the dozenal system?
Special thanks to Calvin Dvorsky for helping me write this article!
Sources: Dozenal Society of Great Britain, Dozenal Society of America, The Guardian.
Images: Shutterstock/ArtisticPhoto, Guardian, gorpub.
Outline of Class 35: More Binary Representation
Held: Monday, April 6, 1998
• If you haven't done so already, read the handout on IEEE representation of real numbers.
• Start reading chapter 8 of Bailey.
• Any questions on assignment five?
• Today's brown-bag lunch is on The Design of C++. C++ is an object-oriented language with a syntax not unlike Java's, but with some very different design decisions. I encourage you to attend (but I
won't be there due to a prior commitment).
• One disadvantage of the three standard representations of signed integers (signed magnitude, one's complement, two's complement) is that all three support only a fixed range of values.
• In a biased representation of a range of integers, you select a bias (offset) and then (traditionally) use the standard positive-only representation.
□ If the bias is b, you represent n using the positive-only representation of b+n.
□ To represent the numbers from -1 to 254 in one byte, you use a bias of 1.
□ To represent the numbers from -255 to 0 in one byte, you use a bias of 255.
• Alternatively, you can think of a biased representation as taking the series of bits, computing the corresponding positive integer, and then subtracting the bias to determine the actual value.
• The most typical bias is 2^(m-1), where m is the number of bits used to represent the number. This is called excess 2^(m-1).
□ For one byte, the bias is 128. This means that the smallest number we can represent is -128 and the largest is 127.
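The excess-(2^(m-1)) scheme for one byte can be sketched as follows (an illustration; the names are ours, not from the notes):

```python
# Excess-128 (biased) representation for m = 8 bits: the stored code is
# value + bias, read back as an ordinary unsigned integer.
BIAS = 128          # 2**(8 - 1)

def encode(n):      # representable range: -128 .. 127
    assert -BIAS <= n <= BIAS - 1
    return n + BIAS

def decode(code):   # stored codes: 0 .. 255
    return code - BIAS

print(encode(-128), encode(0), encode(127))   # -> 0 128 255
```

Note that biased codes sort in the same order as the values they represent, which is one reason this encoding is used for floating-point exponents later in these notes.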
• In most computers, characters are represented as integers, using a mapping between integers and characters (and back again).
• For example, one might decide that 'A' was 66, 'B' was 67, and so on and so forth.
• In designing such a code, you need to consider how many possible characters you wish to allow. This helps you determine how many bits or bytes to allow per character.
• It turns out that there are fewer than 128 different characters available on the standard US keyboard. So, we might use seven bits (even expanded to eight bits for an even byte) to represent our characters.
• However, as we incorporate other languages or other symbols (such as the copyright or registered trademark signs), we may need more bits and bytes.
• At one point, each manufacturer had its own encoding. This made transmission of data between machines more complicated than it should be. These days, there are standards.
• The standard on most US-based computers is ASCII, the American Standard Code for Information Interchange. It uses eight bits per character. You can determine the ASCII encoding by typing man
ascii on our HP's.
• At one time, IBM promoted EBCDIC (I have no idea what it stands for, perhaps "extended binary coding of diverse characters"; a reference tells me that it's "extended binary-coded decimal
interchange code"). One interesting aspect of EBCDIC is that it doesn't code the characters in sequence (that is, it's not guaranteed that if "A" has code n, then "B" has code n+1).
• The big coding standard these days is Unicode. Java supports it, and it's huge. I'm happy if you know it exists (you don't need to know the details). Unicode uses two bytes per character.
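The character-to-integer mapping is directly visible in most languages; for instance, in Python:

```python
# ord() gives a character's code, chr() inverts it. In ASCII, 'A' is 65,
# and the letters are coded in sequence (unlike EBCDIC).
print(ord('A'), ord('Z'))       # -> 65 90
print(chr(ord('A') + 1))        # -> B
assert chr(ord('c')) == 'c'     # the mapping round-trips
```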
• What if we want to deal with numbers that may have a fractional part (something after the decimal point)?
• We need to think about the meaning of bits after the point. Traditionally, we continue the meaning we use in decimal.
□ The first bit after the binary point is 2^-1. The next bit is 2^-2, and so on and so forth.
□ For example 0.1 is 1/2, 0.01 is 1/4, and 0.11 is 3/4.
• Let's try some exercises in conversion (and think about our conversion algorithm)
□ Fraction = decimal = binary
□ 7/16 = .4375 = ?
□ 1/3 = .333... = ?
□ 1/10 = .1 = ?
• Observe that this changes the numbers we can represent with a finite number of digits. For example, our handout suggests that 2/5 cannot be represented in a finite number of binary digits.
• Nonetheless, this seems like the best way to represent numbers with fractional parts.
□ Are there others? Yes. One might use sets of four bits to represent decimal digits. This is clearly less efficient.
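The conversion algorithm the exercises above hint at can be sketched as: repeatedly double the fractional part, and each integer carry that appears is the next bit after the point (an illustration, not from the notes):

```python
# Convert a fraction in [0, 1) to a binary-point string, up to max_bits
# digits. Doubling shifts the binary expansion left by one place, so the
# integer part that falls out is the next bit.

def frac_to_binary(x, max_bits=20):
    assert 0 <= x < 1
    bits = ""
    while x > 0 and len(bits) < max_bits:
        x *= 2
        if x >= 1:
            bits += "1"
            x -= 1
        else:
            bits += "0"
    return "0." + (bits or "0")

print(frac_to_binary(0.4375))   # -> 0.0111   (7/16 terminates)
print(frac_to_binary(0.1))      # 1/10 repeats: 0.0001100110011...
```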
• However, there are still further design decisions to make. For example, how do we place the decimal point?
• In fixed-precision or fixed-point representation, you pick some number of bits that come after the decimal point, and use those to represent the factional part.
• This limits your accuracy for small numbers. For example, if you've only allowed three bits after the decimal point, your accuracy is limited to about 1/8. This means that you'd represent both (1
/16) and (-3/64) as 0.000.
• This limits the overall size of your numbers. For example, if you've only allocated 13 bits to the whole part, your largest number can't be bigger than 2^13-1 or about 8,000.
• However, computation is relatively cheap. You can simply use standard integer computation and then shift the decimal point.
• On the other hand, this can limit accuracy.
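A tiny fixed-point sketch with 3 bits after the point (resolution 1/8), matching the accuracy discussion above (the names here are illustrative):

```python
# Fixed-point values are stored as integers scaled by 2^FRAC_BITS; a
# multiply is ordinary integer multiplication followed by one shift to
# re-align the point.
FRAC_BITS = 3
SCALE = 1 << FRAC_BITS          # 8

def to_fixed(x):
    return int(x * SCALE)       # truncates: both 1/16 and -3/64 become 0

def fixed_mul(a, b):
    return (a * b) >> FRAC_BITS # shift restores the scale

half = to_fixed(0.5)            # stored as 4
print(fixed_mul(half, half))    # -> 2  (i.e. 2/8 = 0.25)
print(to_fixed(1 / 16))         # -> 0  (below the 1/8 resolution)
```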
• To handle the aforementioned problems, you might instead let the decimal point move ("float") and use extra bits to indicate where the decimal point is positioned.
• In floating-point representation, you use something similar to scientific notation (+/- n.nnnn * 10^x), and represent
□ the digits,
□ the exponent, and
□ the sign separately.
• For example, in decimal .125 (that is, 1.25 * 10^-1) might be represented as
□ + for the sign
□ 125 for the digits
□ -1 for the exponent (10^-1)
• As in the cases above, some things get a little bit confusing as we move to binary. In particular, our exponents are powers of two, instead of powers of ten.
• So, you would not represent .125 as
□ + for the sign
□ 01111101 for the 125
□ 11111111 for the -1 (in two's complement)
• Instead, you might represent .125 as
□ + for the sign
□ 00000001 for 1
□ 11111101 for exponent (-3 in two's complement)
□ Because .125 is 1/8 or .001 in fixed-precision binary.
• It turns out that arithmetic is complicated in floating point. Plauger tells us that the basic floating point operations take as much microcode to implement as everything else on a typical small
computer combined.
• Designing floating point representations (and computation) is still nontrivial. You must still concern yourselves with a number of issues.
□ How many bits will you use for each component?
□ How will you represent each component? Signed-magnitude, two's complement, as a biased value? Will you use the same representation for each component, or different ones?
□ Will you use a separate bit for the sign (in effect, doing signed-magnitude)?
• The IEEE (Institute for Electrical and Electronics Engineers, or some such) serves as a standards body for many issues in computing. They issue language, protocol, design, and other standards.
□ (The IEEE does a number of other things, but that is the most pertinent to our current concerns.)
□ One of their most widely used standards is the IEEE Standard for Binary Floating-Point Arithmetic (IEEE standard 754), which discusses not just representation of floating point numbers, but
also computation with those numbers. This standard was released in 1985.
□ As suggested earlier, some of the first issues in the design of a floating point representation are how to allocate bits and represent components.
• The IEEE single-precision representation uses 32 bits, with
□ one bit for the sign (effectively using signed-magnitude)
□ 23 bits for the mantissa
□ eight bits for the exponent
• The actual order is: the sign bit first, then the eight exponent bits, then the 23 mantissa bits.
• The mantissa is represented as an unsigned value. The base position of the decimal is right before the mantissa, although the exponent can shift it.
• The exponent uses a 127-bias representation.
• But there are also some tricks ...
• The smallest exponent allowed is -126 (represented as 00000001) and the largest exponent allowed is 127 (represented as 11111110).
□ Observe that this leaves 00000000 and 11111111 as undefined exponent strings.
□ 00000000 is used for "close to zero" and affects other aspects of the representation
□ 11111111 is used for "error"
• In the standard representations (those not close to zero), a special trick is used to get one more bit of accuracy.
□ For nonzero numbers, it's clear that in standard scientific notation, using binary (+/- b.bbb * 2^x), the mantissa will always be between 1 and 2.
☆ If it's less than one, we should simply shift the bits left and decrease the exponent
☆ If it's more than two, we should shift the bits right and increase the exponent
□ So, we can just assume the leftmost bit is 1 and not bother including it in our representation.
□ This bit is called the hidden bit.
• In the representations of numbers close to zero, the hidden bit isn't used.
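One way to inspect the three fields described above on a real machine (a sketch using Python's standard struct module; field names are ours):

```python
# Decode a Python float as IEEE single precision and split out the sign,
# the excess-127 exponent, and the 23 stored mantissa bits (the hidden
# leading 1 is not stored).
import struct

def fields(x):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign     = bits >> 31
    exponent = (bits >> 23) & 0xFF       # biased by 127
    mantissa = bits & 0x7FFFFF           # fraction bits only
    return sign, exponent, mantissa

print(fields(1.0))      # -> (0, 127, 0)    1.0   = +1.0 * 2^(127-127)
print(fields(0.125))    # -> (0, 124, 0)    0.125 = +1.0 * 2^(-3)
```

Decoding a few values this way is a handy check on the exercises that follow.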
• Exercises
□ What is 0 1000 1000 00000000000000000000000?
□ What is 1 1000 1000 00000000000000000000000?
□ What is 0 0000 1000 00000000000000000000000?
□ What is 0 1000 1000 01000000000000000000000?
□ How do you represent 0?
□ How do you represent 1?
□ What is the smallest number you can represent?
□ What is the largest number you can represent?
Einstein's Special Theory of Relativity Get Warp Speed Extension
New theory describes faster than light travel, could explain CERN's results
Some of the greatest physicists of the twentieth century, including Albert Einstein, considered the speed of light a sort of universal "speed limit". But over the past couple of decades, physicists
have theorized that it should be possible to break this law and get away with it -- to travel faster than the speed of light.
I. CERN Results Potentially Described
One of several possible routes to faster-than-light travel was potentially demonstrated when researchers at CERN, the European physics organization known for maintaining the Large Hadron Collider,
sent high-energy particles through the Earth's crust from Geneva, Switzerland to INFN Gran Sasso Laboratory in Italy. In a result that is today highly controversial, the team claimed that the
particles were observed travelling in excess of the speed of light.
Now physics theory may finally be catching up. Math researchers at the University of Adelaide -- in South Australia -- have developed new formulas to describe the relationship
between energy, mass, and velocity (which incorporates length and time) for objects traveling faster than the speed of light. The formulas modify Einstein's Theory of Special Relativity, a
fundamental pillar of our understanding of the universe.
[Einstein formulated his Theory of Special Relativity in 1905. [Image Source: AP]]
Math professor Jim Hill, a co-author of the paper, writes, "Questions have since been raised over the experimental results [from CERN] but we were already well on our way to successfully formulating
a theory of special relativity, applicable to relative velocities in excess of the speed of light."
He elaborates, "Our approach is a natural and logical extension of the Einstein Theory of Special Relativity, and produces anticipated formulae without the need for imaginary numbers or complicated
physics."
The study's other co-author, Dr. Barry Cox, adds, "We are mathematicians, not physicists, so we've approached this problem from a theoretical mathematical perspective... Our paper doesn't try and
explain how this could be achieved, just how equations of motion might operate in such regimes."
II. Placating the Critics
The authors obviously recognize the controversy surrounding both experimental and theoretical work challenging the light-speed limitation attached to the special theory of relativity. They write in
the abstract, "In this highly controversial topic, our particular purpose is not to enter into the merits of existing theories, but rather to present a succinct and carefully reasoned
account of a new aspect of Einstein's theory of special relativity, which properly allows for faster than light motion."
[Many believe faster-than-light travel may be possible. [Image Source: LucasFilm, Ltd.]]
The paper proposes two sets of equations -- one based on an invariant set of "frame transitions", the other based on a "frame transition" with the invariance limitation removed. The authors suspect
that if faster than light travel is possible, that the physical behavior of the faster-than-light travelling object is described by one of these equations.
Note, such work is relatively independent from forms of faster-than-light travel that do not violate Einstein's Theory of Special Relativity, such as warping space via a massive energy source.
The paper was published [abstract] in the prestigious peer-reviewed journal Proceedings of the Royal Society A.
Source: RSPA
Orthogonal polynomial contrasts (unequal spacing | N)
SAS Macro Programs: orpoly
Version: 1.1 (10 Jun 2003)
Michael Friendly
York University
The orpoly macro
Orthogonal polynomial contrasts (unequal spacing | N)
For ANOVA models with quantitative factor variables, it is most useful to describe and analyse the factor effects using tests for linear, quadratic, etc. effects. These tests could be carried out
with a regression model, but orthogonal polynomial contrasts provide a way to do the same tests in an ANOVA framework with PROC GLM (or PROC MIXED). But you need to find the appropriate contrast coefficients.
The ORPOLY macro finds contrast coefficients for orthogonal polynomials for testing a quantitative factor variable, and constructs CONTRAST (or ESTIMATE) statements (for use with PROC GLM or PROC
MIXED) using these values.
This is most useful when either (a) the factor levels are unequally spaced (Trials=1 2 4 10), or (b) the sample sizes at the different levels are unequal. In these cases, the 'standard' orthogonal
polynomial coefficients cannot be used. The ORPOLY macro uses the SAS/IML orpoly() function to find the correct values, and to construct the required CONTRAST (or ESTIMATE) statements.
When the factor levels are equally spaced, *and* sample sizes are equal, the POLY macro provides a simpler way to generate the contrast coefficients, and the associated INTER macro generates
contrasts for interactions among the polynomial contrasts.
The ORPOLY macro uses the SAS/IML orpoly() function to find the correct values, and to construct the required CONTRAST (or ESTIMATE) statements.
The ORPOLY macro is defined with keyword parameters. The VAR= parameter must be specified. The arguments may be listed within parentheses in any order, separated by commas. For example:
%orpoly(var=A, file=temp);
Default values are shown after the name of each parameter.
The name of the input data set [Default: DATA=_LAST_]
The name(s) of quantitative factor variable(s) for orthogonal polynomial contrasts.
Maximum degree of orthogonal polynomial contrasts
Fileref for contrast statements. The default, FILE=PRINT, simply prints the generated statements in the listing file.
To use the generated contrast statements directly following a PROC GLM step, use the FILE= parameter to assign a fileref and create temporary file, which may be used in a GLM step.
Type of statement generated: CONTRAST, ESTIMATE, or BOTH [Default: TYPE=CONTRAST]
Generate some data, with linear & quadratic A effects and a linear B effect. Levels of B are unequally spaced.
data testit;
do a=1 to 5;
	do b=1, 5, 9, 13;
		do obs=1 to 2;
			y = a + a*a + b + 5*normal(0);
			output;
			end;
		end;
	end;
run;
filename poly 'orpoly.tst' mod;
%orpoly(data=testit, var=a b, file=poly);
proc glm data=testit;
class a b;
model y=a b a*b;
%include poly;
The ORPOLY macro generates the following lines, which are used in the PROC GLM step:
contrast "A-lin" A -0.22361 -0.11180 -0.00000 0.11180 0.22361;
contrast "A-quad" A 0.18898 -0.09449 -0.18898 -0.09449 0.18898;
contrast "A-3rd" A -0.11180 0.22361 0.00000 -0.22361 0.11180;
contrast "B-lin" B -0.21213 -0.07071 0.07071 0.21213;
contrast "B-quad" B 0.15811 -0.15811 -0.15811 0.15811;
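For readers working outside SAS, the coefficients above can be reproduced by a weighted Gram-Schmidt orthogonalization of the powers 1, x, x^2, ..., normalized so that the weighted sum of squares of each contrast is 1. The sketch below is illustrative only, not the macro's actual SAS/IML code; the function name `orpoly` simply mirrors the IML function.

```python
import math

def orpoly(levels, ns, maxdeg):
    """Weighted Gram-Schmidt on 1, x, x**2, ... using the inner product
    <u, v> = sum_i n_i * u_i * v_i, so each returned contrast c
    satisfies sum_i n_i * c_i**2 = 1 (matching SAS orpoly scaling)."""
    basis = []
    for d in range(maxdeg + 1):
        v = [x ** d for x in levels]
        for u in basis:  # remove components along lower-degree contrasts
            c = sum(n * a * b for n, a, b in zip(ns, v, u))
            v = [a - c * b for a, b in zip(v, u)]
        norm = math.sqrt(sum(n * a * a for n, a in zip(ns, v)))
        basis.append([a / norm for a in v])
    return basis[1:]  # drop the constant column

# A has 5 levels, each observed 4 (B levels) * 2 (obs) = 8 times;
# B has 4 unequally spaced levels, each observed 5 * 2 = 10 times.
a_lin, a_quad, a_cub = orpoly([1, 2, 3, 4, 5], [8] * 5, 3)
b_lin, b_quad = orpoly([1, 5, 9, 13], [10] * 4, 2)
print([round(c, 5) for c in a_lin])  # matches the A-lin contrast above
```

With unequal spacing or unequal cell counts, the same routine still applies: the weights `ns` carry the sample sizes, which is exactly why the tabulated "standard" coefficients cannot be used.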
See also
dummy Macro to create dummy variables
stat2dat Convert summary dataset to raw data equivalent | {"url":"http://www.datavis.ca/sasmac/orpoly.html","timestamp":"2014-04-16T04:11:58Z","content_type":null,"content_length":"5914","record_id":"<urn:uuid:6b2275ab-2119-4701-8bda-87847f8577fb>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00222-ip-10-147-4-33.ec2.internal.warc.gz"} |
OCR for page 23
Information Retrieval: Finding Needles in Massive Haystacks
Susan T. Dumais, Bellcore

1.0 Information Retrieval: the Promise and Problems

This paper describes some statistical challenges we
encountered in designing computer systems to help people retrieve information from online textual databases. I will describe in detail the use of a particular high-dimensional vector representation
for this task. Lewis (this volume) describes more general statistical issues that arise in a variety of information retrieval and filtering applications. To get a feel for the size of information
retrieval and filtering problems, consider the following example. In 1989, the Associated Press News-wire transmitted 266 megabytes of ascii text representing 84,930 articles containing 197,608
unique words. A term-by-document matrix describing this collection has 17.7 billion cells. Luckily the matrix is sparse, but we still have large p (197,000 variables) and large n (85,000
observations). And this is just one year's worth of short articles from one source. The promise of the information age is that we will have tremendous amounts of information readily available at our
fingertips. Indeed the World Wide Web (WWW) has made terabytes of information available at the click of a mouse. The reality is that it is surprisingly difficult to find what you want when you want
it! Librarians have long been aware of this problem. End users of online catalogs or the WWW, like all of us, are rediscovering this with alarming regularity. Why is it so difficult to find
information online? A large part of the problem is that information retrieval tools provide access to textual data whose meaning is difficult to model. There is no simple relational database model
for textual information. Text objects are typically represented by the words they contain or the words that have been assigned to them and there are hundreds of thousands such terms. Most text
retrieval systems are word based. That is, they depend on matching words in users' queries with words in database objects. Word matching methods are quite efficient from a computer science point of
view, but not very effective from the end users' perspective because of the common vocabulary mismatch or verbal disagreement problem (Bates, 1986; Furnas et al., 1987). One aspect of this problem
(that we all know too well) is that most queries retrieve irrelevant information. It is not unusual to find that 50% of the information retrieved in response to a query is irrelevant. Because a
single word often has more than one meaning (polysemy), irrelevant materials will be retrieved. A query about "chip", for example, will
return articles about semiconductors, food of various kinds, small pieces of wood or stone, golf and tennis shots, poker games, people named Chip, etc. The other side of the problem is that we
miss relevant information (and this is much harder to know about!). In controlled experimental tests, searches routinely miss 50-80% of the known relevant materials. There is tremendous diversity in
the words that people use to describe the same idea or concept (synonymy). We have found that the probability that two people assign the same main content descriptor to an object is 10-20%, depending
some on the task (Furnas et al., 1987). If an author uses one word to describe an idea and a searcher another word to describe the same idea, relevant material will be missed. Even a simple concrete
object like a "viewgraph" is also called a "transparency", "overhead", "slide", "foil", and so on. Another way to think about these retrieval problems is that word-matching methods treat words as if
they are uncorrelated or independent. A query about "automobiles" is no more likely to retrieve an article about "cars" than one about "elephants" if neither article contains precisely the word automobile.
This property is clearly untrue of human memory and seems undesirable in online information retrieval systems (see also Caid et al., 1995). A concrete example will help illustrate the problem.

2.0 A Small Example

A textual database can be represented by means of a term-by-document matrix. The database in this example consists of the titles of 9 Bellcore Technical Memoranda. There are two classes
of documents: 5 about human-computer interaction and 4 about graph theory.

Title Database:
c1: Human machine interface for Lab ABC computer applications
c2: A survey of user opinion of computer system response time
c3: The EPS user interface management system
c4: System and human system engineering testing of EPS
c5: Relation of user-perceived response time to error measurement
m1: The generation of random, binary, unordered trees
m2: The intersection graph of paths in trees
m3: Graph minors IV: Widths of trees and well-quasi-ordering
m4: Graph minors: A survey

The term-by-document
matrix corresponding to this database is shown in Table 1 for terms occurring in more than one document. The individual cell entries represent the frequency with which a term occurs in a document. In
many information retrieval applications these frequencies are transformed to reflect the ability of words to discriminate among documents. Terms that are very discriminating are given high weights
and undiscriminating terms are given low weights. Note also the large number of 0 entries in the matrix: most words do not occur in most documents, and most documents do not contain most words.
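One common way to implement such discrimination weights is a tf-idf style transform; the paper does not commit to a specific scheme, so the sketch below is illustrative only.

```python
import math

# Toy document-frequency counts for this 9-document collection:
# term -> number of documents containing it.
N = 9
df = {"system": 3, "human": 2, "survey": 2, "minors": 2}

def weight(tf, term):
    """Weight a raw term frequency by log(N / df): terms that appear in
    many documents (high df) discriminate poorly and get low weights."""
    return tf * math.log(N / df[term])

print(round(weight(2, "system"), 3))  # 2 * ln(9/3) = 2.197
```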
Table 1. Sample Term-by-Document Matrix (12 terms × 9 documents)

           C1  C2  C3  C4  C5  M1  M2  M3  M4
human       1   0   0   1   0   0   0   0   0
interface   1   0   1   0   0   0   0   0   0
computer    1   1   0   0   0   0   0   0   0
user        0   1   1   0   1   0   0   0   0
system      0   1   1   2   0   0   0   0   0
response    0   1   0   0   1   0   0   0   0
time        0   1   0   0   1   0   0   0   0
EPS         0   0   1   1   0   0   0   0   0
survey      0   1   0   0   0   0   0   0   1
trees       0   0   0   0   0   1   1   1   0
graph       0   0   0   0   0   0   1   1   1
minors      0   0   0   0   0   0   0   1   1
Consider a user query about "human computer interaction." Using the oldest and still most common Boolean retrieval method, users specify the relationships among query terms using the logical
operators AND, OR and NOT, and documents matching the request are returned. More flexible matching methods which allow for graded measures of similarity between queries and documents are becoming
more popular. Vector retrieval, for example, works by creating a query vector and computing its cosine or dot product similarity to the document vectors (Salton and McGill, 1983; van Rijsbergen,
1979). The query vector for the query "human computer interaction" is shown in the table below. Table 2. Query vector for "human computer interaction", and matching documents Query C1 C2 C3 C4 C5 M1
M2 M3 M4 1 human 1 0 0 1 0 0 0 0 0 0 interface 1 0 1 0 0 0 0 0 0 1 computer 1 1 0 0 0 0 0 0 0 0 user 0 1 1 0 1 0 0 0 0 0 system 0 1 1 2 0 0 0 0 0 0 response 0 1 0 0 1 0 0 0 0 0 time 0 1 0 0 1 0 0 0 0
0 EPS 0 0 1 1 0 0 0 0 0 0 survey 0 1 0 0 0 0 0 0 1 0 trees 0 0 0 0 0 1 1 1 0 0 graph 0 0 0 0 0 0 1 1 1 0 minors 0 0 0 0 0 0 0 1 1 This query retrieves three documents about human-computer interaction
(C1, C2 and C4) which could be ranked by similarity score. But, it also misses two other relevant documents (C3 and C5) because the authors wrote about users and systems rather than humans and
computers. Even the more flexible vector methods are still word-based and plagued by the problem of verbal disagreement.
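This failure mode is easy to see in code. Below is a minimal Python sketch of vector retrieval on the Table 1 matrix, using raw term frequencies and cosine similarity (no term weighting); it is illustrative only, not the system described in the paper.

```python
import math

terms = ["human", "interface", "computer", "user", "system", "response",
         "time", "EPS", "survey", "trees", "graph", "minors"]
# Columns of Table 1: raw term frequencies for the 9 titles.
docs = {
    "c1": [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "c2": [0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0],
    "c3": [0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0],
    "c4": [1, 0, 0, 0, 2, 0, 0, 1, 0, 0, 0, 0],
    "c5": [0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0],
    "m1": [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
    "m2": [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0],
    "m3": [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1],
    "m4": [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Query "human computer interaction": only "human" and "computer" are terms.
query = [1 if t in ("human", "computer") else 0 for t in terms]
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
hits = [d for d in ranked if cosine(query, docs[d]) > 0]
print(hits)  # ['c1', 'c2', 'c4']: c3 and c5 are missed despite being relevant
```

Exactly as in the text: word overlap finds c1, c2 and c4, while c3 and c5 score zero because they use "user" and "system" instead of "human" and "computer".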
A number of methods have been proposed to overcome this kind of retrieval failure including: restricted indexing vocabularies, enhancing user queries using thesauri, and various AI knowledge
representations. These methods are not generally effective and can be time-consuming. The remainder of the paper will focus on a powerful and automatic statistical method, Latent Semantic Indexing,
that we have used to uncover useful relationships among terms and documents and to improve retrieval.

3.0 Latent Semantic Indexing (LSI)

Details of the LSI method are presented in Deerwester
et al. (1990) and will only be summarized here. We begin by viewing the observed term-by-document matrix as an unreliable estimate of the words that could have been associated with each document. We
assume that there is some underlying or latent structure in the matrix that is partially obscured by variability in word usage. There will be structure in this matrix in so far as rows (terms) or
columns (documents) are not independent. It is quite clear by looking at the matrix that the non-zero entries cluster in the upper left and lower right corners of the matrix. Unlike word-matching
methods which assume that terms are independent, LSI capitalizes on the fact that they are not. We then use a reduced or truncated Singular Value Decomposition (SVD) to model the structure in the
matrix (Stewart, 1973). SVD is closely related to Eigen Decomposition, Factor Analysis, Principle Components Analysis, and Linear Neural Nets. We use the truncated SVD to approximate the
term-by-document matrix using a smaller number of statistically derived orthogonal indexing dimensions. Roughly speaking, these dimensions can be thought of as artificial concepts representing the
extracted common meaning components of many different terms and documents. We use this reduced representation rather than surface level word overlap for retrieval. Queries are represented as vectors
in the reduced space and compared to document vectors. An important consequence of the dimension reduction is that words can no longer be independent; words which are used in many of the same
contexts will have similar coordinates in the reduced space. It is then possible for user queries to retrieve relevant documents even when they share no words in common. In the example from Section
2, a two-dimensional representation nicely separates the human-computer interaction documents from the graph theory documents. The test query now retrieves all five relevant documents and none of the
graph theory documents. In several tests, LSI provided 30% improvements in retrieval effectiveness compared with the comparable word matching methods (Deerwester et al., 1990; Dumais, 1991). In most
applications, we keep k~100-400 dimensions in the reduced representation. This is a large number of dimensions compared with most factor analytic applications! However, there are many fewer
dimensions than unique words (often by several orders of magnitude) thus providing the desired retrieval benefits. Unlike many factor analytic applications, we make no attempt to rotate or interpret
the underlying dimensions. For information retrieval we simply want to represent terms, documents, and queries in a way that avoids the unreliability, ambiguity and redundancy of individual terms as
A graphical representation of the SVD is shown below in Figure 1. The rectangular term-by-document matrix, X, is decomposed into the product of three matrices, X = T0 S0 D0', such that T0 and D0
have orthonormal columns, S0 is diagonal, and r is the rank of X. This is the singular value decomposition of X. T0 and D0 are the matrices of left and right singular vectors and S0 is the diagonal
matrix of singular values which by convention are ordered by decreasing magnitude. Figure 1. Graphical representation of the SVD of a term-by-document matrix. Recall that we do not want to
reconstruct the term-by-document matrix exactly. Rather, we want an approximation that captures the major associational structure but at the same time ignores surface level variability in word
choice. The SVD allows a simple strategy for an optimal approximate fit. If the singular values of S0 are ordered by size, the first k largest may be kept and the remainder set to zero. The product
of the resulting matrices is a matrix which is only approximately equal to X, and is of rank k (Figure 2). The matrix is the best rank-k approximation to X in the least squares sense. It is this
reduced model that we use to approximate the data in the term-by-document matrix. We can think of LSI retrieval as word matching using an improved estimate of the term-document associations using ,
or as exploring similarity neighborhoods in the reduced k-dimensional space. It is the latter representation that we work with-each term and each document is represented as a vector in k-space. To
process a query, we first place a query vector in k-space and then look for nearby documents (or terms).
Figure 2. Graphical representation of the reduced or truncated SVD

4.0 Using LSI For Information Retrieval

We have used LSI on many text collections (Deerwester, 1990; Dumais, 1991; Dumais, 1995). Table 3 summarizes the characteristics of some of these collections.

Table 3. Example Information Retrieval data sets

database      ndocs   nterms (>1 doc)   non-zeros   density   cpu for svd; k=100
MED            1033          5831           52012      .86%      2 mins
TM             6535         16637          327244      .30%     10 mins
ENCY          30473         75714         3071994      .13%     60 mins
TREC-sample   68559         87457        13962041      .23%      2 hrs
TREC         742331        512251        81901331      .02%        ——

Consider the MED
collection, for example. This collection contains 1033 abstracts of medical articles and is a popular test collection in the information retrieval research community. Each abstract is automatically
analyzed into words resulting in 5831 terms which occur in more than one document. This generates a 1033 × 5831 matrix. Note that the matrix is very sparse-fewer than 1% of the cells contain non-zero
values. The cell entries are typically transformed using a term weighting scheme. Word-matching methods would use this matrix. For LSI, we then compute the truncated SVD of the matrix keeping the k
largest singular values and the corresponding left and right singular vectors. For a set of 30 test queries, LSI (with k=100) is 30% better than the comparable word-matching method (i.e., using the
raw matrix with no dimension reduction) in retrieving relevant documents and omitting irrelevant ones. The most time consuming operation in the LSI analysis is the computation of the truncated SVD.
However, this is a one time cost that is incurred when the collection is indexed and
not for every user query. Using sparse-iterative Lanczos code (Berry, 1992) we can compute the SVD for k=100 in the MED example in 2 seconds on a standard Sun Sparc 10 workstation. The
computational complexity of the SVD increases rapidly as the number of terms and documents increases, as can be seen from Table 3. Complexity also increases as the number of dimensions in the
truncated representation increases. Increasing k from 100 to 300 increases the CPU times by a factor of 9-10 compared with the values shown in Table 3. We find that we need this many dimensions for
large heterogeneous collections. So, for a database of 68k articles with 14 million non-zero matrix entries, the initial SVD takes about 20 hours for k=300. We are quite pleased that we can compute
these SVDs with no numerical or convergence problems on standard workstations. However, we would still like to analyze larger problems more quickly. The largest SVD we can currently compute is about
100,000 documents. For larger problems we run into memory limits and usually compute the SVD for a sample of documents. This is represented in the last two rows of Table 3. The TREC data sets are
being developed as part of a NIST/ARPA Workshop on information retrieval evaluation using larger databases than had previously been available for such purposes (see Harman, 1995). The last row (TREC)
describes the collection used for the adhoc retrieval task. This collection of 750k documents contains about 3 gigabytes of ascii text from diverse sources like the APNews wire, Wall Street Journal,
Ziff-Davis Computer Select, Federal Register, etc. We cannot compute the SVD for this matrix and have had to subsample (the next to last row, TREC-sample). Retrieval performance is quite good even
though the reduced LSI space is based on a sample of less than 10% of the database (Dumais, 1995). We would like to evaluate how much we lose by doing so but cannot given current methods on standard
hardware. While these collections are large enough to provide viable test suites for novel indexing and retrieval methods, they are still far smaller than those handled by commercial information
providers like Dialog, Mead or Westlaw.

5.0 Some Open Statistical Issues

Choosing the number of dimensions. In choosing the number of dimensions to keep in the truncated SVD, we have to date been
guided by how reasonable the matches look. Keeping too few dimensions fails to capture important distinctions among objects; keeping more dimensions than needed introduces the noise of surface level
variability in word choice. For information retrieval applications, the singular values decrease slowly and we have never seen a sharp elbow in the curve to suggest a likely stopping value. Luckily,
there is a range of values for which retrieval performance is quite reasonable. For some test collections we have examined retrieval performance as a function of number of dimensions. Retrieval
performance increases rapidly as we move from only a few factors up to a peak and then decreases slowly as the number of factors approaches the number of terms at which point we are back at
word-matching performance. There is a reasonable range of values around the peak for which retrieval performance is well above word matching levels.
Size and speed of SVD. As noted above, we would like to be able to compute large analyses faster. Since the algorithm we use is iterative, the time depends somewhat on the structure of the matrix. In
practice, the complexity appears to be O(4*z + 3.5*k), where z is the number of non-zeros in the matrix and k is the number of dimensions in the truncated representation. In the previous section, we
described how we analyze large collections by computing the SVD of only a small random sample of items. The remaining items are "folded in" to the existing space. This is quite efficient
computationally, but in doing so we eventually lose representational accuracy, especially for rapidly changing collections. Updating the SVD. An alternative to folding in (and to recomputing the
SVD) is to update the existing SVD as new documents or terms are added. We have made some progress on methods for updating the SVD (Berry et al., 1995), but there is still a good deal of work to be
done in this area. This is particularly difficult when a new term or document influences the values in other rows or columns-e.g., when global term weights are computed or when lengths are
normalized. Finding near neighbors in high dimensional spaces. Responding to a query involves finding the document vectors which are nearest the query vector. We have no efficient methods for doing
so and typically resort to brute force, matching the query to all documents and sorting them in decreasing order of similarity to the query. Methods like kd-trees do not work well in several hundred
dimensions. Unlike the SVD which is computed once, these query processing costs are seen on every query. On the positive side, it is trivial to parallelize the matching of the query to the document
vectors by putting subsets of the documents on different processors. Other models of associative structure. We chose a dimensional model as a compromise between representational richness and
computational tractability. Other models like nonlinear neural nets or overlapping clusters may better capture the underlying semantic structure (although it is not at all clear what the appropriate
model is from a psychological or linguistic point of view) but were computationally intractable. Clustering time, for example, is often quadratic in the number of documents and thus prohibitively
slow for large collections. Few researchers even consider overlapping clustering methods because of their computational complexity. For many information retrieval applications (especially those that
involve substantial end user interaction), approximate solutions with much better time constants might be quite useful (e.g., Cutting et al., 1992). 6.0 Conclusions Information retrieval and
filtering applications involve tremendous amounts of data that are difficult to model using formal logics such as relational databases. Simple statistical approaches have been widely applied to these
problems for moderate-sized databases with promising results. The statistical approaches range from parameter estimation to unsupervised analysis of structure (of the kind described in this paper) to
supervised learning for filtering applications. (See also Lewis, this volume.) Methods for handling more complex models and for extending the simple models to massive data sets are needed for a wide
variety of real world information access and management applications.
7.0 References

Bates, M.J. Subject access in online catalogs: A design model. Journal of the American Society for Information Science, 1986, 37(6), 357-376.

Berry, M.W. Large scale singular value computations. International Journal of Supercomputer Applications, 1992, 6, 13-49.

Berry, M.W. and Dumais, S.T. Using linear algebra for intelligent information retrieval. SIAM Review, 1995.

Berry, M.W., Dumais, S.T. and O'Brien, G.W. The computational complexity of alternative updating approaches for an SVD-encoded indexing scheme. In Proceedings of the Seventh SIAM Conference on Parallel Processing for Scientific Computing, 1995.

Caid, W.R., Dumais, S.T. and Gallant, S.I. Learned vector space models for information retrieval. Information Processing and Management, 1995, 31(3), 419-429.

Cutting, D.R., Karger, D.R., Pederson, J.O. and Tukey, J.W. Scatter/Gather: A cluster-based approach to browsing large document collections. In Proceedings of ACM SIGIR'92, 318-329.

Deerwester, S., Dumais, S.T., Landauer, T.K., Furnas, G.W. and Harshman, R.A. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 1990, 41(6), 391-407.

Dumais, S.T. Improving the retrieval of information from external sources. Behavior Research Methods, Instruments and Computers, 1991, 23(2), 229-236.

Dumais, S.T. Using LSI for information filtering: TREC-3 experiments. In D. Harman (Ed.), Overview of the Third Text REtrieval Conference (TREC3). National Institute of Standards and Technology Special Publication 500-225, 1995, pp. 219-230.

Furnas, G.W., Landauer, T.K., Gomez, L.M. and Dumais, S.T. The vocabulary problem in human-system communication. Communications of the ACM, 1987, 30(11), 964-971.

Lewis, D. Information retrieval and the statistics of large data sets, [this volume].

Harman, D. (Ed.). Overview of the Third Text REtrieval Conference (TREC3). National Institute of Standards and Technology Special Publication 500-225, 1995.

Salton, G. and McGill, M.J. Introduction to Modern Information Retrieval. McGraw-Hill, 1983.

Stewart, G.W. Introduction to Matrix Computations. Academic Press, 1973.

van Rijsbergen, C.J. Information Retrieval. Butterworths, London, 1979.
Data Structure
Notes by Miss. Rohini A. Shinde
Introduction to Data Structure
Data structures are used in almost every program or software system. Specific data structures are essential ingredients of many efficient algorithms, and make possible the management of huge amounts
of data, such as large databases and internet indexing services. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor
in software design.
Data are simply values or set of values.
A data item refers to a single unit of values.
Data items that are divided into subitems are called group items; those which are not are called elementary items. Computer science is concerned with the study of data, which involves:
Machine that hold the data.
Technologies used for processing data.
Methods for obtaining information from data.
Structure for representing data.
Data are simply values or sets of values. A data item refers to a single unit of values. Data items that are divided into subitems are called GROUP items.
Data items which are not divided into subitems are called ELEMENTARY items.
For example:
The name of a student can be divided into three subitems: first name, middle name, and last name. So name is a GROUP item.
The PIN code cannot be divided into subitems and is therefore an ELEMENTARY item.
Raw data is of little value in its present form unless it is organized into a meaningful format. If we organize or process data so that it reflects some meaning, then this meaningful or processed
data is called information.
Data type:
Data type is a term used to describe the type of information that can be processed by a computer system and which is supported by a programming language. More formally, we define a data type as "a term
which refers to the kind of data that a variable may hold". Several different types of data can be processed by a computer system, e.g. numeric data, text data, video data, audio data, spatial
data, etc. A brief classification of data types is as shown in the figure.
Data type
- Built-in (primary): int, real, char, Boolean
- User defined: Enumeration, union, structure, class
- Derived: Array, Function, Pointer
Data Structure: A data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently. When we define a data structure we are in fact creating a new data
type of our own, i.e. using predefined types or previously user-defined types. The basic types of data structures include files, lists, arrays, trees, etc. Different kinds of data structures are
suited to different kinds of applications, and some are highly specialized to specific tasks. The choice of a particular data structure for an application depends upon two factors:
1. Structure must reflect the actual relationship of the data in the real world.
2. The structure should be simple enough so that one can efficiently process the data when necessary.
Classification of Data Structure: Data structures are normally classified into two categories:
1. Primitive Data Structures: Primitive data structures are native to the machine's hardware. Some primitive data structures are:
· Integer
· Character
· Real Number
· Logical Number
· Pointer
2. Non-primitive Data Structures: These types of data structures are derived from the primitive data structures. Some non-primitive data structures are:
· Array
· List
· Files
· Tree
· Graph
Entity: An entity is something that has certain attributes or properties which may be assigned values. The values can be numeric or non-numeric.
For example: An employee of an organization is an entity.
The attributes associated with this entity are employee number, name, age, sex, address, etc.
Every attribute has a certain value. The collection of all entities forms an entity set. All the employees in an organization form an entity set.
The term information is also used for data with given attributes. Information means meaningful data.
Field: A field is a single elementary unit of information representing an attribute of an entity.
For example:
Name, age, and sex are all fields.
Record: A record is a collection of field values of a given entity, i.e. one record for one employee.
File: A file is a collection of records of entities, i.e. the collection of records of all employees forms a file.
A record in a file may contain several fields. Certain fields have a unique value; such a field is a primary key. For example, the employee number is a primary key.
A file can have fixed-length records or variable-length records.
Fixed-length records: In a fixed-length record file, all records contain the same data items, with the same amount of space assigned to each data item.
Variable-length records: In a variable-length record file, records may have different lengths.
Apart from fields, records and files, data can also be organized into more complex types of structures, and can be organized in different ways. The logical or mathematical model of a particular organization of data is called a data structure. The model should reflect the actual relationship of the data in the real world, and it should be simple enough that the data can be processed whenever required. Some common data structures are arrays, linked lists, trees, queues and graphs.
An array is a list of a finite number of elements, for example an array of names or an array of marks.
A linked list is a list of elements in which each element carries a link to the next element of the list.
A tree represents a hierarchical relationship between various elements.
A queue is a first-in first-out linear list where deletion can take place only at one end (the front) and insertion only at the other (the rear).
A graph represents relationships between pairs of elements.
Data Structure Operations:
Following are operations can be performed on a data structure.
1. Traversing: Traversing means accessing each record(element) only once so that it can be processed.
2. Searching: Searching means finding the location of the record (element) with a given key value, or finding all records that satisfy a condition.
3. Inserting: Inserting means adding new record(element) to the structure.
4. Deleting: Deleting means removing the record(element) from the structure.
5. Sorting: Sorting means arranging the records(elements) in some logical order. For example: arranging names in alphabetical order.
6. Merging: Merging means combining the records in two different files into a single file.
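As a concrete sketch, the six operations above can be illustrated in Python (used here purely for illustration; the notes themselves are language-neutral, and the array of names is a hypothetical example):

```python
# Illustrative sketch of the six data-structure operations on a Python list.
records = ["Baker", "Adams", "Clark"]

# Traversing: visit each element exactly once.
visited = [name for name in records]

# Searching: find the location of the record with a given key value.
loc = records.index("Adams")

# Inserting: add a new record to the structure.
records.append("Davis")

# Deleting: remove a record from the structure.
records.remove("Baker")

# Sorting: arrange the records in some logical (here alphabetical) order.
records.sort()

# Merging: combine the records in two files into a single file.
other = ["Evans", "Ford"]
merged = sorted(records + other)
print(merged)
```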
An algorithm is a finite set of instructions that can be followed to solve a given problem. An algorithm must have the following characteristics:
1. Input: An algorithm must accept input if supplied externally.
2. Output: An algorithm must give at least one output after processing input data.
3. Definiteness: Each instruction in algorithm must be lucid and unambiguous.
4. Effectiveness: Each instruction in an algorithm must be practicable. Any person either expert or novice user must be able to perform the operation with only pencil and paper.
5. Finiteness: Algorithm must terminate after a finite number of steps.
An algorithm is a finite step-by-step list of well-defined instructions for solving a particular problem.
We will need to write algorithms for the different data structure operations.
The first part of an algorithm states its purpose and lists its variables and input data.
The second part is the list of steps.
The steps in an algorithm are executed one after another.
Control is transferred to step n by using a statement like "Go to step n".
The Exit and Stop statements complete the algorithm. Data may be assigned to a variable by a Read statement, and displayed by a Write or Print statement.
For example:
Algorithm 1 : Purpose of algorithm and input data and variables used.
Step 1 : ……………………………………..
Step 2 : ……………………………………..
Step n : ……………………………………..
Control Structures:
There are three types of flow of control (or logic):
1. Sequential flow (sequential logic)
2. Conditional flow (selection logic)
3. Repetitive flow (iteration logic)
1. Sequential Flow:
Algorithms consist of modules, each of which is a set of steps. In sequential flow the modules are executed one after the other:
Module A
Module B
Module C
2.Conditional Flow:
In conditional flow, one or another module is selected depending on a condition. There are three conditional structures:
1. Single alternative:
This has the General form-
If condition, then:
[Module A]
[End of if structure]
Explanation: This is a selection structure with a single alternative, like an if statement. If the condition is true, the module is executed.
2. Double alternative:
This has the general form-
If condition, then:
[Module A]
[Module B]
[End of if structure]
Here Module A is executed when the condition is true, and Module B when it is false.
3. Multiple alternative:
This has the general form-
If condition(1), then:
[Module A[1]]
Else if condition(2), then:
[Module A[2]]
Else if condition(M), then:
[Module A[M]]
[Module B]
[End of if structure]
In the multiple alternative, condition(1) is checked first; if it is true, the first module (Module A[1]) is executed, otherwise condition(2) is checked. If condition(2) is true, the second module (Module A[2]) is executed. This procedure continues up to the last condition (condition(M)). If all the conditions fail, the else part, Module B, is executed.
The logic of this structure allows only one module to be executed.
3. Repetitive Flow:
Here a certain module is executed repeatedly until a condition is satisfied.
The repeat-for loop has the form:
Repeat for K = R to S by T:
[End of loop]
Here K is the index variable; its initial value is R, its final value is S, and T is the increment. In a repeat-for loop the initial value is set first, then the condition is checked. If the condition is true, the module (the body of the loop) is executed and the value of the index variable is incremented or decremented. The condition is then checked again, and the body of the loop is executed for as long as the condition remains satisfied.
The repeat-while loop is another repetitive structure. It has the form:
Repeat while condition:
[End of loop]
Here looping continues until the condition becomes false. The condition is checked first; if it is true, the module (body of the loop) is executed, and the condition is checked again. This procedure continues until the condition becomes false, at which point the module is not executed and the loop terminates.
Write an algorithm to find the roots of the quadratic equation
ax^2 + bx + c = 0,
where a ≠ 0 and the roots are given by
x = (-b ± √(b^2 - 4ac)) / 2a.
This algorithm inputs the coefficients A, B, C of a quadratic equation and finds the real roots, if any.
Step 1: Read: A, B, C
Step 2: Set D := B^2 - 4AC
Step 3: If D > 0, then:
a) Set X1 := (-B + √D)/2A and
Set X2 := (-B - √D)/2A
b) Write: X1, X2
Else if D = 0, then:
a) Set X := -B/2A
b) Write: 'Unique solution', X
Else:
Write: 'No real roots'
[End of IF structure]
Step 4: EXIT
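The algorithm above translates directly into code. The following Python sketch mirrors the steps (the function name and the test coefficients are my own illustrative choices, not part of the notes):

```python
import math

def quadratic_roots(a, b, c):
    """Direct transcription of the algorithm above (a must be non-zero)."""
    d = b * b - 4 * a * c          # Step 2: the discriminant D
    if d > 0:                      # two distinct real roots
        x1 = (-b + math.sqrt(d)) / (2 * a)
        x2 = (-b - math.sqrt(d)) / (2 * a)
        return ("Real roots", x1, x2)
    elif d == 0:                   # one repeated root
        return ("Unique solution", -b / (2 * a))
    else:                          # D < 0: no real roots
        return ("No real roots",)

print(quadratic_roots(1, -3, 2))   # x^2 - 3x + 2 has roots 2 and 1
```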
A linear array is a list of a finite number n of homogeneous data elements (i.e., data
elements of the same type) such that:
(a) The elements of the array are referenced respectively by an index set consisting of n consecutive numbers.
(b) The elements of the array are stored respectively in successive memory locations.
We will analyze this for Linear Arrays.
The number n of elements is called the length or size of the array. If not explicitly stated,
we may assume the index set consists of the integers 1, 2, 3, ….., n.
The general equation to find the length or the number of data elements of the array is,
Length = UB – LB + 1 (1.4a)
where, UB is the largest index, called the upper bound, and LB is the smallest index,
called the lower bound of the array. Remember that length=UB when LB=1.
Also, remember that the elements of an array A may be denoted by the subscript notation,
A1, A2, A3……..An
Let us consider the following example (a),
The following figures (a) and (b) depict the array DATA.
Figure (a)
Figure (b)
Length = UB – LB + 1
Here UB=5 & LB=0,
Length = 5 –0 + 1 =6
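The length formula is a one-liner in code. A small Python check of the example above (the function name is illustrative):

```python
def array_length(lb, ub):
    # Length = UB - LB + 1
    return ub - lb + 1

print(array_length(0, 5))   # the example above: 6 elements
print(array_length(1, 5))   # when LB = 1, the length equals UB
```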
Let LA be a linear array in the memory of the computer. As we know, the memory of the computer is simply a sequence of addressed locations, as pictured in the figure given below.
Figure: Computer Memory
Let A be a collection of data elements stored in the memory of the computer.
Suppose we want to either print the contents of each element of A or to count the number of elements of A with a given property. This can be accomplished by traversing A, that is, by accessing and
processing (frequently called visiting) each element of A exactly once.
The following algorithm is used to traversing a linear array LA.
As we know already, here, LA is a linear array with lower bound LB and upper bound
UB. This algorithm traverses LA, applying an operation PROCESS to each element of LA.
1. [Initialize counter] Set K:= LB.
2. Repeat Steps 3 and 4 while K≤UB
3. [Visit element] Apply PROCESS to LA[K]
4. [Increase counter] Set K:= K + 1
[End of Step 2 loop]
5. Exit.
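A direct Python transcription of this traversal algorithm (0-based indices, so LB corresponds to index 0; the example array and the use of `append` as the PROCESS operation are my own illustrative choices):

```python
def traverse(la, process):
    """Traverse linear array LA, applying PROCESS to each element exactly once."""
    k = 0                      # Step 1: initialise the counter at the lower bound
    while k <= len(la) - 1:    # Step 2: repeat while K <= UB
        process(la[k])         # Step 3: visit the element
        k += 1                 # Step 4: increase the counter

out = []
traverse([10, 20, 30], out.append)
print(out)
```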
Let A be a collection of data elements in the memory of the computer. "Inserting" refers to the operation of adding another element to the collection A, and "deleting" refers to the operation of removing one of the elements from A. Let us discuss inserting and deleting an element when A is a linear array.
Inserting an element at the “end” of a linear array can be easily done provided the
memory space allocated for the array is large enough to accommodate the additional
element. On the other hand, suppose we need to insert an element in the middle of the array. Then, on the average, half of the elements must be moved downward to new locations to accommodate the new
element and keep the order of the other elements.
Similarly, deleting an element at the "end" of an array presents no difficulties, but deleting an element somewhere in the middle of the array would require that each subsequent element be moved one location upward in order to "fill up" the array.
Consider another example.
Suppose NAME is an 8-element linear array, and suppose five names are in the array, as in Figure (a). Observe that the names are listed alphabetically, and suppose we want to keep the array alphabetical at all times. If Ford is added to the array, then Johnson, Smith and Wagner must each be moved downward one location, as in Figure (b). Next, if we add Taylor to this array, then Wagner must be moved again, as in Figure (c). Last, when we remove Davis from the array, the five names Ford, Johnson, Smith, Taylor and Wagner must each be moved upward one location, as in Figure (d). Clearly, such movement of data would be very expensive if thousands of names were in the array.
The following algorithm inserts a data element ITEM into the Kth position in a linear array LA with N elements, i.e. INSERT(LA, N, K, ITEM).
Here LA is a linear array with N elements and K is a positive integer such that K ≤ N. This algorithm inserts an element ITEM into the Kth position in LA.
1. [Initialize counter] Set J := N.
2. Repeat Steps 3 and 4 while J ≥ K.
3. [Move Jth element downward] Set LA[J+1] := LA[J].
4. [Decrease counter] Set J := J - 1.
[End of Step 2 loop]
5. [Insert element] Set LA[K] := ITEM.
6. [Reset N] Set N := N + 1.
7. Exit.
The following algorithm deletes the Kth element from a linear array LA and assigns it to a variable ITEM, i.e. DELETE(LA, N, K, ITEM).
Here LA is a linear array with N elements and K is a positive integer such that K ≤ N. This algorithm deletes the Kth element from LA.
1. Set ITEM := LA[K].
2. Repeat for J = K to N - 1:
[Move (J+1)st element upward] Set LA[J] := LA[J+1].
[End of loop]
3. [Reset the number N of elements] Set N := N - 1.
4. Exit.
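The insertion and deletion operations can be sketched in Python as follows (K is 1-based as in the algorithms, with 0-based list indices internally; the example names are hypothetical):

```python
def insert(la, k, item):
    """Insert ITEM into the Kth position (1-based) of linear array LA,
    shifting later elements downward, as in INSERT(LA, N, K, ITEM)."""
    j = len(la) - 1               # 0-based index of the last element
    la.append(None)               # make room for one more element (N := N + 1)
    while j >= k - 1:             # move each later element one position down
        la[j + 1] = la[j]
        j -= 1
    la[k - 1] = item              # place ITEM in the Kth position

def delete(la, k):
    """Delete and return the Kth element (1-based), as in DELETE(LA, N, K, ITEM)."""
    item = la[k - 1]
    for j in range(k - 1, len(la) - 1):  # move later elements one position up
        la[j] = la[j + 1]
    la.pop()                      # reduce N by one
    return item

a = ["Adams", "Ford", "Smith"]
insert(a, 2, "Clark")             # a becomes Adams, Clark, Ford, Smith
item = delete(a, 3)               # removes Ford
print(a, item)
```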
Sorting means rearranging the elements of an array in increasing order, i.e.
A[1] < A[2] < A[3] < … < A[N]
where A is the array name.
For example,suppose A originally is the list
After sorting A is the list
In sorting we may also rearrange the data in decreasing order. There are many sorting techniques; here we discuss a simple one, bubble sort.
Suppose the list of numbers A[1], A[2], …, A[N] is in memory.
Step 1: Compare A[1] and A[2] and arrange them in the desired order, so that A[1] < A[2]. Then compare A[2] and A[3] and arrange them so that A[2] < A[3].
Continue this process until we compare A[N-1] and A[N].
Step 1 involves N-1 comparisons. During this step, the largest element rises like a bubble to the Nth position. When Step 1 is completed, A[N] will contain the largest element.
Step 2: Repeat Step 1 with one less comparison. Step 2 involves N-2 comparisons, and when it is completed the second largest element occupies A[N-1].
Step 3: Repeat Step 1 with two fewer comparisons. Step 3 involves N-3 comparisons.
Step N-1: Compare A[1] and A[2] and arrange them so that A[1] < A[2].
After N-1 steps, the list will be sorted in increasing order. Each step is called a "pass", so bubble sort has N-1 passes.
Algorithm: (BUBBLE SORT) BUBBLE (DATA, N)
Here DATA is an array with N elements. This algorithm sorts the elements in DATA.
1. Repeat step 2 and 3 for K=1 to N-1
2. Set PTR :=1 [Initializes pass pointer PTR]
3. Repeat while PTR <= N-K [Executes pass]
a. If DATA [PTR]> DATA [PTR+1], then:
Interchange DATA [PTR] and DATA [PTR+1].
[End of if structure].
b. Set PTR: = PTR+1.
[End of inner loop].
4. EXIT.
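A Python transcription of BUBBLE(DATA, N), using 0-based indices internally (the sample data is a hypothetical example):

```python
def bubble_sort(data):
    """Transcription of BUBBLE(DATA, N): N-1 passes, where pass K makes
    N-K comparisons of adjacent elements."""
    n = len(data)
    for k in range(1, n):              # Step 1: for K = 1 to N-1
        ptr = 0                        # Step 2: initialise pass pointer (0-based here)
        while ptr < n - k:             # Step 3: execute the pass
            if data[ptr] > data[ptr + 1]:
                # interchange DATA[PTR] and DATA[PTR+1]
                data[ptr], data[ptr + 1] = data[ptr + 1], data[ptr]
            ptr += 1
    return data

print(bubble_sort([32, 51, 27, 85, 66, 23, 13, 57]))
```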
Many times we need to find a record with the help of a key. A key is a unique value associated with each record which distinguishes records from each other. Searching is the operation that finds a given key value in a list of elements. A searching algorithm accepts two arguments: the key, and the list in which the key is to be found. There are two standard algorithms for searching: linear search and binary search.
Linear Search: This is the simplest method for searching for an element in a list. We start from the beginning of the list and search for the element by examining each consecutive element, until either the element is found or the list is exhausted.
Algorithm:
1. Read N elements into an array A.
2. Read the element to be searched into KEY.
3. Repeat step 4 for I=0 to N-1 by 1.
4. IF A[I] = KEY THEN PRINT “Element found”.
5. IF element is not in the list PRINT “Element not found”.
Algorithm: (Linear Search) LINEAR (DATA, N, ITEM, LOC)
Here DATA is linear array with N elements, and ITEM is a given item of information. This algorithm finds the location LOC of ITEM in DATA, or sets LOC: = 0 if search is unsuccessful.
1. [Insert ITEM at the end of DATA] Set DATA [N+1]:= ITEM
2. [Initialize counter] Set LOC: =1.
3. [Search for ITEM]
Repeat while DATA[LOC] != ITEM
Set LOC: = LOC+1.
[End of loop].
4. [Successful?] If LOC= N+1, then Set LOC: =0.
5. EXIT
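The sentinel version of linear search above can be sketched in Python as follows (returning 0 for an unsuccessful search, as in the algorithm; the sample data is illustrative):

```python
def linear_search(data, item):
    """Sentinel version of LINEAR(DATA, N, ITEM, LOC): returns the 1-based
    location of ITEM, or 0 if the search is unsuccessful."""
    data = data + [item]          # Step 1: place ITEM at the end as a sentinel
    loc = 1                       # Step 2: initialise the counter
    while data[loc - 1] != item:  # Step 3: the sentinel guarantees termination
        loc += 1
    if loc == len(data):          # Step 4: only the sentinel matched
        return 0
    return loc

print(linear_search([11, 22, 33, 44], 33))
print(linear_search([11, 22, 33, 44], 99))
```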
Binary Search:
Binary Search: Suppose DATA is an array which is stored in increasing numerical order or alphabetically. Then there is an extremely efficient searching algorithm called binary search, which can be
used to find the location LOC of a given ITEM of information in DATA.
The Binary Search algorithm applied to our array DATA works as follows. During each stage of our algorithm, our search of ITEM is reduced to a segment of element of DATA:
DATA [BEG], DATA [BEG+1], DATA [BEG+2], …… DATA [END]
Note that the variables BEG and END denote, respectively, the beginning and end location of the segment under consideration. The algorithm compares ITEM with the middle element DATA [MID] of the
segment, where MID is obtained by
MID = INT ((BEG + END)/2)
If DATA [MID] =ITEM, then the search is successful and we set LOC:= MID. Otherwise a new segment of DATA is obtained as follows:
a) If ITEM < DATA[MID], then ITEM can appear only in the left half of the segment:
DATA [BEG], DATA [BEG+1], …. DATA [MID-1]
So we reset END: = MID-1 and begin search again.
b) If ITEM > DATA[MID], then ITEM can appear only in the right half of the segment:
DATA [MID+1], DATA [MID+2], … DATA[END]
So we reset BEG: = MID +1 and begin search again.
Algorithm: (Binary Search) BINARY (DATA, LB, UB, ITEM, LOC)
Here DATA is a sorted array with lower bound LB and upper bound UB, and ITEM is a given item of information. The variables BEG, END and MID denote, respectively, the beginning, end and middle locations of the segment of elements of DATA under consideration. This algorithm finds the location LOC of ITEM in DATA, or sets LOC := NULL.
1. [Initialize segment variable]
Set BEG: = LB, END: = UB and MID: = INT ((BEG+END)/2).
2. Repeat Step 3 and 4 while BEG<= END and DATA [MID]!= ITEM.
3. If ITEM < DATA[MID], then:
Set END: = MID-1
Set BEG: = MID+1.
[End of if structure]
4. Set MID: =INT((BEG+END)/2)
[End of Step 2 loop]
5. If DATA[MID]= ITEM then:
Set LOC: = MID
Set LOC: = NULL.
[End of if structure]
6. EXIT.
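A Python sketch of the binary search algorithm (0-based indices here, with None playing the role of NULL; the sample data is illustrative):

```python
def binary_search(data, item):
    """BINARY(DATA, LB, UB, ITEM, LOC) on a sorted list; returns the 0-based
    location of ITEM, or None (the algorithm's NULL) if it is absent."""
    beg, end = 0, len(data) - 1
    mid = (beg + end) // 2                    # MID = INT((BEG+END)/2)
    while beg <= end and data[mid] != item:
        if item < data[mid]:
            end = mid - 1                     # search the left half
        else:
            beg = mid + 1                     # search the right half
        mid = (beg + end) // 2
    return mid if beg <= end and data[mid] == item else None

print(binary_search([11, 22, 33, 44, 55], 44))
print(binary_search([11, 22, 33, 44, 55], 10))
```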
Pointer Arrays:
A variable is called a pointer if it points to an element in a list; the pointer variable contains the address of an element in the list.
An array is called a pointer array if each element of the array is a pointer. Pointer arrays give an efficient way of storing lists of groups in memory: form two arrays, one (member) consisting of the list of all members, one group after the other, and another pointer array (group) containing the starting locations of the different groups.
It is shown in the figure.
A record is a collection of fields. A record may contain non-homogeneous data, i.e. the data items in a record may have different data types. In records, a natural ordering of the elements is not possible.
The elements in a record can be described by level numbers. For example, a hospital keeps a record of each newborn baby. It contains the following data items: Name, Sex, Birthday, Father, Mother.
Birthday is a group item consisting of the subitems Month, Day and Year.
Father and Mother are also group items, each with subitems Name and Age.
The structure of this record is shown below.
1.New born
2. Name
2. Sex
2. Birthday
3. Month
3. Day
3. Year
2. Father
3. Name
3. Age
2. Mother
3. Name
3. Age
The number to the left of each variable is called a level number.
Each group item is followed by its subitems; the level of a subitem is 1 more than the level of its group item.
To indicate number of records in a file we may write
1 Newborn(20)
It indicates a file of 20 records.
For example:
In the above record, if we want to refer to the age of the father, it can be done by writing Newborn.Father.Age.
If there are 20 records and we want to refer to the sex of the sixth newborn, it can be done by writing Newborn.Sex[6].
Linked List:
A linked list, or one-way list, is a linear collection of data elements, called nodes, where
the linear order is given by means of pointers. That is, each node is divided into two parts: the first part contains the information of the element, and the second part, called the link field or
next pointer field, contains the address of the next node in the list.
Figure is a schematic diagram of a linked list with 6 nodes, Each node is pictured with two parts. The left part represents the information part of the node, which may contain an entire record of
data items (e.g., NAME, ADDRESS,...). The right part represents the Next pointer field of the node, and there is an arrow drawn from it to the next node in the list. This follows the usual practice
of drawing an arrow from a field to a node when the address of the node appears in the given field. The pointer of the last node contains a special value, called the null pointer, which is any invalid address.
Let LIST be a linked list. Then LIST will be maintained in memory as follows. First of all, LIST requires two linear arrays-we will call them here INFO and LINK - such that INFO[K] and LINK[K]
contain the information part and the next pointer field of a node of LIST respectively. START contains the location of the beginning of the list, and a next pointer sentinel-denoted by NULL-which
indicates the end of the list.
The following examples of linked lists indicate that more than one list may be maintained in the same linear arrays INFO and LINK. However, each list must have its own pointer variable giving the
location of its first node.
START = 9, so INFO[9] = N is the first character.
LINK[9] = 3, so INFO[3] = O is the second character.
LINK[3] = 6, so 1NFO[6] = (blank) is the third character.
LINK[6] = 11, so INFO[11] = E is the fourth character.
LINK[11] = 7, so INFO[7] = X is the fifth character.
LINK[7] = 10, so INFO[10] = I is the sixth character.
LINK[10] = 4, so INFO[4] = T is the seventh character.
LINK[4] = 0, so the NULL value, so the list has ended.
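The worked example above can be reproduced directly. In the sketch below, INFO and LINK are Python lists, index 0 plays the role of the NULL pointer, and the unused cells (1, 2, 5 and 8) are filled with a placeholder '?' since their contents are not given in the text:

```python
# Parallel-array representation of the linked list spelling "NO EXIT".
# Index 0 is reserved so that 0 can serve as the NULL pointer.
INFO = [None, '?', '?', 'O', 'T', '?', ' ', 'X', '?', 'N', 'I', 'E']
LINK = [None,  0,   0,   6,  0,   0,  11,  10,  0,   3,  4,   7]

def traverse_list(start):
    chars = []
    ptr = start
    while ptr != 0:               # the NULL pointer (0 here) ends the list
        chars.append(INFO[ptr])
        ptr = LINK[ptr]
    return ''.join(chars)

print(traverse_list(9))           # START = 9, as in the example
```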
A tree is a connected acyclic graph in which one vertex is singled out as the root, and the other vertices, connected to it by edges, inherit a parent-child relationship. Each node may in turn be connected to child nodes of its own, forming a set of subtrees.
Binary Tree
In computer science the most important type of tree structure is the binary tree. In this type of tree any node has at most two branches, i.e. the degree of each node is at most two. We distinguish the two branches of the tree as the left subtree and the right subtree.
A binary tree T is defined as a finite set of elements, called nodes, such that-
a) T is empty (called the null tree or empty tree) or
b) T contains a distinguished node R, called the root of T, and the remaining nodes of T form an ordered pair of disjoint binary trees T1 and T2.
If T contains a root R, then the two trees T1 and T2 are called the left and right subtrees of R. If T1 is nonempty then its root is called the left successor of R, and if T2 is nonempty then its root is called the right successor of R.
The tree T in the figure contains 11 nodes (A to L). The root of T is A, at the top of the figure. B is the left successor and C is the right successor of node A. The left subtree of node A consists of nodes B, D, E and F. The right subtree of node A consists of nodes C, G, H, J, K and L.
Any node in a binary tree has 0, 1 or 2 successors. Nodes with no successors are called terminal nodes.
Two binary trees are said to be similar if they have the same structure.
The algebraic expression E=(a-b)/((c*d)+e) can be represented by a binary tree as shown in the figure.
Fig. Expression E=(a-b)/((c*d)+e)
Suppose N is a node in T with left successor S1 and right successor S2. Then N is called the parent of S1 and S2; S1 is the left child of N and S2 is the right child of N. The line drawn from a node N to a successor is called an edge, and a sequence of consecutive edges is called a path.
A terminal node is a leaf, and a path ending in a leaf is called a branch. The depth (height) of a tree T is the maximum number of nodes in a branch of T.
The tree T is said to be complete if all its levels, except possibly the last, have the maximum possible number of nodes.
A binary tree T is said to be a 2-tree or an extended binary tree if each node N has either 0 or 2 children. In such a case, the nodes with 2 children are called internal nodes and the nodes with 0 children are called external nodes, as shown in the figure.
Fig.Extended 2-tree
The depth (or height) of a tree T is the maximum number of nodes in a branch of T. This is one more than the largest level number of T.
For example, the following tree has depth 5.
The maximum number of nodes in a symmetric (complete) binary tree of depth 5 is 2^5 - 1 = 31.
Representation of Binary tree in memory
Let T be a binary tree. T can be represented in memory either by a linked representation or by an array, called the sequential representation. The requirement for any representation is that one should have direct access to the root R of T and, given any node N of T, direct access to the children of N.
The linked representation uses three parallel arrays, INFO, LEFT and RIGHT, and a pointer variable ROOT. Each node N of T will correspond to a location K such that:
1) INFO[K] contains the data at node N.
2) LEFT[K] contains the location of left child of node N.
3) RIGHT[K] contains the location of right child of node N.
ROOT will contain location of root R of T.
A sample representation is shown in fig.
Another method is the sequential representation. Here a linear array TREE is used. The root R is stored in TREE[1]; if a node N occupies TREE[K], then its left child is stored in TREE[2*K] and its right child in TREE[2*K+1]. An example is shown below.
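A small Python sketch of the sequential representation (the tree itself is a hypothetical example; index 0 is left unused so the indexing matches the text, and None marks absent nodes):

```python
# Sequential representation: root in TREE[1]; a node at TREE[K] has its
# left child at TREE[2K] and its right child at TREE[2K+1].
TREE = [None, 'A', 'B', 'C', 'D', 'E', None, 'F']

def left(k):
    return TREE[2 * k] if 2 * k < len(TREE) else None

def right(k):
    return TREE[2 * k + 1] if 2 * k + 1 < len(TREE) else None

print(left(1), right(1))   # children of the root A: B and C
print(left(2), right(2))   # children of B: D and E
print(left(3), right(3))   # C has no left child; its right child is F
```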
Stack And Queue:
Stacks and Queues are two data structures that allow insertions and deletions operations only at the beginning or the end of the list, not in the middle. A stack is a linear structure in which items
may be added or removed only at one end. Figure pictures three everyday examples of such a structure: a stack of dishes, a stack of pennies and a stack of folded towels.
Stacks are also called last-in first-out (LIFO) lists. Other names used for stacks are "piles" and "push-down lists". Stacks have many important applications in computer science. The notion of recursion is fundamental in computer science, and one way of simulating recursion is by means of a stack structure. Let us learn the operations which are performed on stacks.
Figure Of STACK
A queue is a linear structure in which element may be inserted at one end called the rear, and the deleted at the other end called the front. Figure pictures a queue of people waiting at a bus stop.
Queues are also called first-in first-out (FIFO) lists. An important example of a queue in computer science occurs in a timesharing system, in which programs with the same priority form a queue while waiting to be executed. Operations similar to those of a stack are defined for a queue.
Figure of Queue
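Both structures can be sketched in a few lines of Python (collections.deque gives an efficient queue; the element names are illustrative):

```python
from collections import deque

# Stack: last-in first-out; insert (push) and remove (pop) at the same end.
stack = []
stack.append('plate1')
stack.append('plate2')
stack.append('plate3')
top = stack.pop()            # the last plate added comes off first

# Queue: first-in first-out; insert at the rear, delete at the front.
queue = deque()
queue.append('p1')
queue.append('p2')
queue.append('p3')
front = queue.popleft()      # the first person to arrive is served first

print(top, front)
```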
**********ALL THE BEST********** | {"url":"http://rohinishindeblog.blogspot.com/2012/10/introductionto-data-structure.html","timestamp":"2014-04-17T16:47:06Z","content_type":null,"content_length":"344008","record_id":"<urn:uuid:9bddd957-22b5-49c9-9e96-be980934e86b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00250-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about timings on Darren Wilkinson's research blog
Regular readers of this blog will know that in April 2010 I published a short post showing how a trivial bivariate Gibbs sampler could be implemented in the four languages that I use most often these
days (R, python, C, Java), and I discussed relative timings, and how one might start to think about trading off development time against execution time for more complex MCMC algorithms. I actually
wrote the post very quickly one night while I was stuck in a hotel room in Seattle – I didn’t give much thought to it, and the main purpose was to provide simple illustrative examples of simple Monte
Carlo codes using non-uniform random number generators in the different languages, as a starting point for someone thinking of switching languages (say, from R to Java or C, for efficiency reasons).
It wasn’t meant to be very deep or provocative, or to start any language wars. Suffice to say that this post has had many more hits than all of my other posts combined, is still my most popular post,
and still attracts comments and spawns other posts to this day. Several people have requested that I re-do the post more carefully, to include actual timings, and to include a few additional
optimisations. Hence this post. For reference, the original post is here. A post about it from the python community is here, and a recent post about using Rcpp and inlined C++ code to speed up the R
version is here.
The sampler
So, the basic idea was to construct a Gibbs sampler for the bivariate distribution
$f(x,y) = kx^2\exp\{-xy^2-y^2+2y-4x\},\qquad x>0,y\in\Bbb{R}$
with unknown normalising constant $k>0$ ensuring that the density integrates to one. Unfortunately, in the original post I dropped a factor of 2 constructing one of the full conditionals, which meant
that none of the samplers actually had exactly the right target distribution (thanks to Sanjog Misra for bringing this to my attention). So actually, the correct full conditionals are
$\displaystyle x|y \sim Ga(3,y^2+4)$
$\displaystyle y|x \sim N\left(\frac{1}{1+x},\frac{1}{2(1+x)}\right)$
Note the factor of two in the variance of the full conditional for $y$. Given the full conditionals, it is simple to alternately sample from them to construct a Gibbs sampler for the target
distribution. We will run a Gibbs sampler with a thin of 1000 and obtain a final sample of 50000.
Let’s start with R again. The slightly modified version of the code from the old post is given below
gibbs=function(N=50000,thin=1000)
{
mat=matrix(0,ncol=2,nrow=N)
x=0
y=0
for (i in 1:N) {
for (j in 1:thin) {
x=rgamma(1,3,y*y+4)
y=rnorm(1,1/(x+1),1/sqrt(2*(x+1)))
}
mat[i,]=c(x,y)
}
mat
}
mat=gibbs()
colnames(mat)=c("x","y")
write.table(mat,"data.tab",row.names=FALSE)
I’ve just corrected the full conditional, and I’ve increased the sample size and thinning to 50k and 1k, respectively, to allow for more accurate timings (of the faster languages). This code can be
run from the (Linux) command line with something like:
time Rscript gibbs.R
I discuss timings in detail towards the end of the post, but this code is slow, taking over 7 minutes on my (very fast) laptop. Now, the above code is typical of the way code is often structured in R
– doing as much as possible in memory, and writing to disk only if necessary. However, this can be a bad idea with large MCMC codes, and is less natural in other languages, anyway, so below is an
alternative version of the code, written in more of a scripting language style.
N=50000
thin=1000
x=0
y=0
cat("Iter x y\n")
for (i in 1:N) {
for (j in 1:thin) {
x=rgamma(1,3,y*y+4)
y=rnorm(1,1/(x+1),1/sqrt(2*(x+1)))
}
cat(i,x,y,"\n")
}
This can be run with a command like
time Rscript gibbs-script.R > data.tab
This code actually turns out to be a slightly slower than the in-memory version for this simple example, but for larger problems I would not expect that to be the case. I always analyse MCMC output
using R, whatever language I use for running the algorithm, so for completeness, here is a bit of code to load up the data file, do some plots and compute summary statistics.
mat=read.table("data.tab",header=TRUE)
op=par(mfrow=c(2,1))
x=seq(0,4,0.1)
y=seq(-2,4,0.1)
z=outer(x,y,function(x,y) x*x*exp(-x*y*y-y*y+2*y-4*x))
contour(x,y,z,main="Contours of actual (unnormalised) distribution")
library(KernSmooth)
fit=bkde2D(cbind(mat$x,mat$y),c(0.1,0.1))  # empirical 2d density estimate
contour(fit$x1,fit$x2,fit$fhat,main="Contours of empirical distribution")
par(op)
print(summary(mat))
Another language I use a lot is Python. I don’t want to start any language wars, but I personally find python to be a better designed language than R, and generally much nicer for the development of
large programs. A python script for this problem is given below
import random,math

def gibbs(N=50000,thin=1000):
    x=0
    y=0
    print "Iter x y"
    for i in range(N):
        for j in range(thin):
            x=random.gammavariate(3,1.0/(y*y+4))
            y=random.gauss(1.0/(x+1),1.0/math.sqrt(2*x+2))
        print i,x,y

gibbs()
It can be run with a command like
time python gibbs.py > data.tab
This code turns out to be noticeably faster than the R versions, taking around 4 minutes on my laptop (again, detailed timing information below). However, there is a project for python known as the
PyPy project, which is concerned with compiling regular python code to very fast byte-code, giving significant speed-ups on certain problems. For this post, I downloaded and install version 1.5 of
the 64-bit linux version of PyPy. Once installed, I can run the above code with the command
time pypy gibbs.py > data.tab
To my astonishment, this “just worked”, and gave very impressive speed-up over regular python, running in around 30 seconds. This actually makes python a much more realistic prospect for the
development of MCMC codes than I imagined. However, I need to understand the limitations of PyPy better – for example, why doesn’t everyone always use PyPy for everything?! It certainly seems to make
python look like a very good option for prototyping MCMC codes.
Traditionally, I have mainly written MCMC codes in C, using the GSL. C is a fast, efficient, statically typed language, which compiles to native code. In many ways it represents the “gold standard”
for speed. So, here is the C code for this problem.
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
int main(void)
{
  int N=50000;
  int thin=1000;
  int i,j;
  gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937);
  double x=0;
  double y=0;
  printf("Iter x y\n");
  for (i=0;i<N;i++) {
    for (j=0;j<thin;j++) {
      x=gsl_ran_gamma(r,3.0,1.0/(y*y+4));
      y=1.0/(x+1)+gsl_ran_gaussian(r,1.0/sqrt(2*x+2));
    }
    printf("%d %f %f\n",i,x,y);
  }
  return(0);
}
It can be compiled and run with command like
gcc -O4 -lgsl -lgslcblas gibbs.c -o gibbs
time ./gibbs > datac.tab
This runs faster than anything else I consider in this post, taking around 8 seconds.
I’ve recently been experimenting with Java for MCMC codes, in conjunction with Parallel COLT. Java is a statically typed object-oriented (O-O) language, but is usually compiled to byte-code to run on
a virtual machine (known as the JVM). Java compilers and virtual machines are very fast these days, giving “close to C” performance, but with a nicer programming language, and advantages associated
with virtual machines. Portability is a huge advantage of Java. For example, I can easily get my Java code to run on almost any University Condor pool, on both Windows and Linux clusters – they all
have a recent JVM installed, and I can easily bundle any required libraries with my code. Suffice to say that getting GSL/C code to run on generic Condor pools is typically much less straightforward.
Here is the Java code:
import java.util.*;
import cern.jet.random.tdouble.*;
import cern.jet.random.tdouble.engine.*;
class Gibbs
{
    public static void main(String[] arg)
    {
        int N=50000;
        int thin=1000;
        DoubleRandomEngine rngEngine=new DoubleMersenneTwister(new Date());
        Normal rngN=new Normal(0.0,1.0,rngEngine);
        Gamma rngG=new Gamma(1.0,1.0,rngEngine);
        double x=0;
        double y=0;
        System.out.println("Iter x y");
        for (int i=0;i<N;i++) {
            for (int j=0;j<thin;j++) {
                // full conditionals: x|y ~ Ga(3, y*y+4), y|x ~ N(1/(x+1), 1/(2x+2))
                x=rngG.nextDouble(3.0,y*y+4);
                y=rngN.nextDouble(1.0/(x+1),1.0/Math.sqrt(2*x+2));
            }
            System.out.println(i+" "+x+" "+y);
        }
    }
}
It can be compiled and run with
javac Gibbs.java
time java Gibbs > data.tab
This takes around 11.6 seconds on my laptop. This is well within a factor of 2 of the C version, and around 3 times faster than even the PyPy python version. It is around 40 times faster than R.
Java looks like a good choice for implementing MCMC codes that would be messy to implement in C, or that need to run places where it would be fiddly to get native codes to run.
Another language I’ve been taking some interest in recently is Scala. Scala is a statically typed O-O/functional language which compiles to byte-code that runs on the JVM. Since it uses Java
technology, it can seamlessly integrate with Java libraries, and can run anywhere that Java code can run. It is a much nicer language to program in than Java, and feels more like a dynamic language
such as python. In fact, it is almost as nice to program in as python (and in some ways nicer), and will run in a lot more places than PyPy python code. Here is the scala code (which calls Parallel
COLT for random number generation):
object GibbsSc {
  import cern.jet.random.tdouble.engine.DoubleMersenneTwister
  import cern.jet.random.tdouble.Normal
  import cern.jet.random.tdouble.Gamma
  import Math.sqrt
  import java.util.Date
  def main(args: Array[String]) {
    val N=50000
    val thin=1000
    val rngEngine=new DoubleMersenneTwister(new Date)
    val rngN=new Normal(0.0,1.0,rngEngine)
    val rngG=new Gamma(1.0,1.0,rngEngine)
    var x=0.0
    var y=0.0
    println("Iter x y")
    for (i <- 0 until N) {
      for (j <- 0 until thin) {
        // full conditionals: x|y ~ Ga(3, y*y+4), y|x ~ N(1/(x+1), 1/(2x+2))
        x=rngG.nextDouble(3.0,y*y+4)
        y=rngN.nextDouble(1.0/(x+1),1.0/sqrt(2*x+2))
      }
      println(i+" "+x+" "+y)
    }
  }
}
It can be compiled and run with
scalac GibbsSc.scala
time scala GibbsSc > data.tab
This code takes around 11.8s on my laptop – almost as fast as the Java code! So, on the basis of this very simple and superficial example, it looks like scala may offer the best of all worlds – a
nice, elegant, terse programming language, functional and O-O programming styles, the safety of static typing, the ability to call on Java libraries, great speed and efficiency, and the portability
of Java! Very interesting.
James Durbin has kindly sent me a Groovy version of the code, which he has also discussed in his own blog post. Groovy is a dynamic O-O language for the JVM, which, like Scala, can integrate nicely
with Java applications. It isn’t a language I have examined closely, but it seems quite nice. The code is given below:
import cern.jet.random.tdouble.engine.*
import cern.jet.random.tdouble.*
N=50000
thin=1000
rngEngine=new DoubleMersenneTwister(new Date())
rngN=new Normal(0.0,1.0,rngEngine)
rngG=new Gamma(1.0,1.0,rngEngine)
x=0.0
y=0.0
println("Iter x y")
for(i in 1..N){
    for(j in 1..thin){
        // full conditionals: x|y ~ Ga(3, y*y+4), y|x ~ N(1/(x+1), 1/(2x+2))
        x=rngG.nextDouble(3.0,y*y+4)
        y=rngN.nextDouble(1.0/(x+1),1.0/Math.sqrt(2*x+2))
    }
    println("$i $x $y")
}
It can be run with a command like:
time groovy Gibbs.gv > data.tab
Again, rather amazingly, this code runs in around 35 seconds – very similar to the speed of PyPy. This makes Groovy also seem like a potential very attractive environment for prototyping MCMC codes,
especially if I’m thinking about ultimately porting to Java.
The laptop I’m running everything on is a Dell Precision M4500 with an Intel i7 Quad core (x940@2.13GHz) CPU, running the 64-bit version of Ubuntu 11.04. I’m running stuff from the Ubuntu (Unity)
desktop, and running several terminals and applications, but the machine is not loaded at the time each job runs. I’m running each job 3 times and taking the arithmetic mean real elapsed time. All
timings are in seconds.
R 2.12.1 (in memory) 435.0
R 2.12.1 (script) 450.2
Python 2.7.1+ 233.5
PyPy 1.5 32.2
Groovy 1.7.4 35.4
Java 1.6.0 11.6
Scala 2.7.7 11.8
C (gcc 4.5.2) 8.1
If we look at speed-up relative to the R code (in-memory version), we get:
R (in memory) 1.00
R (script) 0.97
Python 1.86
PyPy 13.51
Groovy 12.3
Java 37.50
Scala 36.86
C 53.70
Alternatively, we can look at slow-down relative to the C version, to get:
R (in memory) 53.7
R (script) 55.6
Python 28.8
PyPy 4.0
Groovy 4.4
Java 1.4
Scala 1.5
C 1.0
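The two derived tables are just ratios of the raw timings, so they are easy to recompute with the rounding made explicit; the numbers below are copied from the timing table above:

```python
# Mean elapsed times in seconds, copied from the table above
times = {
    "R (in memory)": 435.0,
    "R (script)": 450.2,
    "Python": 233.5,
    "PyPy": 32.2,
    "Groovy": 35.4,
    "Java": 11.6,
    "Scala": 11.8,
    "C": 8.1,
}

r_time = times["R (in memory)"]
c_time = times["C"]
for lang, t in sorted(times.items(), key=lambda kv: kv[1]):
    print("%-14s speed-up vs R: %6.2f   slow-down vs C: %5.1f"
          % (lang, r_time / t, t / c_time))
```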
The findings here are generally consistent with those of the old post, but consideration of PyPy, Groovy and Scala does throw up some new issues. I was pretty stunned by PyPy. First, I didn’t expect
that it would “just work” – I thought I would either have to spend time messing around with my configuration settings, or possibly even have to modify my code slightly. Nope. Running python code with
pypy appears to be more than 10 times faster than R, and only 4 times slower than C. I find it quite amazing that it is possible to get python code to run just 4 times slower than C, and if that is
indicative of more substantial examples, it really does open up the possibility of using python for “real” problems, although library coverage is currently a problem. It certainly solves my
“prototyping problem”. I often like to prototype algorithms in very high level dynamic languages like R and python before porting to a more efficient language. However, I have found that this doesn’t
always work well with complex MCMC codes, as they just run too slowly in the dynamic languages to develop, test and debug conveniently. But it looks now as though PyPy should be fast enough at least
for prototyping purposes, and may even be fast enough for production code in some circumstances. But then again, exactly the same goes for Groovy, which runs on the JVM, and can access any existing
Java library… I haven’t yet looked into Groovy in detail, but it appears that it could be a very nice language for prototyping algorithms that I intend to port to Java.
The results also confirm my previous findings that Java is now “fast enough” that one shouldn’t worry too much about the difference in speed between it and native code written in C (or C++). The Java
language is much nicer than C or C++, and the JVM platform is very attractive in many situations. However, the Scala results were also very surprising for me. Scala is a really elegant language
(certainly on a par with python), comes with all of the advantages of Java, and appears to be almost as fast as Java. I’m really struggling to come up with reasons not to use Scala for everything!
Speeding up R
MCMC codes are used by a range of different scientists for a range of different problems. However, they are very (most?) often used by Bayesian statisticians who use the algorithms to target a
Bayesian posterior distribution. For various (good) reasons, many statisticians are heavily invested in R, like to use R as much as possible, and do as much as possible from within the R environment.
These results show why R is not a good language in which to implement MCMC algorithms, so what is an R-dependent statistician supposed to do? One possibility would be to byte-code compile R code in
an analogous way to python and pypy. The very latest versions of R support such functionality, but the post by Dirk Eddelbuettel suggests that the current version of cmpfun will only give a 40%
speedup on this problem, which is still slower than regular python code. Short of a dramatic improvement in this technology, the only way forward seems to be to extend R using code from another
language. It is easy to extend R using C, C++ and Java. I have shown in previous posts how to do this using Java and using C, and the recent post by Dirk shows how to extend using C++. Although
interesting, this doesn’t really have much bearing on the current discussion. If you extend using Java you get Java-like speedups, and if you extend using C you get C-like speedups. However, in case
people are interested, I intend to gather up these examples into one post and include detailed timing information in a subsequent post. | {"url":"http://darrenjw.wordpress.com/tag/timings/","timestamp":"2014-04-19T09:23:30Z","content_type":null,"content_length":"44161","record_id":"<urn:uuid:5186c92e-b9b5-467e-9850-f784b7e2421a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00056-ip-10-147-4-33.ec2.internal.warc.gz"} |
Richmond Hill, NY Algebra 1 Tutor
Find a Richmond Hill, NY Algebra 1 Tutor
...My students include those from Hunter College High School, Stuyvesant, Bronx Science, Brooklyn Tech, etc., all referred by parents. I have helped many students get into their dream schools or honor classes. I have two master's degrees (physics and math) and a very deep understanding of physics and math concepts.
12 Subjects: including algebra 1, calculus, algebra 2, physics
...I have a 98% client satisfaction rate and 96% of my students have gained admission to their top choice schools. Most importantly, every student has succeeded qualitatively rather than just
quantitatively, gaining confidence, resourcefulness, and strength of character. My tutoring philosophy begins and ends with the student's sense of self, which I endeavor to illuminate and enrich.
36 Subjects: including algebra 1, English, reading, Spanish
...I've had the pleasure of teaching guitar as an add-on to private tutoring sessions and as a high school music teacher. Thank you for your time and I hope to hear from you soon.I have experience
as a private math tutor for students grades 2-11. I was an SAT math class instructor and tutor, and have emphasized that students acquire a solid understanding of basic algebra.
22 Subjects: including algebra 1, English, reading, Japanese
...I have always been an avid reader, and have received numerous awards throughout my academic career for reading comprehension, analytical reading, and more. I'm always happy to help students
with reading, and teach them the skills necessary in a testing environment. I have extensive knowledge of...
17 Subjects: including algebra 1, reading, French, writing
...My name is Chris and would love to help you get better at math! Whether it is working through problems or spending time learning specific skills, I can help! I have taught students from grades
6-12 in NYC, Micronesia and even out in Hawaii.
10 Subjects: including algebra 1, calculus, SAT math, Regents | {"url":"http://www.purplemath.com/Richmond_Hill_NY_algebra_1_tutors.php","timestamp":"2014-04-19T15:14:33Z","content_type":null,"content_length":"24481","record_id":"<urn:uuid:92c3d6c7-c7e5-41db-95a5-63c8eff4fc75>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00037-ip-10-147-4-33.ec2.internal.warc.gz"} |
Berwyn Heights, MD Math Tutor
Find a Berwyn Heights, MD Math Tutor
...I base this knowing that every student is unique in their learning styles and no one approach works for all students. I help students identify weak areas in their core knowledge areas that the
MCAT tests and help students come up with a study plan to fill in those gaps. I also emphasize skills ...
2 Subjects: including algebra 1, MCAT
...I would like to see students try solving problems on their own first and treat me with respect so that it can be reciprocated. I was born and raised in Seoul, Korea where my parents still
live. I came to the States to finish high school and college.
17 Subjects: including trigonometry, SAT math, linear algebra, precalculus
...I have a BS in mechanical engineering and have taken up through differential equations in college and up through calculus 3 AP in high school. I have a BS in mechanical engineering and use
pre-calculus daily in my work for determination of loads, decomposing forces and loads, and in free-body diagrams. I have a BS in mechanical engineering, and took trig in high school.
10 Subjects: including calculus, algebra 1, algebra 2, Microsoft Excel
Hello there! My name is Molly, and I tutor privately in the Bowie, Maryland, area. I currently help children with their math, science, writing, language arts and other general homework subjects
on a daily basis.
16 Subjects: including algebra 1, English, prealgebra, geometry
...I am willing to set up long-term tutoring schedules or meet for isolated sessions.One-on-one tutoring provides a unique opportunity to engage children in comparative religion studies. I have a
Master's Degree in Liberal Arts and one of my special focuses was Philosophy and Theology. I am well-v...
16 Subjects: including SAT math, reading, English, writing
Graph separators: A parameterized view
Results 1 - 10 of 26
, 2002
"... We present an algorithm that constructively produces a solution to the k-dominating set problem for planar graphs in time O(c . To obtain this result, we show that the treewidth of a planar
graph with domination number (G) is O( (G)), and that such a tree decomposition can be found in O( (G)n) time. ..."
Cited by 105 (23 self)
Add to MetaCart
We present an algorithm that constructively produces a solution to the k-dominating set problem for planar graphs in time O(c^√k n). To obtain this result, we show that the treewidth of a planar graph with domination number γ(G) is O(√γ(G)), and that such a tree decomposition can be found in O(√γ(G) n) time. The same technique can be used to show that the k-face cover problem (find a size k set of faces that cover all vertices of a given plane graph) can be solved in O(c_1^√k n) time, where c_1 is a constant and k is the size of the face cover set. Similar results can be obtained in the planar case for some variants of k-dominating set, e.g., k-independent dominating set and k-weighted dominating set.
- in Electronic Colloquium on Computational Complexity (ECCC), 2001
"... A parameterized problem is xed parameter tractable if it admits a solving algorithm whose running time on input instance (I; k) is f(k) jIj , where f is an arbitrary function depending only on
k. Typically, f is some exponential function, e.g., f(k) = c k for constant c. We describe general techniqu ..."
Cited by 61 (21 self)
Add to MetaCart
A parameterized problem is fixed parameter tractable if it admits a solving algorithm whose running time on input instance (I, k) is f(k)·|I|^O(1), where f is an arbitrary function depending only on k. Typically, f is some exponential function, e.g., f(k) = c^k for constant c. We describe general techniques to obtain growth of the form f(k) = c^√k for a large variety of planar graph problems. The key to this type of algorithm is what we call the "Layerwise Separation Property" of a planar graph problem. Problems having this property include planar vertex cover, planar independent set, and planar dominating set.
- Journal of the ACM , 2004
"... Dealing with the NP-complete Dominating Set problem on graphs, we demonstrate the power of data reduction by preprocessing from a theoretical as well as a practical side. In particular, we prove
that Dominating Set restricted to planar graphs has a so-called problem kernel of linear size, achiev ..."
Cited by 39 (9 self)
Add to MetaCart
Dealing with the NP-complete Dominating Set problem on graphs, we demonstrate the power of data reduction by preprocessing from a theoretical as well as a practical side. In particular, we prove that
Dominating Set restricted to planar graphs has a so-called problem kernel of linear size, achieved by two simple and easy to implement reduction rules. Moreover, having implemented our reduction
rules, first experiments indicate the impressive practical potential of these rules. Thus, this work seems to open up a new and prospective way how to cope with one of the most important problems in
graph theory and combinatorial optimization.
- In Proc. 22nd STACS, volume 3404 of LNCS , 2005
"... Abstract. Determining whether a parameterized problem is kernelizable and has a small kernel size has recently become one of the most interesting topics of research in the area of parameterized
complexity and algorithms. Theoretically, it has been proved that a parameterized problem is kernelizable ..."
Cited by 35 (4 self)
Add to MetaCart
Abstract. Determining whether a parameterized problem is kernelizable and has a small kernel size has recently become one of the most interesting topics of research in the area of parameterized
complexity and algorithms. Theoretically, it has been proved that a parameterized problem is kernelizable if and only if it is fixed-parameter tractable. Practically, applying a data reduction
algorithm to reduce an instance of a parameterized problem to an equivalent smaller instance (i.e., a kernel) has led to very efficient algorithms and now goes hand-in-hand with the design of
practical algorithms for solving NP-hard problems. Well-known examples of such parameterized problems include the vertex cover problem, which is kernelizable to a kernel of size bounded by 2k, and
the planar dominating set problem, which is kernelizable to a kernel of size bounded by 335k. In this paper we develop new techniques to derive upper and lower bounds on the kernel size for certain
parameterized problems. In terms of our lower bound results, we show, for example, that unless P = NP, planar vertex cover does not have a problem kernel of size smaller than 4k/3, and planar
independent set and planar dominating set do not have kernels of size smaller than 2k. In terms of our upper bound results, we further reduce the upper bound on the kernel size for the planar
dominating set problem to 67k, improving significantly the 335k previous upper bound given by Alber, Fellows, and Niedermeier [J. ACM, 51 (2004), pp. 363–384]. This latter result is obtained by
introducing a new set of reduction and coloring rules, which allows the derivation of nice combinatorial properties in the kernelized graph leading to a tighter bound on the size of the kernel. The
paper also shows how this improved upper bound yields a simple and competitive algorithm for the planar dominating set problem.
, 2006
"... We study a novel separator property called k-path separable. Roughly speaking, a k-path separable graph can be recursively separated into smaller components by sequentially removing k shortest
paths. Our main result is that every minor free weighted graph is k-path separable. We then show that k-pat ..."
Cited by 35 (11 self)
Add to MetaCart
We study a novel separator property called k-path separable. Roughly speaking, a k-path separable graph can be recursively separated into smaller components by sequentially removing k shortest paths.
Our main result is that every minor free weighted graph is k-path separable. We then show that k-path separable graphs can be used to solve several object location problems: (1) a small-worldization
with an average poly-logarithmic number of hops; (2) a (1 + ε)-approximate distance labeling scheme with O(log n) space labels; (3) a stretch-(1 + ε) compact routing scheme with tables of poly-logarithmic space; (4) a (1 + ε)-approximate distance oracle with O(n log n) space and O(log n) query time. Our results generalize to much wider classes of weighted graphs, namely to bounded-dimension isometric sparable graphs.
"... this paper was presented at the 11th Annual International Symposium on Algorithms And Computation (ISAAC'00), Springer-Verlag, LNCS 1969, pages 180--191, held in Taipei, Taiwan, December 2000.
This conference version, however, contains a faulty application of the main result to the case of minimum w ..."
Cited by 32 (15 self)
Add to MetaCart
this paper was presented at the 11th Annual International Symposium on Algorithms And Computation (ISAAC'00), Springer-Verlag, LNCS 1969, pages 180--191, held in Taipei, Taiwan, December 2000. This
conference version, however, contains a faulty application of the main result to the case of minimum weight vertex covers with a bound on the number of vertices
- in LATIN’02: Theoretical informatics (Cancun , 2001
"... We present an improved dynamic programming strategy for dominating set and related problems on graphs that are given together with a tree decomposition of width k. We obtain an O(4 n) algorithm
for dominating set, where n is the number of nodes of the tree decomposition. ..."
Cited by 32 (9 self)
Add to MetaCart
We present an improved dynamic programming strategy for dominating set and related problems on graphs that are given together with a tree decomposition of width k. We obtain an O(4^k n) algorithm for dominating set, where n is the number of nodes of the tree decomposition.
- Computer Journal , 2005
"... This paper surveys the theory of bidimensionality. This theory characterizes a broad range of graph problems (‘bidimensional’) that admit efficient approximate or fixed-parameter solutions in a
broad range of graphs. These graph classes include planar graphs, map graphs, bounded-genus graphs and gra ..."
Cited by 29 (1 self)
Add to MetaCart
This paper surveys the theory of bidimensionality. This theory characterizes a broad range of graph problems (‘bidimensional’) that admit efficient approximate or fixed-parameter solutions in a broad
range of graphs. These graph classes include planar graphs, map graphs, bounded-genus graphs and graphs excluding any fixed minor. In particular, bidimensionality theory builds on the Graph Minor
Theory of Robertson and Seymour by extending the mathematical results and building new algorithmic tools. Here, we summarize the known combinatorial and algorithmic results of bidimensionality theory
with the high-level ideas involved in their proof; we describe the previous work on which the theory is based and/or extends; and we mention several remaining open problems.
bbangert / WebHelpers - 16f94d8
Few more doc building tweaks
Fusion and String Field Star Product
Posted by Urs Schreiber
From the point of view of functorial transport #, I describe the structure of the star product of string fields, and, as a special case, the fusion product of loop group representations.
Let $\mathrm{par} := \{ a \to b\}$ be the category that models (the parameter space of) a string. For the open string we think of this as the category with two objects and one nontrivial morphism
going between them.
For the closed string we set $a = b$ and think of this as the category $\Sigma(\mathbb{Z})$, with a single object, freely generated by a single nontrivial automorphism of that object. In this case, it is useful to think of this category as the fundamental groupoid of the circle: $\mathrm{par} = \Pi(S^1) \,.$
In a similar manner, we may model the composite of two strings by a category of the form $\mathrm{par}_2 := \{a \to b \to c\} \,.$ This describes a string stretching from $a$ to $b$, concatenated
with a string stretching from $b$ to $c$.
Again, in the case where all of these strings are closed, we set $a= b$ and $b = c$. Then we should think of $\mathrm{par}_2$ as the fundamental groupoid of the trinion, the sphere with three disks
cut out: $\mathrm{par}_2 = \Pi(\mathrm{trinion}) \,.$
I’ll concentrate on closed strings in the following.
There are three basic non-trivial maps of the closed string to the trinion, i.e. functors $F : \mathrm{par} \to \mathrm{par}_2 \,,$ namely those that send the closed string to one of the three
boundary components of the trinion. I’ll call these three functors $F_1$, $F_2$ and $F_3$.
Now, assume all these strings propagate on some target space $\mathrm{tar} \,.$ Then we say that the space of maps from parameter space to target space $\mathrm{conf} = [\mathrm{par},\mathrm{tar}]$
is the configuration space of the closed string.
I shall motivate and illustrate all constructions in this post here by the example where target space is a group. Or rather, where we set $\mathrm{tar} = \Sigma(G) \,,$ the category with a single
object and one morphism for each element of the group $G$.
In this case configuration space is $\mathrm{conf} = [\Sigma(\mathbb{Z}),\Sigma(G)] = \Lambda G \,,$ which is the loop groupoid of $G$. As Simon Willerton explains, this is, in many important
respects, the categorical incarnation of the loop group of $G$.
An object in configuration space is a string, stretching along an element $g \in G$: $\bullet \stackrel{g}{\to} \bullet \,.$ A morphism in configuration space $\array{ g \\ \;\;\downarrow g' \\ \mathrm{Ad}_{g'} g }$ is a square
(1)$\array{ \bullet & \stackrel{g}{\to} & \bullet \\ g' \downarrow\;\; && \;\; \downarrow g' \\ \bullet & \stackrel{\mathrm{Ad}_{g'}g}{\to} & \bullet } \,.$
We can play the same game with maps from the parameter space $\mathrm{par}_2$ of two concatenated strings: $\mathrm{conf}_2 = [\mathrm{par}_2,\mathrm{tar}] \,.$
A configuration now is a pair of group elements $\bullet \stackrel{g_1}{\to} \bullet \stackrel{g_2}{\to} \bullet \,,$ and a morphism in configuration space $\array{ (g_1,g_2) \\ \;\;\downarrow g' \\
(\mathrm{Ad}_{g'} g_1, \mathrm{Ad}_{g'} g_2) }$ is a diagram $\array{ \bullet &\stackrel{g_1}{\to}& \bullet &\stackrel{g_2}{\to}& \bullet \\ g' \downarrow\;\; && \;\; \downarrow g' && \;\; \downarrow
g' \\ \bullet &\stackrel{\mathrm{Ad}_{g'}g_1}{\to}& \bullet &\stackrel{\mathrm{Ad}_{g'}g_2}{\to}& \bullet } \,.$
Using the three maps from the closed string to the trinion, we may pull back any configuration of two composable strings to one of the single string: $F_i^* : \mathrm{conf}_2 \to \mathrm{conf}$
simply by precomposing with that map: $F_i^* \gamma : \mathrm{par} \stackrel{F_i}{\to} \mathrm{par}_2 \stackrel{\gamma}{\to} \mathrm{tar} \,.$
This simply means that we may read out the configuration of any of the three circles of the trinion.
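In the group example this is easy to make completely concrete: an object of $\mathrm{conf}_2$ is just a pair $(g_1, g_2)$, the functors $F_1^*$ and $F_2^*$ read off the two incoming boundary circles, and the third boundary circle is homotopic to the composite, so $F_3^*$ reads off the product $g_1 g_2$. A toy sketch in Python for $G = \mathbb{Z}/12$ (the choice of group and of the sample configuration is arbitrary):

```python
n = 12  # model the group G = Z/12, written additively

# A configuration of the trinion in Sigma(G) is a pair (g1, g2);
# the three restriction maps read off the three boundary circles.
def F1(conf): return conf[0]                  # first incoming string
def F2(conf): return conf[1]                  # second incoming string
def F3(conf): return (conf[0] + conf[1]) % n  # outgoing string: g1 * g2

conf = (5, 9)
print(F1(conf), F2(conf), F3(conf))  # prints: 5 9 2
```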
The crucial point now is the following: there will be an $n$-vector bundle with connection on the configuration space of the string, encoding its quantum dynamics.
For illustration purposes, I shall assume that we have an ordinary vector bundle on configuration space. This is given by a functor $\mathrm{tra} : \mathrm{conf} \to \mathrm{Vect} \,.$
For our example of configuration space above, this is nothing but a representation of the loop groupoid of the group $G$. Simon Willerton teaches us that this is to be thought of as a representation
of the loop group.
There is an obvious and immediate monoidal structure on the category of all such representations: the tensor product inherited from $\mathrm{Vect}$.
So given one representation $\mathrm{tra}_1 : \mathrm{conf} \to \mathrm{Vect}$ and another one $\mathrm{tra}_2 : \mathrm{conf} \to \mathrm{Vect} \,,$ we can form their tensor product $\mathrm{tra}_1 \otimes \mathrm{tra}_2 : \mathrm{conf} \to \mathrm{Vect}$ simply by tensoring the images of $\mathrm{tra}_1$ and $\mathrm{tra}_2$ “pointwise”.
But there is more. Notice that we may also think of the functors $\mathrm{tra}_i$ as “string fields”:
they associate with every configuration of the string a certain “amplitude” (or $n$-amplitude, if you like).
The parameter space $\mathrm{par}_2$ describes how two strings merge into a single one. This induces the “star product” on the corresponding string fields. As follows:
first pull back $\mathrm{tra}_1$ along $F_1^*$ to a field on the configuration space of the trinion. Then pull back $\mathrm{tra}_2$ along $F_2^*$ to the configuration space of the trinion.
Then take the ordinary tensor product of these string fields.
Then push the result forward along $F_3^*$, back to the configuration space of the single string.
In formulas: $\mathrm{tra}_1 \star \mathrm{tra}_2 := \mathrm{pushforward}\;\mathrm{along}\; F_3^* \;\mathrm{of}\; ( (F_1^*)^* \mathrm{tra}_1 \otimes (F_2^*)^* \mathrm{tra}_2 ) \,.$
In our example, this “string field star product” reproduces the fusion product of representations of the loop groupoid.
Fusion = composition of strings
Let’s unwrap the above definition of the star product to see how this works.
First of all, the pullback field $(F_1^*)^* \mathrm{tra}_1 : \mathrm{conf}_2 \stackrel{F_1^*}{\to} \mathrm{conf} \stackrel{\mathrm{tra}_1}{\to} \mathrm{Vect}$ evaluates on a configuration of the
trinion $\array{ \bullet &\stackrel{g_1}{\to}& \bullet &\stackrel{g_2}{\to}& \bullet \\ g' \downarrow\;\; && \;\; \downarrow g' && \;\; \downarrow g' \\ \bullet &\stackrel{\mathrm{Ad}_{g'}g_1}{\to}& \bullet &\stackrel{\mathrm{Ad}_{g'}g_2}{\to}& \bullet }$ by first forgetting the configuration of the second string and only remembering that of the first one $\array{ \bullet &\stackrel{g_1}{\to}& \bullet \\ g' \downarrow\;\; && \;\; \downarrow g' \\ \bullet &\stackrel{\mathrm{Ad}_{g'}g_1}{\to}& \bullet }$ and then evaluating the original string field on that $\array{ \mathrm{tra}_1(g_1) \\ \mathrm{tra}_1(g') \downarrow \;\;\; \\ \mathrm{tra}_1(\mathrm{Ad}_{g'}g_1) } \,.$
Analogously for $(F_2^*)^* \mathrm{tra}_2$. As a result, the tensor product of these two pullbacks is the string field on the configuration space of the trinion which sends the configuration $\array{ \bullet &\stackrel{g_1}{\to}& \bullet &\stackrel{g_2}{\to}& \bullet \\ g' \downarrow\;\; && \;\; \downarrow g' && \;\; \downarrow g' \\ \bullet &\stackrel{\mathrm{Ad}_{g'}g_1}{\to}& \bullet &\stackrel{\mathrm{Ad}_{g'}g_2}{\to}& \bullet }$ to $\array{ \mathrm{tra}_1(g_1)\otimes \mathrm{tra}_2(g_2) \\ \downarrow \\ \mathrm{tra}_1(\mathrm{Ad}_{g'}g_1) \otimes \mathrm{tra}_2(\mathrm{Ad}_{g'}g_2) } \,.$
That is the obvious part. Slightly more subtle is the pushforward. The pushforward along $F_3^*$ achieves, in words, a sum over all field values on configurations of two incoming strings that map to the same configuration of the single outgoing string.
The reasoning behind this is essentially the same that determines the pushforward to a point in general: all contributions from all points that get mapped to the same target point are “added up”.
In our example, this means that contributions from all configurations $\bullet \stackrel{g_1}{\to} \bullet \stackrel{g_2}{\to} \bullet$ with the same value of $g_1 \cdot g_2$ add up. As a result, the value of $\mathrm{tra}_1 \star \mathrm{tra}_2$ on a configuration $\array{ \bullet &\stackrel{g}{\to}& \bullet \\ g' \downarrow\;\; && \;\; \downarrow g' \\ \bullet &\stackrel{\mathrm{Ad}_{g'}g}{\to}& \bullet }$ is $\oplus_{g_1 g_2 = g} \array{ \mathrm{tra}_1(g_1)\otimes \mathrm{tra}_2(g_2) \\ \downarrow \\ \mathrm{tra}_1(\mathrm{Ad}_{g'}g_1) \otimes \mathrm{tra}_2(\mathrm{Ad}_{g'}g_2) } \,.$
This is the fusion product of representations of the loop groupoid.
Alternatively, if we decategorify this once, we get the star product of two string fields.
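Decategorified — replacing each vector space by its dimension — the pushforward formula above is just convolution on the group. A toy check on the cyclic group $\mathbb{Z}/4$, written additively (the "dimension functions" below are made up for illustration):

```python
def star(f, h, n):
    """Decategorified star product on Z/n: sum the products of
    dimensions over all factorizations g1 + g2 = g."""
    return [sum(f[g1] * h[(g - g1) % n] for g1 in range(n))
            for g in range(n)]

# Made-up dimension functions on Z/4
f = [1, 0, 2, 0]
h = [0, 1, 0, 1]
fh = star(f, h, 4)   # the convolution of f and h
```

The total dimension is multiplicative, `sum(fh) == sum(f) * sum(h)`, just as one expects from a tensor product followed by a fiberwise direct sum.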
Posted at January 21, 2007 8:45 PM UTC
Re: Fusion and String Field Star Product
Hi Urs,
This is a nice description of the fusion product… isn’t it basically the same approach though as the `classical’ description of fusion by Freed, using the pair of pants, in Quantum Groups from Path
Integrals, pages 34-36? It would be instructive to compare the two approaches. A relevant exercise (which I haven’t worked through yet) is Exercise 4.34, which precisely makes reference to the kind
of push-forward you are referring to.
By the way, can you explain a bit more about how you got the fundamental groupoid of the trinion by taking the free category on $\{a \rightarrow b \rightarrow c \}$ and then identifying $a, b$ and
$c$? Recall that $\pi_1$(trinion)$= \langle a,b,c : a b c = 1 \rangle$ (I think).
It would be nice if you performed the explicit calculation you’re referring to when you do the push-forward to a point. Looking at your notes on this, I’ve convinced myself that indeed you’re right -
it will work. It would be cool to actually see the calculation though, perhaps even to compare it to the pushforward Freed is talking about in that exercise.
Posted by: Bruce Bartlett on January 22, 2007 1:00 AM
Re: Fusion and String Field Star Product
Presumably your $a$ is Urs’ generator of the loop $a \to b$, your $b$ is Urs’ generator of $b \to c$, and your $c$ is the reverse path (loop) from Urs’ $c$ to $a$. The figure-of-eight and trinion are
homotopy equivalent.
Posted by: David Corfield on January 22, 2007 10:18 AM
Trinion and figure-of-eight
Hi all,
I am finally back online. I’ll answer Bruce’s nice questions now.
I admit that the above entry is too terse, in general. I was writing it on a very slow internet connection and had only that much patience with it. You cannot imagine how long it took me to compile even that terse entry.
Thanks for reading it anyway! :-)
First of all, David is right concerning the interpretation of my notation. I should have mentioned that the category which I cryptically called $\mathrm{par}_2$ represents, in essence, the fundamental groupoid of the trinion.
Or I should have drawn this picture: $\array{ & b & \\ &\nearrow \searrow& \\ a &\rightarrow& c } \,.$
This is my “parameter space of two composed strings” for the open string: one string is stretching from (brane) $a$ to (brane) $b$. The next one from $b$ to $c$. And their composite is the one
stretching from $a$ to $c$.
For the closed string, I identify all three vertices and think of $\array{ & \bullet & \\ &\nearrow \searrow& \\ \bullet &\rightarrow& \bullet } \,.$
Of course, at this point my notation becomes ambiguous, since now the morphisms are no longer identified by their source and target. It would be better if I introduced additional labels for the morphisms: $\array{ & \bullet & \\ &C \nearrow \;\; \searrow A& \\ \bullet &\stackrel{B}{\rightarrow}& \bullet } \,.$
This diagram is still supposed to commute. But now, since these morphisms come back to their sources, we also need to define what it means to compose the strings $A$, $B$ and $C$ with themselves.
In close analogy to what we do for the single closed string, I simply demand that $A$, $B$ and $C$ be invertible and then take the category freely generated from them, subject to the relation
expressed by the above triangle.
The result is in fact (isomorphic to) the fundamental group of the trinion, or, as David points out, the fundamental group of the figure-of-eight.
We can think of $C$ as encircling one of the incoming tubes and of $A$ as encircling the other incoming tube. Then their composite $B = A\circ C$ encircles the outgoing tube (all up to homotopy).
Upon request I will try to draw a picture that makes this obvious. But I guess it is clear now.
Part of the fun of the approach that I am kind of promoting here is that we don’t need to think of $\mathrm{par}_2$ as being the fundamental group of the trinion. The formalism I describe works for
whatever category $\mathrm{par}_2$ that you feel like addressing as a “parameter space” for something.
Posted by: urs on January 22, 2007 11:05 AM
Re: Trinion and figure-of-eight
Upon request I will try to draw a picture that makes this obvious. But I guess it is clear now.
Another thread you guys had going made me think you should really set up a wiki. Subsequent discussions explicitly floated similar ideas. I’m a passive observer, but understand enough to contribute
art work once in a while (time permitting!) :)
Baez was somewhat of a pioneer with SPR and TWF and Urs was somewhat of a pioneer with blogs, the next logical step for you guys is to set up a wiki so that others can contribute.
The questions then is, “Why not start adding stuff to WikiPedia?” for which I do not have a great answer :)
Just something to think about and apologies for the digression.
If this is too clueless of a question, feel free to disregard it, but I’m curious how this might be related to what I thought I understood about the *-product way back when I wrote this:
Posted by: Eric on January 22, 2007 4:04 PM
wiki or not wiki
the next logical step for you guys is to set up a wiki
Possibly that’s quite right. From what I have seen of research wikis (mostly here and here) I deduce two things:
a) I might find it quite interesting contributing to one (or participating in one, or whatever the right verb is)
b) it is quite unlikely that I will be the one who sets it up.
For the moment, this blog here is already pretty good a platform.
What we could do is maybe organize the joint discussion more, like David Corfield used to do for the Klein-2-geometry line of discussion. I should maybe post a summary of what has been discussed,
what has been achieved and what is still open once in a while.
Maybe I am in the process of preparing something along these lines.
Posted by: urs on January 22, 2007 4:27 PM
star product
Eric pointed to (not for the first time :-)
Right. I should think about that!
What I described above is how in certain situations the prescription
take two “string fields” $f$ and $g$ and form a new string field defined on a given string state by applying $f$ and $g$ to all possible ways of decomposing that string into two strings
can be understood as a certain pullback followed by a push-forward.
But in my examples, the “string fields” were really “string 2-fields”: they took values in 1-vector spaces instead of in 0-vector spaces (numbers). This makes their push-forward easier to handle.
For true string fields with values in “numbers” the push-forward will be similar to a path integral and hard to get under control.
But I’ll think about it!
Posted by: urs on January 22, 2007 4:36 PM
The n-Category Wiki (Was: Trinion and figure-of-eight)
The questions then is, “Why not start adding stuff to WikiPedia?” for which I do not have a great answer :)
For explaining established facts, that’s a good idea. But Wikipedia isn’t appropriate for new reasearch. More generally, Wikipedia won’t work in any situation where you want to present a perspective
that isn’t already well documented, if not widely accepted and understood. To describe either your own new ideas, or your own new interpretation of old ideas (and almost everything discussed here is
one or the other), you need your own wiki.
Posted by: Toby Bartels on January 22, 2007 6:37 PM
Re: Fusion and String Field Star Product
Ok thanks I get it now.
Posted by: Bruce Bartlett on January 22, 2007 4:06 PM
Re: Fusion and String Field Star Product
Hi Bruce,
thanks for taking interest. This one was written for you. :-)
It would be nice if you performed the explicit calculation you’re referring to when you do the push-forward to a point.
Right, sure, I should have done that. The only nontrivial thing about my discussion is that push-forward. Here is how it works.
I claimed that the push-forward $\tilde T$ of $T : \mathrm{conf}_2 \to \mathrm{Vect}$ from $\mathrm{conf}_2$ to $\mathrm{conf}$ along $F_3^* : \mathrm{conf}_2 \to \mathrm{conf}$ is the functor $\tilde T$ on $\mathrm{conf}$ that acts as $\tilde T : ( g \stackrel{g'}{\to} \mathrm{Ad}(g')(g) ) \; \mapsto \; \oplus_{g_1 g_2 = g} ( T(g_1,g_2) \stackrel{T(g')}{\to} T(\mathrm{Ad}(g')(g_1),\mathrm{Ad}(g')(g_2)) ) \,.$
To do so, I need to show that there is an isomorphism of Hom-spaces $\mathrm{Hom}((F_3^*)^* f, T) \simeq \mathrm{Hom}(f, \tilde T)$ for all $f : \mathrm{conf} \to \mathrm{Vect} \,.$
But, luckily, this becomes obvious by simply writing it out:
a morphism on the left is a natural transformation $r$ coming from naturality squares of the form $\array{ f(g_1 g_2) &\stackrel{r(g_1,g_2)}{\to}& T(g_1,g_2) \\ f(g')\downarrow \;\; && \;\; \downarrow T(g') \\ f(\mathrm{Ad}(g')(g_1 g_2)) &\stackrel{r(\mathrm{Ad}(g')(g_1), \mathrm{Ad}(g')(g_2))}{\to}& T(\mathrm{Ad}(g')(g_1), \mathrm{Ad}(g')(g_2)) } \,.$
A morphism on the right is a natural transformation $R$ coming from naturality squares of the form $\array{ f(g) &\stackrel{\oplus_{g_1 g_2 = g}r(g_1,g_2)}{\to}& \oplus_{g_1 g_2 = g}T(g_1,g_2) \\ f(g')\downarrow \;\; && \;\; \downarrow \oplus_{g_1 g_2 = g}T(g') \\ f(\mathrm{Ad}(g')(g_1 g_2)) &\stackrel{\oplus_{g_1g_2 =g}r(\mathrm{Ad}(g')(g_1), \mathrm{Ad}(g')(g_2))}{\to}& \oplus_{g_1 g_2 = g}T(\mathrm{Ad}(g')(g_1), \mathrm{Ad}(g')(g_2)) } \,.$ These are clearly in bijection: I have already indicated the required identification by using the letter $r$ in the second diagram.
(Unless I am mixed up, that is. Please don't trust my use of the words "obvious", "clearly", etc. but check everything yourself. If you think I am making a mistake somewhere, please let me know.)
Probably my notation here is not really optimized. Let me try to look at the same situation from a more general point of view with slightly more transparent notation:
Let $p : C \to D$ be a morphism of categories which is surjective on morphisms and such that
a) every morphism in $D$ has precisely one lift for every lift of its source object
b) every morphism in $D$ has precisely one lift for every lift of its target object.
Notice that this is the situation for our configuration spaces of strings on $\Sigma(G)$ above. The morphism $\array{ g \\ g'\downarrow \;\; \\ \mathrm{Ad}(g')(g) }$ has precisely one lift, once we
specify either the lift of the source or the target.
So assume, generally, that we have a morphism $p : C \to D$ as above and want to compute the push-forward along that morphism of a functor
$\mathrm{tra} : C \to \mathrm{Vect} \,.$
This means we need to find $\tilde \mathrm{tra} : D \to \mathrm{Vect}$ such that $\mathrm{Hom}(p^* f, \mathrm{tra}) \simeq \mathrm{Hom}(f, \tilde \mathrm{tra})$ for all $f : D \to \mathrm{Vect}$.
I claim that $\tilde \mathrm{tra} : (d_1 \stackrel{\kappa}{\to} d_2) \mapsto \oplus_{(c_1 \stackrel{\gamma}{\to} c_2) \,|\, p(\gamma)=\kappa} \left( \mathrm{tra}(c_1) \stackrel{\mathrm{tra}(\gamma)}{\to} \mathrm{tra}(c_2) \right) \,.$
Let’s check that.
A morphism on the left of $\mathrm{Hom}(p^* f, \mathrm{tra}) \simeq \mathrm{Hom}(f, \tilde \mathrm{tra})$ is a natural transformation $r$ coming with naturality squares of the form $\array{ f(p(c_1)) &\stackrel{r(c_1)}{\to}& \mathrm{tra}(c_1) \\ f(p(\gamma))\downarrow\;\; && \;\; \downarrow \mathrm{tra}(\gamma) \\ f(p(c_2)) &\stackrel{r(c_2)}{\to}& \mathrm{tra}(c_2) }$ for every morphism $c_1 \stackrel{\gamma}{\to} c_2$ in $C$.
A morphism on the right is a natural transformation coming from naturality squares of the form $\array{ f(d_1) &\stackrel{\oplus_{p(c_1)=d_1}r(c_1)}{\to}& \oplus_{p(c_1)=d_1}\mathrm{tra}(c_1) \\ f(\kappa)\downarrow\;\; && \;\; \downarrow \oplus_{p(\gamma) = \kappa}\mathrm{tra}(\gamma) \\ f(d_2) &\stackrel{\oplus_{p(c_2)=d_2}r(c_2)}{\to}& \oplus_{p(c_2)=d_2}\mathrm{tra}(c_2) }$ for every morphism $d_1 \stackrel{\kappa}{\to} d_2$ in $D$.
Again, I have used the symbol $r$ for my components in the second diagram in order to manifestly indicate how these two spaces of natural transformations are isomorphic.
Posted by: urs on January 22, 2007 11:53 AM
This is a nice description of the fusion product… isn’t it basically the same approach though as the `classical’ description of fusion by Freed, using the pair of pants, in Quantum Groups from
Path Integrals, pages 34-36? It would be instructive to compare the two approaches.
Maybe it is. To some extent, much of what I am thinking about lately, not the least through your influence, is what Freed is actually doing there.
I am fond of the fact that my discussion of fusion above is essentially elementary and based on rather general and clear functorial concepts (as described in section 1.2 here). (But I might be biased.)
I feel I am in line here with the way Simon Willerton reduces lots of apparently sophisticated technology to a clear, elementary idea by identifying the right structure (those reading this who don't know what I am talking about here should follow this link).
To amplify this, I quote this verse from Simon’s paper, beginning of section 3:
We can now see how the general theory of the previous section easily recovers many facts about [X]. Indeed it is only through this point of view that I understand [X].
That’s my stance here. I tried to show how the general theory of functorial $n$-transport recovers facts about fusion. Indeed, it is only through this point of view that I understand fusion.
I think Simon Willerton explains how the loop group of $G$ is best thought of as the action groupoid of $G$ on itself, which I am fond of thinking of as a special case of a functorial way to realize the configuration space of a closed string.
What I tried to add above is how the general idea of two strings merging into a single one then also very nicely explains the fusion product of loop group representations.
A relevant exercise (which I haven’t worked through yet) is Exercise 4.34, which precisely makes reference to the kind of push-forward you are referring to.
Right. Thanks for reminding me of that exercise!
Posted by: urs on January 22, 2007 12:31 PM
Nonlinear Least Squares Curve Fitting
Version 3: Now with 22 functions and better convergence damping
• The x-y pairs should be input one pair per line with the x value (the independent variable) first. The x and y values can be separated by spaces, a tab, or a comma. You can copy and paste from a spreadsheet.
• You need to input rough guesses for the fit parameters. Sometimes just guessing "1" for each parameter will work.
• For fitting functions with a "c" parameter, you can choose to fix the value. This option allows you to use "c" as a parameter without varying the value during least squares adjustment.
• If the calculation doesn't converge,
1. Try using convergence damping.
2. In cases of slow convergence, enter the results from the previous non-converged run as guesses for the next run.
3. Use the "Fit values from initial guesses" output to refine your guesses. In other words, vary your input guesses to decrease the initial residuals.
• Leave the guess for any unused parameter blank.
• For fitting a user input function see John Pezzullo's Nonlinear Least Squares Curve Fitter.
• If the plotting screen doesn't finish running, you need to use a more recent version of the Java plugin for your browser.
• The Java plotting routines are courtesy of Leigh Brookshaw.
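For readers who cannot run the Java applet, the kind of iteration behind a fitter like this can be sketched in a few lines. The sketch below is a plain damped Gauss–Newton loop, not the page's actual algorithm; the `damping` factor scaling each step plays the role of the convergence damping mentioned above, and the exponential model and noise-free data are invented for illustration:

```python
import numpy as np

def gauss_newton(f, jac, x, y, p0, damping=1.0, iters=50):
    """Minimize sum((y - f(x, p))**2) by damped Gauss-Newton steps."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - f(x, p)                      # current residuals
        J = jac(x, p)                        # Jacobian of f w.r.t. p
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + damping * step               # damped update
    return p

# Fit y = a * exp(b * x), starting from the rough guesses a = b = 1
model = lambda x, p: p[0] * np.exp(p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(p[1] * x),
                                    p[0] * x * np.exp(p[1] * x)])
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(0.5 * x)                    # noise-free data: a=2, b=0.5
a, b = gauss_newton(model, jac, x, y, [1.0, 1.0])
```

With noise-free data and the guesses of 1 for each parameter, the loop recovers a = 2 and b = 0.5; reducing `damping` below 1 mimics the page's advice for calculations that fail to converge.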
Colby College Chemistry, T. W. Shattuck, 4/24/2008 | {"url":"http://www.colby.edu/chemistry/PChem/scripts/lsfitpl.html","timestamp":"2014-04-20T05:53:17Z","content_type":null,"content_length":"20296","record_id":"<urn:uuid:3e74baba-524d-48f2-ad4c-22167320f4bd>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00207-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simple set proofs
February 2nd 2011, 12:43 PM #1
Junior Member
Nov 2009
I find these problems easy enough to understand, I just have trouble with proofs that make sense.
For example, if I wanted to prove:
$(C \cup D) \setminus (C\cap D)=(C\setminus D)\cup(D\setminus C)$
Can I write:
$(C \cup D)-(C\cap D)=\{x:x\in C$ or $x\in D\}\wedge \{x\in C:x\not\in D\}\wedge \{x\in D:x\not\in C\}$
$\Rightarrow ( \forall x \in C)(x \in (C\cup D^c ))$ and $( \forall x \in D)(x \in (D\cup C^c ))$
$(C \cup D)\setminus (C\cap D)= (C\cup D^c ) \cup (D\cup C^c ) = (C\setminus D)\cup(D\setminus C)$
If you can use a non 'pick-a-point' proof, here is one.
$\begin{array}{rcl} \left( {C \cup D} \right)\backslash \left( {C \cap D} \right) & \equiv & \left( {C \cup D} \right) \cap \left( {C \cap D} \right)^c \\ & \equiv & \left( {C \cup D} \right) \cap \left( {C^c \cup D^c } \right) \\ & \equiv & \left( {C \cap D^c } \right) \cup \left( {D \cap C^c } \right) \\ & \equiv & \left( {C\backslash D} \right) \cup \left( {D\backslash C} \right) \end{array}$
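As a quick finite sanity check (not a proof), the identity can also be verified exhaustively over all subsets of a small universe, e.g. in Python:

```python
from itertools import combinations

def subsets(s):
    """All subsets of the iterable s."""
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

U = {1, 2, 3}
# (C ∪ D) \ (C ∩ D) == (C \ D) ∪ (D \ C) for every pair of subsets of U
ok = all((C | D) - (C & D) == (C - D) | (D - C)
         for C in subsets(U) for D in subsets(U))
```

This checks all 64 pairs of subsets; of course the algebraic argument is what actually proves the identity for arbitrary sets.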
Yeah, I can do that much, but I think I would be accused of only "showing" it. Is the attempt I made above even an actual proof?
Well a proof is proof. But follow your instructor’s instructions.
$x\in(C\cup D)\setminus (C\cap D)$ means that $x\in C\text{ or }x\in D$, and $x\notin(C\cap D).$
So the last implies $x\notin C\text{ or }x\notin D$.
From there, take cases.
Last edited by Plato; February 2nd 2011 at 03:53 PM.
I am struggling to understand the following.
First, $\land$ usually denotes conjunction, which joins propositions, not sets. (Speaking of this, it's not good to use both $\land$ and "and", as well as - and $\setminus$.) Second, I don't see
how the right-hand side immediately follows from the definition of the left-hand side.
February 2nd 2011, 12:56 PM #2
February 2nd 2011, 01:15 PM #3
Junior Member
Nov 2009
February 2nd 2011, 01:41 PM #4
February 2nd 2011, 02:49 PM #5
Senior Member
Feb 2010
February 3rd 2011, 04:52 AM #6
MHF Contributor
Oct 2009 | {"url":"http://mathhelpforum.com/discrete-math/170033-simple-set-proofs.html","timestamp":"2014-04-16T10:37:29Z","content_type":null,"content_length":"49855","record_id":"<urn:uuid:fbdbd270-0795-4b95-9b5a-dc26e68c184c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00267-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bayesian network theory
From ControlsWiki
Note: Video lecture available for this section!
Authors: Sarah Hebert, Valerie Lee, Matthew Morabito, Jamie Polan
Date Released: 11/29/07
Bayesian network theory can be thought of as a fusion of incidence diagrams and Bayes’ theorem. A Bayesian network, or belief network, shows conditional probability and causality relationships
between variables. The probability of an event occurring given that another event has already occurred is called a conditional probability. The probabilistic model is described qualitatively by a
directed acyclic graph, or DAG. The vertices of the graph, which represent variables, are called nodes. The nodes are represented as circles containing the variable name. The connections between the
nodes are called arcs, or edges. The edges are drawn as arrows between the nodes, and represent dependence between the variables. Therefore, an arc between a pair of nodes indicates that one node is the parent of the other, so no independence assumption is made between them. Independence assumptions are implied in Bayesian networks by the absence of a link. Here is a sample DAG:
The node where the arc originates is called the parent, while the node where the arc ends is called the child. In this case, A is a parent of C, and C is a child of A. Nodes that can be reached from other nodes are called descendants. Nodes that lead a path to a specific node are called ancestors. For example, C and E are descendants of A, and A and C are ancestors of E. There are no loops in Bayesian networks, since no child can be its own ancestor or descendant. Bayesian networks will generally also include a set of probability tables, stating the probabilities for the true/false values of the variables. The main point of Bayesian networks is to allow for probabilistic inference to be performed. This means that the probability of each value of a node in the Bayesian network can be computed when the values of the other variables are known. Also, because independence among the variables is easy to recognize since conditional relationships are clearly defined by a graph edge, not all joint probabilities in the Bayesian system need to be calculated in order to make a decision.
Joint Probability Distributions
Joint probability is defined as the probability that a series of events will happen concurrently. The joint probability of several variables can be calculated from the product of individual
probabilities of the nodes:
$\mathrm P(X_1, \ldots, X_n) = \prod_{i=1}^n \mathrm P(X_i \mid \operatorname{parents}(X_i))\,$
Using the sample graph from the introduction, the joint probability distribution is:
$\ P(A,B,C,D,E)={P(A)P(B)P(C|A,B)P(D|B)P(E|C)}$
If a node does not have a parent, like node A, its probability distribution is described as unconditional. Otherwise, the local probability distribution of the node is conditional on other nodes.
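The factorization for the sample DAG can be coded directly: each node gets a probability table conditioned on its parents (the numbers below are made up for illustration), and the joint probability is the product of the local values. Summing that product over all assignments must give 1:

```python
from itertools import product

# Made-up probability tables for binary variables A..E in the sample DAG:
# P(A,B,C,D,E) = P(A) P(B) P(C|A,B) P(D|B) P(E|C)
pA = {True: 0.3, False: 0.7}
pB = {True: 0.6, False: 0.4}
pC_true = {(True, True): 0.9, (True, False): 0.5,
           (False, True): 0.4, (False, False): 0.1}
pD_true = {True: 0.8, False: 0.2}
pE_true = {True: 0.7, False: 0.3}

def bernoulli(p_true, value):
    """P(X = value) from the probability that X is true."""
    return p_true if value else 1.0 - p_true

def joint(a, b, c, d, e):
    return (pA[a] * pB[b] * bernoulli(pC_true[(a, b)], c)
            * bernoulli(pD_true[b], d) * bernoulli(pE_true[c], e))

# The joint distribution must sum to 1 over all 2**5 assignments
total = sum(joint(*v) for v in product([True, False], repeat=5))
```

Because each local table is normalized, the product distribution is automatically normalized as well, which is why only the small conditional tables need to be specified.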
Equivalence Classes
Each Bayesian network belongs to a group of Bayesian networks known as an equivalence class. In a given equivalence class, all of the Bayesian networks can be described by the same joint probability statement.
As an example, the following set of Bayesian networks comprises an equivalence class:
The causality implied by each of these networks is different, but the same joint probability statement describes them all. The following equations demonstrate how each network can be created from the
same original joint probability statement:
Network 1
P(A,B,C) = P(A) * P(B | A) * P(C | B)
Network 2
P(A,B,C) = P(A) * P(B | A) * P(C | B)
$P(A,B,C) = P(A)*\frac{P(A|B)*P(B)}{P(A)}*P(C|B)$
P(A,B,C) = P(A | B) * P(B) * P(C | B)
Network 3
Starting now from the statement for Network 2
P(A,B,C) = P(A | B) * P(B) * P(C | B)
$P(A,B,C) = P(A|B)*P(B)*\frac{P(B|C)*P(C)}{P(B)}$
P(A,B,C) = P(A | B) * P(B | C) * P(C)
All substitutions are based on Bayes' rule.
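Note that the three factorizations always agree with one another — each reduces to P(A|B)·P(B)·P(C|B) — whatever the underlying joint distribution is. A quick numerical check with a made-up joint over three binary variables:

```python
from itertools import product

vals = [0, 1]
# Made-up joint distribution over (A, B, C); the eight entries sum to 1
probs = [0.10, 0.05, 0.15, 0.10, 0.05, 0.20, 0.05, 0.30]
joint = dict(zip(product(vals, repeat=3), probs))

def P(event):
    """Probability of the event defined by the predicate event(a, b, c)."""
    return sum(p for (a, b, c), p in joint.items() if event(a, b, c))

agree = True
for a, b, c in product(vals, repeat=3):
    pa  = P(lambda x, y, z: x == a)
    pb  = P(lambda x, y, z: y == b)
    pc  = P(lambda x, y, z: z == c)
    pab = P(lambda x, y, z: x == a and y == b)
    pbc = P(lambda x, y, z: y == b and z == c)
    net1 = pa * (pab / pa) * (pbc / pb)    # P(A) P(B|A) P(C|B)
    net2 = (pab / pb) * pb * (pbc / pb)    # P(A|B) P(B) P(C|B)
    net3 = (pab / pb) * (pbc / pc) * pc    # P(A|B) P(B|C) P(C)
    agree = agree and abs(net1 - net2) < 1e-12 and abs(net2 - net3) < 1e-12
```

Agreement among the factorizations is exactly why observational data alone cannot pick out one causal structure from the equivalence class.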
The existence of equivalence classes demonstrates that causality cannot be determined from random observations. A controlled study – which holds some variables constant while varying others to
determine each one’s effect – is necessary to determine the exact causal relationship, or Bayesian network, of a set of variables.
Bayes' Theorem
Bayes' Theorem, developed by the Rev. Thomas Bayes, an 18th century mathematician and theologian, is expressed as:
$P(H|E,c)=\frac{P(H|c)\cdot P(E|H,c)}{P(E|c)}$
where we can update our belief in hypothesis H given the additional evidence E and the background information c. The left-hand term, P(H|E,c) is known as the "posterior probability," or the
probability of H after considering the effect of E given c. The term P(H|c) is called the "prior probability" of H given c alone. The term P(E|H,c) is called the "likelihood" and gives the
probability of the evidence assuming the hypothesis H and the background information c is true. Finally, the last term P(E|c) is called the "expectedness", or how expected the evidence is given only
c. It is independent of H and can be regarded as a marginalizing or scaling factor.
It can be rewritten as
$P(E|c)=\sum_{i} P(E|H_i,c)\cdot P(H_i|c)$
where $i$ denotes a specific hypothesis $H_i$, and the summation is taken over a set of hypotheses which are mutually exclusive and exhaustive (their prior probabilities sum to 1).
It is important to note that all of these probabilities are conditional. They specify the degree of belief in some proposition or propositions based on the assumption that some other propositions are
true. As such, the theory has no meaning without prior determination of the probability of these previous propositions.
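The theorem and the expectedness sum translate directly into code. The priors and likelihoods below are invented for two mutually exclusive, exhaustive hypotheses:

```python
# Made-up priors P(H_i | c) and likelihoods P(E | H_i, c)
prior = {"H1": 0.2, "H2": 0.8}
likelihood = {"H1": 0.9, "H2": 0.3}

# Expectedness: P(E|c) = sum_i P(E|H_i,c) * P(H_i|c)
expectedness = sum(likelihood[h] * prior[h] for h in prior)

# Posterior: P(H|E,c) = P(H|c) * P(E|H,c) / P(E|c)
posterior = {h: prior[h] * likelihood[h] / expectedness for h in prior}
```

Dividing by the expectedness is what makes the posteriors sum to 1, which is the sense in which it is merely a scaling factor.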
Bayes' Factor
In cases when you are unsure about the causal relationships between the variables and the outcome when building a model, you can use Bayes Factor to verify which model describes your data better and
hence determine the extent to which a parameter affects the outcome of the probability. After using Bayes' Theorem to build two models with different variable dependence relationships and evaluating
the probability of the models based on the data, one can calculate Bayes' Factor using the general equation below:
$BF=\frac{p(model 1|data)}{p(model 2|data)}=\frac{\frac{p(data|model 1)p(model 1)}{p(data)}}{\frac{p(data|model 2)p(model 2)}{p(data)}}=\frac{p(data|model 1)}{p(data|model 2)}$
The basic intuition is that prior and posterior information are combined in a ratio that provides evidence in favor of one model versus the other. The two models in the Bayes' Factor equation represent two different states of the variables which influence the data. For example, if the data being studied are temperature measurements taken from multiple sensors, Model 1 could be the probability that all sensors are functioning normally, and Model 2 the probability that all sensors have failed. Bayes' Factors are very flexible, allowing multiple hypotheses to be compared simultaneously.
BF values near 1 indicate that the two models are nearly identical, and BF values far from 1 indicate that the probability of one model occurring is greater than the other. Specifically, if BF is > 1, model 1 describes your data better than model 2. If BF is < 1, model 2 describes the data better than model 1. In our example, a Bayes' factor of 5 would indicate that given the temperature data, the probability of the sensors functioning normally is five times greater than the probability that the sensors failed. A table showing the scale of evidence using Bayes Factor can be found below:
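In code, the sensor example reduces to a ratio of the two model likelihoods (the likelihood values here are invented for illustration):

```python
# Invented likelihoods of the observed temperature data under each model
p_data_given_working = 0.05   # model 1: all sensors functioning normally
p_data_given_failed  = 0.01   # model 2: all sensors failed

# With equal priors, the prior terms cancel and BF is the likelihood ratio
bf = p_data_given_working / p_data_given_failed

def favors_model_1(bf):
    """BF > 1 favors model 1; BF < 1 favors model 2."""
    return bf > 1.0
```

Here the data are five times more probable under the working-sensors model, matching the interpretation in the text.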
Although Bayes' Factors are rather intuitive and easy to understand, as a practical matter they are often quite difficult to calculate. There are alternatives to Bayes Factor for model assessment
such as the Bayesian Information Criterion (BIC).
The formula for the BIC is: ${-2 \cdot \ln{p(x|k)}} \approx \mathrm{BIC} = {-2 \cdot \ln{L} + k \ln(n)} \,.$
• x = the observed data
• n = the number of data points in x, the number of observations, or equivalently, the sample size
• k = the number of free parameters to be estimated (if the estimated model is a linear regression, k is the number of regressors, including the constant)
• p(x|k) = the likelihood of the observed data given the number of parameters
• L = the maximized value of the likelihood function for the estimated model
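The formula translates directly; comparing two hypothetical fits shows how the k·ln(n) term penalizes the extra parameters of a model that only fits slightly better (the log-likelihood values below are made up):

```python
import math

def bic(log_likelihood, k, n):
    """BIC = -2 ln L + k ln n (a lower value is preferred)."""
    return -2.0 * log_likelihood + k * math.log(n)

# Hypothetical fits to the same n = 50 observations
bic_small = bic(log_likelihood=-100.0, k=2, n=50)  # simpler model
bic_big   = bic(log_likelihood=-99.0,  k=5, n=50)  # better fit, more params
```

Despite the slightly higher likelihood of the second model, the penalty on its three extra parameters makes the simpler model the preferred one here.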
This statistic can also be used for non-nested models.
Advantages and Limitations of Bayesian Networks
The advantages of Bayesian Networks are as follows:
• Bayesian Networks visually represent all the relationships between the variables in the system with connecting arcs.
• It is easy to recognize the dependence and independence between various nodes.
• Bayesian networks can handle situations where the data set is incomplete since the model accounts for dependencies between all variables.
• Bayesian networks can map scenarios where it is not feasible/practical to measure all variables due to system constraints (costs, not enough sensors, etc.)
• Help to model noisy systems.
• Can be used for any system model - from all known parameters to no known parameters.
The limitations of Bayesian Networks are as follows:
• All branches must be calculated in order to calculate the probability of any one branch.
• The quality of the results of the network depends on the quality of the prior beliefs or model. A variable is only a part of a Bayesian network if you believe that the system depends on it.
• Calculation of the network is NP-hard (nondeterministic polynomial-time hard), so it is very difficult and possibly costly.
• Calculations and probabilities using Bayes' rule and marginalization can become complex and are often characterized by subtle wording, and care must be taken to calculate them properly.
Inference is defined as the process of deriving logical conclusions based on premises known or assumed to be true. One strength of Bayesian networks is the ability for inference, which in this case
involves the probabilities of unobserved variables in the system. When observed variables are known to be in one state, probabilities of other variables will have different values than the generic
case. Let us take a simple example system, a television. The probability of a television being on while people are home is much higher than the probability of that television being on when no one is
home. If the current state of the television is known, the probability of people being home can be calculated based on this information. This is difficult to do by hand, but software programs that
use Bayesian networks incorporate inference. One such software program, Genie, is introduced in Learning and analyzing Bayesian networks with Genie.
Marginalization of a parameter in a system may be necessary in a few instances:
• If the data for one parameter (P1) depends on another, and data for the independent parameter is not provided.
• If a probability table is given in which P1 is dependent upon two other system parameters, but you are only interested in the effect of one of the parameters on P1.
Imagine a system in which a certain reactant (R) is mixed in a CSTR with a catalyst (C) and results in a certain product yield (Y). Three reactant concentrations are being tested (A, B, and C) with
two different catalysts (1 and 2) to determine which combination will give the best product yield. The conditional probability statement looks as such:
$P(R,C,Y) = P(R)P(C)P(Y|R,C) \qquad$
The probability table is set up such that the probability of certain product yield is dependent upon the reactant concentration and the catalyst type. You want to predict the probability of a certain
product yield given only data you have for catalyst type. The concentration of reactant must be marginalized out of P(Y|R,C) to determine the probability of the product yield without knowing the
reactant concentration. Thus, you need to determine P(Y|C). The marginalization equation is shown below:
$P(Y|C) = \sum_{i}P(Y|R_i,C)P(R_i) \qquad$
where the summation is taken over reactant concentrations A, B, and C.
The following table describes the probability that a sample is tested with reactant concentration A, B, or C:
This next table describes the probability of observing a yield - High (H), Medium (M), or Low (L) - given the reactant concentration and catalyst type:
The final two tables show the calculation for the marginalized probabilities of yield given a catalyst type using the marginalization equation:
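The marginalization can be sketched in a few lines of code; the probability values here are hypothetical and only illustrate the mechanics of computing P(Y|C) for one catalyst:

```python
# Hypothetical numbers (assumed for illustration, not the original tables):
p_R = {"A": 0.3, "B": 0.5, "C": 0.2}        # P(reactant concentration)
p_Y_given_RC = {                            # P(yield | R, catalyst = 1)
    "A": {"H": 0.6, "M": 0.3, "L": 0.1},
    "B": {"H": 0.4, "M": 0.4, "L": 0.2},
    "C": {"H": 0.2, "M": 0.5, "L": 0.3},
}

# P(Y | C=1) = sum_i P(Y | R_i, C=1) P(R_i), summing out the reactant
p_Y_given_C1 = {
    y: sum(p_Y_given_RC[r][y] * p_R[r] for r in p_R)
    for y in ("H", "M", "L")
}
print(p_Y_given_C1)
```

The resulting distribution over H, M, and L still sums to 1, as any properly marginalized distribution must.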
Dynamic Bayesian Networks
The static Bayesian network only works with variable results from a single slice of time. As a result, a static Bayesian network does not work for analyzing an evolving system that changes over time.
Below is an example of a static Bayesian network for an oil wildcatter:
An oil wildcatter must decide either to drill or not. However, he needs to determine if the hole is dry, wet or soaking. The wildcatter could take seismic soundings, which help determine the
geological structure at the site. The soundings will disclose whether the terrain below has no structure, which is bad, or open structure that's okay, or closed structure, which is really good. As
you can see this example does not depend on time.
A dynamic Bayesian network (DBN) is an extension of a Bayesian network. It is used to describe how variables influence each other over time, based on a model derived from past data. A DBN can be thought
of as a Markov chain model with many states, or as a discrete-time approximation of a differential equation with time steps.
An example of a DBN, which is shown below, is a frictionless ball bouncing between two barriers. At each time step the position and velocity changes.
An important distinction must be made between DBNs and Markov chains. A DBN shows how variables affect each other over time, whereas a Markov chain shows how the state of the entire system evolves
over time. Thus, a DBN will illustrate the probabilities of one variable changing another, and how each of the individual variables will change over time. A Markov chain looks at the state of a
system, which incorporates the state of each individual variable making up the system, and shows the probabilities of the system changing states over time. A Markov chain therefore incorporates all
of the variables present in the system when looking at how said system evolves over time. Markov chains can be derived from DBNs, but each network represents different values and probabilities.
There are several advantages to creating a DBN. Once the network has been established between the time steps, a model can be developed based on this data. This model can then be used to predict
future responses by the system. The ability to predict future responses can also be used to explore different alternatives for the system and determine which alternative gives the desired results.
DBNs also provide a suitable environment for model predictive controllers and can be useful in creating the controller. Another advantage of DBNs is that they can be used to create a general
network that does not depend on time. Once the DBN has been established for the different time steps, the network can be collapsed to remove the time component and show the general relationships
between the variables.
A DBN is made up of interconnected time slices of static Bayesian networks. The nodes at a certain time can affect the nodes at a future time slice, but the nodes in the future cannot affect the
nodes in the previous time slice. The causal links across the time slices are referred to as temporal links; the benefit of this is that it gives a DBN an unambiguous direction of causality.
For the convenience of computation, the variables in a DBN are assumed to take on a finite number of states. Based on this, conditional probability tables can be constructed to
express the probabilities of each child node given the states of its parent nodes.
Node C from the sample DAG above would have a conditional probability table specifying the conditional distribution P(C|A,B). Since A and B have no parents, only the prior probability
distributions P(A) and P(B) are required. Assuming all the variables are binary, variable A can take on only A1 and A2, variable B can take on only B1 and B2, and variable C can take on only C1 and C2.
Below is an example of a conditional probability table of node C.
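A conditional probability table like this is easy to sketch in code; the numbers below are made up for illustration (the original table is not reproduced here), but the structural constraint is real: each row, indexed by one combination of parent states, must sum to 1.

```python
# Hypothetical CPT for binary node C with binary parents A and B.
# Each row is the distribution P(C | A, B) for one parent-state pair.
cpt_C = {
    ("A1", "B1"): {"C1": 0.90, "C2": 0.10},
    ("A1", "B2"): {"C1": 0.60, "C2": 0.40},
    ("A2", "B1"): {"C1": 0.30, "C2": 0.70},
    ("A2", "B2"): {"C1": 0.05, "C2": 0.95},
}

# Sanity check: every row of a CPT must sum to 1
for parents, row in cpt_C.items():
    assert abs(sum(row.values()) - 1.0) < 1e-9, parents
print("all rows sum to 1")
```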
The conditional probabilities between observation nodes are defined using a sensor node. This sensor node gives conditional probability distribution of the sensor reading given the actual state of
system. It embodies the accuracy of the system.
The nature of a DBN usually results in a large and complex network. Thus, to calculate a DBN, the outcome of the old time slice is summarized into probabilities that are used for the later slice. This provides
a moving time frame and forms a DBN. When creating a DBN, temporal relationships between slices must be taken into account. Below is an implementation chart for a DBN.
The graph below is a representation of a DBN. It represents the variables at two different time steps, t-1 and t. t-1, shown on the left, is the initial distribution of the variables. The next time
step, t, is dependent on time step t-1. It is important to note that some of these variables could be hidden.
Where Ao, Bo, Co are initial states and Ai, Bi, Ci are future states where i=1,2,3,…,n.
The probability distribution for this DBN at time t is…
If the process continues for a larger number of time steps, the graph will take the shape below.
Its joint probability distribution will be...
DBN’s are useful in industry because they can model processes where information is incomplete, or there is uncertainty. Limitations of DBN’s are that they do not always accurately predict outcomes
and they can have long computational times.
The above illustrations are all examples of "unrolled" networks. An unrolled dynamic Bayesian network shows how each variable at one time step affects the variables at the next time step. A helpful
way to think of unrolled networks is as visual representations of numerical solutions to differential equations. If you know the states of the variables at one point in time, and you know how the
variables change with time, then you can predict what the state of the variables will be at any point in time, similar to using Euler's method to solve a differential equation. A dynamic Bayesian
network can also be represented as a "rolled" network. A rolled network, unlike an unrolled network, shows each variable's effect on each other variable in one chart. For example, if you had an
unrolled network of the form:
then you could represent that same network in a rolled form as:
If you examine each network, you will see that each one provides the exact same information as to how the variables all affect each other.
Bayesian networks are used when the probability that one event will occur depends on the probability that a previous event occurred. This is very important in industry because in many processes,
variables have conditional relationships, meaning they are not independent of each other. Bayesian networks are used to model processes in a wide variety of applications. Some of these include…
1. Gene regulatory networks
2. Protein structure
3. Diagnosis of illness
4. Document classification
5. Image processing
6. Data fusion
7. Decision support systems
8. Gathering data for deep space exploration
9. Artificial Intelligence
10. Prediction of weather
11. On a more familiar basis, Bayesian networks are used by the friendly Microsoft office assistant to elicit better search results.
12. Another use of Bayesian networks arises in the credit industry where an individual may be assigned a credit score based on age, salary, credit history, etc. This is fed to a Bayesian network
which allows credit card companies to decide whether the person's credit score merits a favorable application.
Summary: A General Solution Algorithm for the Perplexed
Given a Bayesian network problem and no idea where to start, just relax and try following the steps outlined below.
Step 1: What does my network look like? Which nodes are parents (and are they conditional or unconditional) and which are children? How would I model this as an incidence diagram and what conditional
probability statement defines it?
Step 2: Given my network connectivity, how do I tabulate the probabilities for each state of my node(s) of interest? For a single column of probabilities (parent node), does the column sum to 1? For
an array of probabilities (child node) with multiple possible states defined by the given combination of parent node states, do the rows sum to 1?
Step 3: Given a set of observed data (usually states of a child node of interest), and probability tables (aka truth tables), what problem am I solving?
• Probability of observing the particular configuration of data, order unimportant
Solution: Apply multinomial distribution
• Probability of observing the particular configuration of data in that particular order
Solution: Compute the probability of each individual observation, then take the product of these
• Probability of observing the data in a child node defined by 2 (or n) parents given only 1 (or n-1) of the parent nodes
Solution: Apply marginalization to eliminate other parent node
• Probability of a parent node being a particular state given data in the form of observed states of the child node
Solution: Apply Bayes' Theorem
Solve for Bayes' Factor to remove incalculable denominator terms generated by applying Bayes’ Theorem, and to compare
the parent node state of interest to a base case, yielding a more meaningful data point
Step 4: Have I solved the problem? Or is there another level of complexity? Is the problem a combination of the problem variations listed in step 3?
• If problem is solved, call it a day and go take a baklava break
• If problem is not solved, return to step 3
Worked out Example 1
A multipurpose alarm in a plant can be tripped in 2 ways. The alarm goes off if the reactor temperature is too high or the pressure in a storage tank is too high. The reactor temperature may be too
high because of a low cooling water flow (1% probability), or because of an unknown side reaction (5% probability). The storage tank pressure might be too high because of a blockage in the outlet
piping (2% probability). If the cooling water flow is low and there is a side reaction, then there is a 99% probability that a high temperature will occur. If the cooling water flow is normal and
there is no side reaction, there is only a 3% probability a high temperature will occur. If there is a pipe blockage, a high pressure will always occur. If there is no pipe blockage, a high pressure
will occur only 2% of the time.
Create a DAG for the situation above, and set up the probability tables needed to model this system. All the values required to fill in these tables are not given, so fill in what is possible and
then indicate what further values need to be found.
The following probability tables describe the system, where CFL = Cold water flow is low, SR = Side reaction present, PB = Pipe is blocked, HT = High temperature, HP = High pressure, A = Alarm. T
stands for true, or the event did occur. F stands for false, or the event did not occur. A blank space in a table indicates an area where further information is needed.
An advantage of using DAGs becomes apparent. For example, you can see that there is only a 3% chance of a high-temperature situation given that the cold water flow is not low and there is
no side reaction. However, as soon as the cold water becomes low, there is at least a 94% chance of a high temperature alarm, regardless of whether or not a side reaction occurs. Conversely,
the presence of a side reaction alone creates only a 90% chance of triggering the alarm. From the above probability calculations, one can estimate the relative dominance of cause-and-effect triggers. For example,
you could now reasonably conjecture that low cooling water is a more serious event than a side reaction.
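Using the probabilities given in the problem statement, and taking the 94% and 90% figures from the discussion as the two remaining high-temperature entries (an assumption, since the full table is not shown), the overall probability of a high temperature can be found by marginalizing over the two causes:

```python
# Priors given in the problem statement
p_cfl, p_sr = 0.01, 0.05             # P(cold flow low), P(side reaction)

# P(high temp | CFL, SR); the 0.94 and 0.90 entries are assumed
# from the 94% and 90% figures quoted in the discussion above
p_ht = {
    (True, True): 0.99,
    (True, False): 0.94,
    (False, True): 0.90,
    (False, False): 0.03,
}

def prob(event, p):
    return p if event else 1 - p

# Marginalize over both causes: P(HT) = sum over CFL, SR
p_high_temp = sum(
    prob(cfl, p_cfl) * prob(sr, p_sr) * p_ht[(cfl, sr)]
    for cfl in (True, False)
    for sr in (True, False)
)
print(round(p_high_temp, 4))  # ≈ 0.0822
```

So under these numbers, a high-temperature event occurs roughly 8% of the time overall.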
Worked out Example 2
The DAG given below depicts a different model in which the alarm will ring when activated by high temperature and/or coolant water pipe leakage in the reactor.
The table below shows the truth table and probabilities with regards to the different situations that might occur in this model.
The joint probability function is:
P(A,HT,CP) = P(A | HT,CP)P(HT | CP)P(CP)
A great feature of using the Bayesian network is that the probability of any situation can be calculated. In this example, write the statement that will describe the probability that the temperature
is high in the reactor given that the alarm sounded.
$\mathrm P(\mathit{HT}=T \mid \mathit{A}=T) =\frac{\mathrm P(\mathit{A}=T,\mathit{HT}=T)}{\mathrm P(\mathit{A}=T)} =\frac{\sum_{\mathit{CP} \in \{T, F\}}\mathrm P(\mathit{A}=T,\mathit{HT}=T,\mathit{CP})}{\sum_{\mathit{HT}, \mathit{CP} \in \{T, F\}} \mathrm P(\mathit{A}=T,\mathit{HT},\mathit{CP})}$
Worked out Example 3
Certain medications and traumas can both cause blood clots. A blood clot can lead to a stroke, heart attack, or it could simply dissolve on its own and have no health implications.
a. Please create a DAG that represents this situation.
b. The following probability information is given where M = medication, T = trauma, BC = blood clot, HA = heart attack, N = nothing, and S = stroke. T stands for true, or this event did occur. F
stands for false, or this event did not occur.
What is the probability that a person will develop a blood clot as a result of both medication and trauma, and then have no medical implications?
Answer: P(N, BC, M, T) = P(N | BC) P(BC | M, T) P(M) P(T) = (0.25) (0.95) (0.2) (0.05) = 0.2375%
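The chain-rule product above can be verified directly:

```python
# Chain-rule factorization from the DAG:
# P(N, BC, M, T) = P(N | BC) * P(BC | M, T) * P(M) * P(T)
p_n_given_bc = 0.25      # P(nothing | blood clot)
p_bc_given_mt = 0.95     # P(blood clot | medication, trauma)
p_m, p_t = 0.20, 0.05    # P(medication), P(trauma)

p = p_n_given_bc * p_bc_given_mt * p_m * p_t
print(f"{p:.6f}")  # 0.002375, i.e. 0.2375%
```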
Worked out Example 4
Suppose you were given the following data.
│Catalyst │p(Catalyst) │
│A │0.40 │
│B │0.60 │
│Temperature│Catalyst│P(Yield = H)│P(Yield = M)│P(Yield = L)│
│H │A │0.51 │0.08 │0.41 │
│H │B │0.30 │0.20 │0.50 │
│M │A │0.71 │0.09 │0.20 │
│M │B │0.92 │0.05 │0.03 │
│L │A │0.21 │0.40 │0.39 │
│L │B │0.12 │0.57 │0.31 │
How would you use this data to find p(yield|temp) for 9 observations with the following descriptions?
│# Times Observed │Temperature │Yield│
│4x │H │H │
│2x │M │L │
│3x │L │H │
A DAG of this system is below. Answer: Marginalization! The state of the catalyst can be marginalized out using the following equation:
$p(yield|temp) = \sum_{i=A,B} p(yield|temp,cat_i)\,p(cat_i)$
The two tables above can be merged to form a new table with marginalization:
│Temperature│ P(Yield = H) │ P(Yield = M) │ P(Yield = L) │
│H │0.51*0.4 + 0.3*0.6 = 0.384 │0.08*0.4 + 0.2*0.6 = 0.152 │0.41*0.4 + 0.5*0.6 = 0.464 │
│M │0.71*0.4 + 0.92*0.6 = 0.836│0.09*0.4 + 0.05*0.6 = 0.066│0.20*0.4 + 0.03*0.6 = 0.098│
│L │0.21*0.4 + 0.12*0.6 = 0.156│0.40*0.4 + 0.57*0.6 = 0.502│0.39*0.4 + 0.31*0.6 = 0.342│
$p(yield|temp)= \frac{9!}{4!2!3!}*(0.384^4*0.098^2*0.156^3)= 0.0009989$
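The marginalization and the multinomial calculation can be verified with a short script using the table values:

```python
from math import factorial

p_cat = {"A": 0.4, "B": 0.6}
# P(yield | temperature, catalyst) from the table above
p_yield = {
    ("H", "A"): {"H": 0.51, "M": 0.08, "L": 0.41},
    ("H", "B"): {"H": 0.30, "M": 0.20, "L": 0.50},
    ("M", "A"): {"H": 0.71, "M": 0.09, "L": 0.20},
    ("M", "B"): {"H": 0.92, "M": 0.05, "L": 0.03},
    ("L", "A"): {"H": 0.21, "M": 0.40, "L": 0.39},
    ("L", "B"): {"H": 0.12, "M": 0.57, "L": 0.31},
}

def p_yield_given_temp(temp, y):
    """Marginalize the catalyst out of P(yield | temp, cat)."""
    return sum(p_yield[(temp, c)][y] * p_cat[c] for c in p_cat)

# 9 observations: 4x (temp H, yield H), 2x (temp M, yield L), 3x (temp L, yield H)
coeff = factorial(9) // (factorial(4) * factorial(2) * factorial(3))  # = 1260
p = coeff * (p_yield_given_temp("H", "H") ** 4
             * p_yield_given_temp("M", "L") ** 2
             * p_yield_given_temp("L", "H") ** 3)
print(round(p, 7))  # ≈ 0.0009989
```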
Worked Out Example 5
A very useful application of Bayesian networks is determining whether a sensor is more likely to be working or broken based on current readings, using the Bayesian factor discussed earlier. Suppose there is a
large vat in your process with large turbulent flow that makes it difficult to accurately measure the level within the vat. To help, you use two different level sensors positioned around the tank that
read whether the level is high, normal, or low. When you first set up the sensor system you obtained the following probabilities describing the noise of a sensor operating normally.
│ Tank Level (L) │p(S=High)│p(S=Normal)│p(S=Low)│
│Above Operating Level Range │0.80 │0.15 │0.05 │
│Within Operating Level Range │0.15 │0.75 │0.10 │
│Below Operating Level Range │0.10 │0.20 │0.70 │
When the sensor fails there is an equal chance of the sensor reporting high, normal, or low regardless of the actual state of the tank. The conditional probability table for a failed sensor then looks like this:
│ Tank Level (L) │p(S=High)│p(S=Normal)│p(S=Low)│
│Above Operating Level Range │0.33 │0.33 │0.33 │
│Within Operating Level Range │0.33 │0.33 │0.33 │
│Below Operating Level Range │0.33 │0.33 │0.33 │
From previous data you have determined that when the process is acting normally, as you believe it is now, the tank will be operating above the level range 10% of the time, within the level range 85%
of the time, and below the level range 5% of the time. Looking at the last 10 observations (shown below) you suspect that sensor 1 may be broken. Use Bayesian factors to determine the probability of
sensor 1 being broken compared to both sensors working.
│Sensor 1 │Sensor 2 │
│High │Normal │
│Normal │Normal │
│Normal │Normal │
│High │High │
│Low │Normal │
│Low │Normal │
│Low │Low │
│High │Normal │
│High │High │
│Normal │Normal │
From the definition of the Bayesian Factor we get.
$BF=\frac{p(model 1|data)}{p(model 2|data)}=\frac{\frac{p(data|model 1)p(model 1)}{p(data)}}{\frac{p(data|model 2)p(model 2)}{p(data)}}=\frac{p(data|model 1)}{p(data|model 2)}$
For this data set we will use the probability of getting the data given the model.
$\frac{p(data|model 1)}{p(data|model 2)}$
If we consider model 1 to be both sensors working and model 2 to be sensor 1 being broken, we can find the BF rather easily.
p(data | model 1) = p(s1 data | model 1)*p(s2 data | model 1)
For both sensors working properly:
The probability of each sensor reading must be calculated first. For a working sensor, this is found by summing, over all tank levels, the probability of the tank being at that level multiplied by the
probability of getting a specific reading at that level:
p(s1 = high | model 1) = (.10)*(.80) + (.85)*(.15) + (.05)*(.10) = 0.2125
p(s1 = normal | model 1) = (.10)*(.15) + (.85)*(.75) + (.05)*(.20) = 0.6625
p(s1 = low | model 1) = (.10)*(.05) + (.85)*(.10) + (.05)*(.70) = 0.125
Probability of getting sensor 1's readings (assuming it is working normally):
p(s1data | model1) = (.2125)^4 * (.6625)^3 * (.125)^3 = 1.158 * 10^-6
The probability of getting each reading for sensor 2 will be the same, since it is also working normally:
p(s2data | model1) = (.2125)^2 * (.6625)^7 * (.125)^1 = 3.162 * 10^-4
p(data | model1) = (1.158 * 10^-6) * (3.162 * 10^-4) = 3.662 * 10^-10
For sensor 1 being broken:
The probability of getting each reading for sensor 1 will now be 0.33.
p(s1data | model2) = (0.33)^4 * (0.33)^3 * (0.33)^3 = (0.33)^10 = 1.532 * 10^-5
The probability of getting the readings for sensor 2 will be the same as in model 1, since both models assume sensor 2 is acting normally.
p(data | model2) = (1.532 * 10^-5) * (3.162 * 10^-4) = 4.844 * 10^-9
$BF = \frac{3.662 * 10^{-10}}{4.844 * 10^{-9}}=0.0756$
A BF well below 1/3 (here below 1/10) indicates strong evidence that model 2, in which sensor 1 is broken, is correct.
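As a check on the arithmetic, the likelihoods and the Bayes factor can be recomputed with a short script. The reading counts are taken from the observation table (sensor 1: 4 High, 3 Normal, 3 Low), and since both models treat sensor 2 as working, its terms cancel in the ratio:

```python
p_level = {"above": 0.10, "within": 0.85, "below": 0.05}
p_read = {  # P(reading | tank level) for a working sensor
    "above":  {"high": 0.80, "normal": 0.15, "low": 0.05},
    "within": {"high": 0.15, "normal": 0.75, "low": 0.10},
    "below":  {"high": 0.10, "normal": 0.20, "low": 0.70},
}

def p_reading_working(r):
    # Marginalize the (unobserved) tank level out of the reading probability
    return sum(p_level[lvl] * p_read[lvl][r] for lvl in p_level)

def likelihood(counts, p_of):
    out = 1.0
    for reading, n in counts.items():
        out *= p_of(reading) ** n
    return out

s1 = {"high": 4, "normal": 3, "low": 3}
lik_s1_working = likelihood(s1, p_reading_working)
lik_s1_broken = likelihood(s1, lambda r: 0.33)  # broken: uniform readings

# Sensor 2 terms are identical in both models and cancel in the ratio
bf = lik_s1_working / lik_s1_broken
print(round(bf, 4))  # ≈ 0.0756
```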
True or False?
1. Is the other name for Bayesian Network the "Believer" Network?
2. The nodes in a Bayesian network can represent any kind of variable (latent variable, measured parameter, hypothesis..) and are not restricted to random variables.
3. Bayesian network theory is used in part of the modeling process for artificial intelligence.
1. F
2. T
3. T
Sage's Corner
{http://controls.engin.umich.edu/wiki/index.php/Image:Bayesian_Network_Theory.ppt Unnarrated Slides}
Example Adapted from <http://www.dcs.qmw.ac.uk/~norman/BBNs/BBNs.htm>
{http://controls.engin.umich.edu/wiki/index.php/Image:Bayesian.ppt Unnarrated Slides}
• Ben-Gal, Irad. "Bayesian Networks." Department of Industrial Engineering, Tel-Aviv University. <http://www.eng.tau.ac.il/~bengal/BN.pdf>
• Charniak, Eugene (1991). "Bayesian Networks without Tears", AI Magazine, p. 8.
• Friedman, Nir, Linial, Michal, Nachman, Iftach, and Pe’er, Dana. “Using Bayesian Networks to Analyze Expression Data.” JOURNAL OF COMPUTATIONAL BIOLOGY, Vol. 7, # 3/4, 2000, Mary Ann Liebert,
Inc. pp. 601–620 <http://www.sysbio.harvard.edu/csb/ramanathan_lab/iftach/papers/FLNP1Full.pdf>
• Neil, Martin, Fenton, Norman, and Tailor, Manesh. “Using Bayesian Networks to Model Expected and Unexpected Operational Losses.” Risk Analysis, Vol. 25, No. 4, 2005 <http://www.dcs.qmul.ac.uk/ | {"url":"https://controls.engin.umich.edu/wiki/index.php/Bayesian_network_theory","timestamp":"2014-04-21T12:40:22Z","content_type":null,"content_length":"63570","record_id":"<urn:uuid:7ed5dbd3-9ed8-46ee-b922-319de7c031f8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00249-ip-10-147-4-33.ec2.internal.warc.gz"} |
:: What Is The Next Lottery Winning Numbers : How To Win The Lotto With
Scratch Cards
What Is The Next Lottery Winning Numbers
One Minute Set Up. Ken Silver\'s Has Won 9 Out Of 10 Games For the Last 20 Years And Banked Millions. This Lotto System Is Amazing!
What Is The Next Lottery Winning Numbers
Ca Lottery Mega Millions Results Lottery Lottery Lottery Mega Millions Winning Numbers For Today California Official Lottery Numbers Ca Lotteruy Can U Play The Lotto Online Calottery Mega Million
Winning Numbers Check Lottery Tickets Lotto Number Winner Cal State Lottery Results Play The Lotto Online Uk Lotttery Winning The Fl Lotto How To Win The Fl Lotto Calottery Winning Numbers Mega
Millions Lottery Online Usa Fantasy 5 Winning Numbers For California Nattional Loterry Mega Lotto In California National Lottery Number Today S Lottery Number Find Lottery Numbers Winning The Lotto
Multiple Times Where Can I Get The Latest Lotto Results Ca Replay Lotto Lotto Results For The 29th September Mega Lottery Results California Winning Number In Lotto Lottery Result Uk Replay Ca Do
Lotto Programs Work Story Of Lottery Winners Www loto Results What Time Is The Super Lotto Drawing In California Why Does My Lotto Ticket Only Have 6 Numbers How To Win The Lotto Using Statistics The
Lotto Jackpot Lottery Result Ca What Does The Bible Say About Lotto Lotto Online What Is The Next Lottery Winning Numbers.
How To Win The Lotto With Scratch Cards
What Is The Next Lottery Winning Numbers
How to What Is The Next Lottery Winning Numbers.
How does a What Is The Next Lottery Winning Numbers.
How do What Is The Next Lottery Winning Numbers.
Does a What Is The Next Lottery Winning Numbers.
Do a What Is The Next Lottery Winning Numbers.
Does my What Is The Next Lottery Winning Numbers.
Is a What Is The Next Lottery Winning Numbers.
Is My What Is The Next Lottery Winning Numbers.
Can What Is The Next Lottery Winning Numbers.
What is What Is The Next Lottery Winning Numbers.
When What Is The Next Lottery Winning Numbers.
Are What Is The Next Lottery Winning Numbers.
Why Do What Is The Next Lottery Winning Numbers.
What is a What Is The Next Lottery Winning Numbers.
What Is The Next Lottery Winning Numbers 2010.
What Is The Next Lottery Winning Numbers 2011.
What Is The Next Lottery Winning Numbers 2012.
What Is The Next Lottery Winning Numbers 2013.
What Is The Next Lottery Winning Numbers 2014.
How To Win The Lotto With Scratch Cards - One Minute Set Up. Ken Silver\'s Has Won 9 Out Of 10 Games For the Last 20 Years And Banked Millions. This Lotto System Is Amazing!
Relate What Is The Next Lottery Winning Numbers : How Does The Lotto Draw Work,The Starwin Asia Lotto,Do Online Lotto Tickets Ever Win,Do California Lotto Numbers Have To Be In Order,Lottery In All
States,Usa Mega Lottery,Best Lottery Tickets To Buy,What Is The Next Lottery Winning Numbers
What Is The Next Lottery Winning Numbers : How To Win The Lotto With Scratch Cards Rating: | {"url":"http://howtowinthelotto72.blogspot.com/2012/09/what-is-next-lottery-winning-numbers.html","timestamp":"2014-04-23T08:35:35Z","content_type":null,"content_length":"70887","record_id":"<urn:uuid:46f9d397-7b69-4af7-9b28-7e8f62b972bd>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00520-ip-10-147-4-33.ec2.internal.warc.gz"} |
Characteristic Equation
Did you find the characteristic equation? Remember that if you have an equation p(x)=0 where p(x) is a polynomial, and you factor p into (x-a)(x-b)... then the roots are a,b,.... Here you're doing
the same thing backwards. Once you get the characteristic equation, the D.E. should be easy to find. You would normally have the D.E. and write the characteristic equation, but either way is easy. If
you can't get the answer, let us know where you get stuck. | {"url":"http://mathhelpforum.com/differential-equations/131765-characteristic-equation.html","timestamp":"2014-04-17T21:59:02Z","content_type":null,"content_length":"32131","record_id":"<urn:uuid:b5f9b1ff-0caa-435f-9467-741b943c72aa>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00524-ip-10-147-4-33.ec2.internal.warc.gz"} |
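As a concrete illustration (not from the original thread): suppose the roots are 2 and 3. Working backwards, the characteristic equation is (r - 2)(r - 3) = r^2 - 5r + 6 = 0, which corresponds to the ODE y'' - 5y' + 6y = 0. A two-line check:

```python
# Characteristic polynomial built backwards from roots r = 2 and r = 3:
# (r - 2)(r - 3) = r^2 - 5r + 6, i.e. the ODE y'' - 5y' + 6y = 0
def char_poly(r):
    return r**2 - 5 * r + 6

# Both roots satisfy the characteristic equation
print(char_poly(2), char_poly(3))  # 0 0
```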
Math Forum Discussions - Re: Matheology § 222 Back to the roots
Date: Feb 27, 2013 3:05 PM
Author: fom
Subject: Re: Matheology § 222 Back to the roots
On 2/27/2013 1:21 PM, WM wrote:
> On 27 Feb., 14:50, William Hughes <wpihug...@gmail.com> wrote:
>> On Feb 27, 1:04 pm, WM <mueck...@rz.fh-augsburg.de> wrote:
>>> On 26 Feb., 22:54, William Hughes <wpihug...@gmail.com> wrote:
>>>> Now no one can stop you using whatever
>>>> terminology you want. However, do not
>>>> expect that you can use idiotic terminology
>>>> without being considered an idiot.
>>> But one can use such arguing without being considered as such?
>> Nope.
>>> Remember: Your point of view requires, what you often have emphasized:
>>> There are all FIS of d in the list, but there is no line containing
>>> them. This implies that they are distributed among several lines
>> or that m, the index of line they are in is
>> a variable natural number and
>> that it is silly to
>> say that there is one line that contains
>> every FIS when this "one" line has a variable
>> as index.
> Do you prefer your argument?
His is not an argument. It is the received paradigm.
> Or do you think it is not better than
> mine?
Allowing, for the moment, that that to which you refer may be
characterized as an argument, there is no issue of "better".
You are the dissenter and have the burden of proof.
> If you think your opinion is better, more logical, than mine, why do
> you think so?
Because it respects objective knowledge in the form of principles
that may ground demonstrations in a deductive calculus.
> We can prove that there is no knowable natural number of a line that
> contains all FIS of d.
One may question how you prove what is unknowable.
And, by your logic, if you happen to die before I do, you
cannot prove that I am unable to successfully and completely
enumerate omega. Even worse, you cannot even prove that I
am unable to successfully and completely enumerate each of
the n-huge cardinals.
Your methodology is able to prove only that which you can
imagine since you reject all principles intended to ground
the demonstration of facticity.
> Ok, we speak of a variable that is not fixed.
Variables have fixed grammatical relations and determinate purport.
They are singular terms presumed capable of standing for objects
that could be ostensively named. Since you cannot materially
produce the number '1', you can no better "name" or "find" the
objects you imagine than could Cantor.
> We can prove that there is no knowable natural number
One may, once again, question how you prove what is unknowable.
> that is the
> first one of the infinite set that you claim. What is the advantage?
It is the very advantage that you claim but do not properly
Every findable number is findable.
In your version:
Every findable number is findable... | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8431199","timestamp":"2014-04-20T21:37:55Z","content_type":null,"content_length":"4693","record_id":"<urn:uuid:08a22f6b-6584-4fea-a9a8-989841accd29>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00205-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: TRUE and FALSE values in the relational lattice
From: Vadim Tropashko <vadimtro_invalid_at_yahoo.com> Date: Thu, 21 Jun 2007 10:26:19 -0700 Message-ID: <1182446779.883211.130730@e9g2000prf.googlegroups.com>
On Jun 21, 3:14 am, Jan Hidders <hidd..._at_gmail.com> wrote:
> I would assume the following operations: (in yet another notation)
> - E * E : natural join
> - E + E : generalized union
Eh, 3 different people, 3 different notations:-)
> - {()} : the relation with empty header and a single empty tuple
> - [A,B,..D] : the empty relation with header {A, B, ..., D} (possibly
> the empty set)
This is a nice proposal. How about small letters for attributes and capital letters for relations?
One of my (yet not realized) goals is to get rid of the attributes completely. Every time we have some relational lattice identity which mentions attribute x, the x can always be thought of as a
composite attribute. And in relational terms a composite attribute is either an empty relation with header x, or a cartesian product of domains. Therefore an attribute x, be it composite or not,
is just a relation x which satisfies the identity
x /\ 00 = x
(empty relation constraint), or
X \/ 11 = X
(full domain constraint). Therefore, perhaps the distinction between attributes and relations is not so clear cut to warrant capital and small letter notation?
> - A=B : the equality relation with header {A, B}
> - A=c : the relation with header {A} and tuple (A:c)
Single tuple element is a relation R such that
R /\ 00 != R
and there is no relation X such that
R < X < R /\ 00
where the "<" operation is understood to be the strict lattice order: "greater than (and not equal)"
Defining the equality relation is an interesting venture by itself. I can suggest a few identities. Assuming R(x,y)
(((R /\ (y=y') ) \/ [x,y'] ) /\ (y=y') ) \/ [x,y] = R
Again, for this to become a true axion it better be transformed into attribute agnostic form, something like this:
(( (R /\ E) \/ (X /\ Y') ) /\ E ) \/ (X /\ Y) = R X /\ 00 = X
Y /\ 00 = Y
Y' /\ 00 = Y'
E /\ 00 = X /\ Y /\ 00
R /\ 00 = X /\ Y /\ 00
X \/ Y = 00
X \/ Y' = 00
Y \/ Y' = 00
Clearly, the straightforward translation produced such an unwieldy mess that there has to be a more concise set of axioms.
> - R : a relation with name R (and known header)
Yes! Again, this is how an algebraic identity should look: relation variables (no attributes in parentheses!), constant relation symbols, and algebraic operations.
> That would be roughly equivalent with unions of conjunctive queries,
> for which equivalence is known to be decidable. Adding the set
> difference or division would make it relationally complete (so, FOL)
> and make this undecidable.
> Actually, a (sound and complete) axiomatization would be a small but
> publishable result. It doesn't seem that impossible either. You could
> start with showing that you have enough rules to bring the thing into
> some union normal form, and then show that you also have enough rules
> to show a homomorfisme over the conjuncts. Quite interesting indeed.
> Anyone interested in thinking about a paper on that?
This is nice research program. If the other exchange taught any lessons, then spelling out every minute detail would be one of them:-) Could you please clarify (with an example!) what "homomorfism
over the conjuncts" you have in mind? Received on Thu Jun 21 2007 - 12:26:19 CDT | {"url":"http://www.orafaq.com/usenet/comp.databases.theory/2007/06/21/0199.htm","timestamp":"2014-04-18T14:56:32Z","content_type":null,"content_length":"12076","record_id":"<urn:uuid:aec44ee8-46a8-435c-aeb4-0d3197b19240>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00411-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lessons In Electric Circuits -- Volume II
Chapter 14: TRANSMISSION LINES
Early in my explorations of electricity, I came across a length of coaxial cable with the label “50 ohms” printed along its outer sheath. (Figure below) Now, coaxial cable is a two-conductor cable
made of a single conductor surrounded by a braided wire jacket, with a plastic insulating material separating the two. As such, the outer (braided) conductor completely surrounds the inner (single
wire) conductor, the two conductors insulated from each other for the entire length of the cable. This type of cabling is often used to conduct weak (low-amplitude) voltage signals, due to its
excellent ability to shield such signals from external interference.
Coaxial cable construction.
I was mystified by the “50 ohms” label on this coaxial cable. How could two conductors, insulated from each other by a relatively thick layer of plastic, have 50 ohms of resistance between them?
Measuring resistance between the outer and inner conductors with my ohmmeter, I found it to be infinite (open-circuit), just as I would have expected from two insulated conductors. Measuring each of
the two conductors' resistances from one end of the cable to the other indicated nearly zero ohms of resistance: again, exactly what I would have expected from continuous, unbroken lengths of wire.
Nowhere was I able to measure 50 Ω of resistance on this cable, regardless of which points I connected my ohmmeter between.
What I didn't understand at the time was the cable's response to short-duration voltage “pulses” and high-frequency AC signals. Continuous direct current (DC) -- such as that used by my ohmmeter to
check the cable's resistance -- shows the two conductors to be completely insulated from each other, with nearly infinite resistance between the two. However, due to the effects of capacitance and
inductance distributed along the length of the cable, the cable's response to rapidly-changing voltages is such that it acts as a finite impedance, drawing current proportional to an applied voltage.
What we would normally dismiss as being just a pair of wires becomes an important circuit element in the presence of transient and high-frequency AC signals, with characteristic properties all its
own. When expressing such properties, we refer to the wire pair as a transmission line.
This chapter explores transmission line behavior. Many transmission line effects do not appear in significant measure in AC circuits of powerline frequency (50 or 60 Hz), or in continuous DC
circuits, and so we haven't had to concern ourselves with them in our study of electric circuits thus far. However, in circuits involving high frequencies and/or extremely long cable lengths, the
effects are very significant. Practical applications of transmission line effects abound in radio-frequency (“RF”) communication circuitry, including computer networks, and in low-frequency circuits
subject to voltage transients (“surges”) such as lightning strikes on power lines.
Suppose we had a simple one-battery, one-lamp circuit controlled by a switch. When the switch is closed, the lamp immediately lights. When the switch is opened, the lamp immediately darkens: (Figure below)
Lamp appears to immediately respond to switch.
Actually, an incandescent lamp takes a short time for its filament to warm up and emit light after receiving an electric current of sufficient magnitude to power it, so the effect is not instant.
However, what I'd like to focus on is the immediacy of the electric current itself, not the response time of the lamp filament. For all practical purposes, the effect of switch action is instant at
the lamp's location. Although electrons move through wires very slowly, the overall effect of electrons pushing against each other happens at the speed of light (approximately 186,000 miles per second).
What would happen, though, if the wires carrying power to the lamp were 186,000 miles long? Since we know the effects of electricity do have a finite speed (albeit very fast), a set of very long
wires should introduce a time delay into the circuit, delaying the switch's action on the lamp: (Figure below)
At the speed of light, lamp responds after 1 second.
Assuming no warm-up time for the lamp filament, and no resistance along the 372,000 mile length of both wires, the lamp would light up approximately one second after the switch closure. Although the
construction and operation of superconducting wires 372,000 miles in length would pose enormous practical problems, it is theoretically possible, and so this “thought experiment” is valid. When the
switch is opened again, the lamp will continue to receive power for one second of time after the switch opens, then it will de-energize.
One way of envisioning this is to imagine the electrons within a conductor as rail cars in a train: linked together with a small amount of “slack” or “play” in the couplings. When one rail car
(electron) begins to move, it pushes on the one ahead of it and pulls on the one behind it, but not before the slack is relieved from the couplings. Thus, motion is transferred from car to car (from
electron to electron) at a maximum velocity limited by the coupling slack, resulting in a much faster transfer of motion from the left end of the train (circuit) to the right end than the actual
speed of the cars (electrons): (Figure below)
Motion is transmitted successively from one car to the next.
Another analogy, perhaps more fitting for the subject of transmission lines, is that of waves in water. Suppose a flat, wall-shaped object is suddenly moved horizontally along the surface of water,
so as to produce a wave ahead of it. The wave will travel as water molecules bump into each other, transferring wave motion along the water's surface far faster than the water molecules themselves
are actually traveling: (Figure below)
Wave motion in water.
Likewise, electron motion “coupling” travels approximately at the speed of light, although the electrons themselves don't move that quickly. In a very long circuit, this “coupling” speed would become
noticeable to a human observer in the form of a short time delay between switch action and lamp action.
• REVIEW:
• In an electric circuit, the effects of electron motion travel approximately at the speed of light, although electrons within the conductors do not travel anywhere near that velocity.
Suppose, though, that we had a set of parallel wires of infinite length, with no lamp at the end. What would happen when we close the switch? Being that there is no longer a load at the end of the
wires, this circuit is open. Would there be no current at all? (Figure below)
Driving an infinite transmission line.
Despite being able to avoid wire resistance through the use of superconductors in this “thought experiment,” we cannot eliminate capacitance along the wires' lengths. Any pair of conductors separated
by an insulating medium creates capacitance between those conductors: (Figure below)
Equivalent circuit showing stray capacitance between conductors.
Voltage applied between two conductors creates an electric field between those conductors. Energy is stored in this electric field, and this storage of energy results in an opposition to change in
voltage. The reaction of a capacitance against changes in voltage is described by the equation i = C(de/dt), which tells us that current will be drawn proportional to the voltage's rate of change
over time. Thus, when the switch is closed, the capacitance between conductors will react against the sudden voltage increase by charging up and drawing current from the source. According to the
equation, an instant rise in applied voltage (as produced by perfect switch closure) gives rise to an infinite charging current.
However, the current drawn by a pair of parallel wires will not be infinite, because there exists series impedance along the wires due to inductance. (Figure below) Remember that current through any
conductor develops a magnetic field of proportional magnitude. Energy is stored in this magnetic field, (Figure below) and this storage of energy results in an opposition to change in current. Each
wire develops a magnetic field as it carries charging current for the capacitance between the wires, and in so doing drops voltage according to the inductance equation e = L(di/dt). This voltage drop
limits the voltage rate-of-change across the distributed capacitance, preventing the current from ever reaching an infinite magnitude:
Equivalent circuit showing stray capacitance and inductance.
Voltage charges capacitance, current charges inductance.
Because the electrons in the two wires transfer motion to and from each other at nearly the speed of light, the “wave front” of voltage and current change will propagate down the length of the wires
at that same velocity, resulting in the distributed capacitance and inductance progressively charging to full voltage and current, respectively, like this: (Figures below, below, below, below)
Uncharged transmission line.
Begin wave propagation.
Continue wave propagation.
Propagate at speed of light.
The end result of these interactions is a constant current of limited magnitude through the battery source. Since the wires are infinitely long, their distributed capacitance will never fully charge
to the source voltage, and their distributed inductance will never allow unlimited charging current. In other words, this pair of wires will draw current from the source so long as the switch is
closed, behaving as a constant load. No longer are the wires merely conductors of electrical current and carriers of voltage, but now constitute a circuit component in themselves, with unique
characteristics. No longer are the two wires merely a pair of conductors, but rather a transmission line.
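The progressive charging just described can be imitated numerically by modeling the line as a ladder of series inductors and shunt capacitors, updated with the same two equations, e = L(di/dt) and i = C(de/dt). The sketch below uses made-up component values (not from the text), chosen so that each section's √(L/C) comes out to 50 Ω:

```python
# Lumped-element sketch of a transmission line as a ladder of series
# inductors and shunt capacitors. All values are hypothetical, chosen
# so that sqrt(L_SEC/C_SEC) = 50 ohms.
L_SEC = 250e-9   # inductance per section, henries (assumed)
C_SEC = 100e-12  # capacitance per section, farads (assumed)
N = 50           # number of LC sections
DT = 1e-9        # time step, seconds; per-section delay is sqrt(L*C) = 5 ns

v = [0.0] * N    # node voltages
i = [0.0] * N    # series-inductor currents

for _ in range(100):  # 100 ns of simulated time
    # e = L(di/dt): each inductor current responds to the voltage across it
    for k in range(N):
        v_left = 1.0 if k == 0 else v[k - 1]   # 1 V step source at near end
        i[k] += DT / L_SEC * (v_left - v[k])
    # i = C(de/dt): each node capacitance charges from the net current into it
    for k in range(N):
        i_out = i[k + 1] if k < N - 1 else 0.0  # open-circuited far end
        v[k] += DT / C_SEC * (i[k] - i_out)

# After 100 ns the wave front has covered roughly 20 sections:
# the near end is charged while the far end has not yet seen the wave.
print(v[0], v[-1])
```

While the wave is in flight, the source current settles to roughly v/Z0 = 1 V / 50 Ω = 20 mA, mimicking the "constant load" behavior the text describes for the infinite line.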
As a constant load, the transmission line's response to applied voltage is resistive rather than reactive, despite being comprised purely of inductance and capacitance (assuming superconducting wires
with zero resistance). We can say this because there is no difference from the battery's perspective between a resistor eternally dissipating energy and an infinite transmission line eternally
absorbing energy. The impedance (resistance) of this line in ohms is called the characteristic impedance, and it is fixed by the geometry of the two conductors. For a parallel-wire line with air
insulation, the characteristic impedance may be calculated as such:

Z0 = 276 log10(d/r)

where d is the distance between conductor centers and r is the conductor radius.

If the transmission line is coaxial in construction, the characteristic impedance follows a different equation:

Z0 = 138 log10(d1/d2)

where d1 is the inside diameter of the outer conductor and d2 is the outside diameter of the inner conductor.
In both equations, identical units of measurement must be used in both terms of the fraction. If the insulating material is other than air (or a vacuum), both the characteristic impedance and the
propagation velocity will be affected. The ratio of a transmission line's true propagation velocity and the speed of light in a vacuum is called the velocity factor of that line.
Velocity factor is purely a factor of the insulating material's relative permittivity (otherwise known as its dielectric constant), defined as the ratio of a material's electric field permittivity to
that of a pure vacuum. The velocity factor of any cable type -- coaxial or otherwise -- may be calculated quite simply by the following formula:

Velocity factor = v/c = 1/sqrt(k)

where v is the actual propagation velocity, c is the speed of light in a vacuum, and k is the relative permittivity of the insulation.
Characteristic impedance is also known as natural impedance, and it refers to the equivalent resistance of a transmission line if it were infinitely long, owing to distributed capacitance and
inductance as the voltage and current “waves” propagate along its length at a propagation velocity equal to some large fraction of light speed.
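These relations can be wrapped in small helper functions. The 276 and 138 constants are the standard air-dielectric, base-10-logarithm forms; the original figures containing these equations are not reproduced here, so treat the exact forms as standard-reference values rather than direct quotations from this text:

```python
from math import log10, sqrt

def z0_parallel_wire(d, r):
    """Characteristic impedance of an air-insulated parallel-wire line.
    d = center-to-center conductor spacing, r = conductor radius
    (same units for both, per the text's note on units)."""
    return 276.0 * log10(d / r)

def z0_coaxial(d1, d2):
    """Characteristic impedance of an air-insulated coaxial line.
    d1 = inside diameter of the outer conductor,
    d2 = outside diameter of the inner conductor (same units)."""
    return 138.0 * log10(d1 / d2)

def velocity_factor(k):
    """Velocity factor from the relative permittivity k of the insulation."""
    return 1.0 / sqrt(k)
```

For example, polyethylene insulation (k roughly 2.3, a typical assumed value) gives velocity_factor(2.3) of about 0.66, matching the factor used in this chapter's cable examples.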
It can be seen in either of the first two equations that a transmission line's characteristic impedance (Z[0]) increases as the conductor spacing increases. If the conductors are moved away from each
other, the distributed capacitance will decrease (greater spacing between capacitor “plates”), and the distributed inductance will increase (less cancellation of the two opposing magnetic fields).
Less parallel capacitance and more series inductance results in a smaller current drawn by the line for any given amount of applied voltage, which by definition is a greater impedance. Conversely,
bringing the two conductors closer together increases the parallel capacitance and decreases the series inductance. Both changes result in a larger current drawn for a given applied voltage, equating
to a lesser impedance.
Barring any dissipative effects such as dielectric “leakage” and conductor resistance, the characteristic impedance of a transmission line is equal to the square root of the ratio of the line's
inductance per unit length divided by the line's capacitance per unit length:

Z0 = sqrt(L/C)

where L and C are the line's inductance and capacitance per unit length, respectively.
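As a numerical check of the square-root relation: figures of roughly 100 pF and 250 nH per meter are often quoted for RG-58/U-style cable (typical datasheet-style values, not taken from this text), and they land on the familiar 50 Ω:

```python
from math import sqrt

L_per_m = 250e-9   # H/m, assumed typical for RG-58/U-style coax
C_per_m = 100e-12  # F/m, assumed typical for RG-58/U-style coax

z0 = sqrt(L_per_m / C_per_m)
print(z0)  # -> 50.0
```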
• REVIEW:
• A transmission line is a pair of parallel conductors exhibiting certain characteristics due to distributed capacitance and inductance along its length.
• When a voltage is suddenly applied to one end of a transmission line, both a voltage “wave” and a current “wave” propagate along the line at nearly light speed.
• If a DC voltage is applied to one end of an infinitely long transmission line, the line will draw current from the DC source as though it were a constant resistance.
• The characteristic impedance (Z[0]) of a transmission line is the resistance it would exhibit if it were infinite in length. This is entirely different from leakage resistance of the dielectric
separating the two conductors, and the metallic resistance of the wires themselves. Characteristic impedance is purely a function of the capacitance and inductance distributed along the line's
length, and would exist even if the dielectric were perfect (infinite parallel resistance) and the wires superconducting (zero series resistance).
• Velocity factor is a fractional value relating a transmission line's propagation speed to the speed of light in a vacuum. Values range between 0.66 and 0.80 for typical two-wire lines and coaxial
cables. For any cable type, it is equal to the reciprocal (1/x) of the square root of the relative permittivity of the cable's insulation.
A transmission line of infinite length is an interesting abstraction, but physically impossible. All transmission lines have some finite length, and as such do not behave precisely the same as an
infinite line. If that piece of 50 Ω “RG-58/U” cable I measured with an ohmmeter years ago had been infinitely long, I actually would have been able to measure 50 Ω worth of resistance between the
inner and outer conductors. But it was not infinite in length, and so it measured as “open” (infinite resistance).
Nonetheless, the characteristic impedance rating of a transmission line is important even when dealing with limited lengths. An older term for characteristic impedance, which I like for its
descriptive value, is surge impedance. If a transient voltage (a “surge”) is applied to the end of a transmission line, the line will draw a current proportional to the surge voltage magnitude
divided by the line's surge impedance (I=E/Z). This simple, Ohm's Law relationship between current and voltage will hold true for a limited period of time, but not indefinitely.
If the end of a transmission line is open-circuited -- that is, left unconnected -- the current “wave” propagating down the line's length will have to stop at the end, since electrons cannot flow
where there is no continuing path. This abrupt cessation of current at the line's end causes a “pile-up” to occur along the length of the transmission line, as the electrons successively find no
place to go. Imagine a train traveling down the track with slack between the rail car couplings: if the lead car suddenly crashes into an immovable barricade, it will come to a stop, causing the one
behind it to come to a stop as soon as the first coupling slack is taken up, which causes the next rail car to stop as soon as the next coupling's slack is taken up, and so on until the last rail car
stops. The train does not come to a halt together, but rather in sequence from first car to last: (Figure below)
Reflected wave.
A signal propagating from the source-end of a transmission line to the load-end is called an incident wave. The propagation of a signal from load-end to source-end (such as what happened in this
example with current encountering the end of an open-circuited transmission line) is called a reflected wave.
When this electron “pile-up” propagates back to the battery, current at the battery ceases, and the line acts as a simple open circuit. All this happens very quickly for transmission lines of
reasonable length, and so an ohmmeter measurement of the line never reveals the brief time period where the line actually behaves as a resistor. For a mile-long cable with a velocity factor of 0.66
(signal propagation velocity is 66% of light speed, or 122,760 miles per second), it takes only 1/122,760 of a second (8.146 microseconds) for a signal to travel from one end to the other. For the
current signal to reach the line's end and “reflect” back to the source, the round-trip time is twice this figure, or 16.292 µs.
High-speed measurement instruments are able to detect this transit time from source to line-end and back to source again, and may be used for the purpose of determining a cable's length. This
technique may also be used for determining the presence and location of a break in one or both of the cable's conductors, since a current will “reflect” off the wire break just as it will off the end
of an open-circuited cable. Instruments designed for such purposes are called time-domain reflectometers (TDRs). The basic principle is identical to that of sonar range-finding: generating a sound
pulse and measuring the time it takes for the echo to return.
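Both the mile-long-cable timing worked out above and the TDR technique reduce to distance = velocity × time. A sketch, with hypothetical helper names, using the text's 0.66 velocity-factor example:

```python
C_MILES_PER_S = 186_000.0  # speed of light, miles per second

def one_way_delay(length_miles, vf):
    """Time for a signal to traverse the line once."""
    return length_miles / (C_MILES_PER_S * vf)

def tdr_length(echo_time_s, vf):
    """Cable length inferred from a TDR round-trip echo time."""
    return C_MILES_PER_S * vf * echo_time_s / 2.0

t = one_way_delay(1.0, 0.66)      # ~8.146 microseconds, as in the text
length = tdr_length(2 * t, 0.66)  # recovers the 1-mile cable length
```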
A similar phenomenon takes place if the end of a transmission line is short-circuited: when the voltage wave-front reaches the end of the line, it is reflected back to the source, because voltage
cannot exist between two electrically common points. When this reflected wave reaches the source, the source sees the entire transmission line as a short-circuit. Again, this happens as quickly as
the signal can propagate round-trip down and up the transmission line at whatever velocity allowed by the dielectric material between the line's conductors.
A simple experiment illustrates the phenomenon of wave reflection in transmission lines. Take a length of rope by one end and “whip” it with a rapid up-and-down motion of the wrist. A wave may be
seen traveling down the rope's length until it dissipates entirely due to friction: (Figure below)
Lossy transmission line.
This is analogous to a long transmission line with internal loss: the signal steadily grows weaker as it propagates down the line's length, never reflecting back to the source. However, if the far
end of the rope is secured to a solid object at a point prior to the incident wave's total dissipation, a second wave will be reflected back to your hand: (Figure below)
Reflected wave.
Usually, the purpose of a transmission line is to convey electrical energy from one point to another. Even if the signals are intended for information only, and not to power some significant load
device, the ideal situation would be for all of the original signal energy to travel from the source to the load, and then be completely absorbed or dissipated by the load for maximum signal-to-noise
ratio. Thus, “loss” along the length of a transmission line is undesirable, as are reflected waves, since reflected energy is energy not delivered to the end device.
Reflections may be eliminated from the transmission line if the load's impedance exactly equals the characteristic (“surge”) impedance of the line. For example, a 50 Ω coaxial cable that is either
open-circuited or short-circuited will reflect all of the incident energy back to the source. However, if a 50 Ω resistor is connected at the end of the cable, there will be no reflected energy, all
signal energy being dissipated by the resistor.
This makes perfect sense if we return to our hypothetical, infinite-length transmission line example. A transmission line of 50 Ω characteristic impedance and infinite length behaves exactly like a
50 Ω resistance as measured from one end. (Figure below) If we cut this line to some finite length, it will behave as a 50 Ω resistor to a constant source of DC voltage for a brief time, but then
behave like an open- or a short-circuit, depending on what condition we leave the cut end of the line: open (Figure below) or shorted. (Figure below) However, if we terminate the line with a 50 Ω
resistor, the line will once again behave as a 50 Ω resistor, indefinitely: the same as if it were of infinite length again: (Figure below)
Infinite transmission line looks like resistor.
One mile transmission line.
Shorted transmission line.
Line terminated in characteristic impedance.
In essence, a terminating resistor matching the natural impedance of the transmission line makes the line “appear” infinitely long from the perspective of the source, because a resistor has the
ability to eternally dissipate energy in the same way a transmission line of infinite length is able to eternally absorb energy.
Reflected waves will also manifest if the terminating resistance isn't precisely equal to the characteristic impedance of the transmission line, not just if the line is left unconnected (open) or
jumpered (shorted). Though the energy reflection will not be total with a terminating impedance of slight mismatch, it will be partial. This happens whether or not the terminating resistance is
greater or less than the line's characteristic impedance.
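The degree of partial reflection described here is conventionally quantified by the reflection coefficient, (Z_load - Z0)/(Z_load + Z0) -- a standard transmission-line result that this chapter does not derive, shown here as a sketch:

```python
def reflection_coefficient(z_load, z0):
    """Fraction of the incident voltage wave reflected at the load.
    +1 for an open circuit (full, in-phase reflection),
    -1 for a short circuit (full, inverted reflection),
     0 for a load exactly matching the line (no reflection)."""
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(50.0, 50.0))  # 0.0  (matched: no reflection)
print(reflection_coefficient(0.0, 50.0))   # -1.0 (shorted end)
print(reflection_coefficient(75.0, 50.0))  # 0.2  (slight mismatch, partial)
```

The last case illustrates the paragraph above: a 75 Ω load on a 50 Ω line reflects only 20% of the incident voltage wave, not all of it.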
Re-reflections of a reflected wave may also occur at the source end of a transmission line, if the source's internal impedance (Thevenin equivalent impedance) is not exactly equal to the line's
characteristic impedance. A reflected wave returning back to the source will be dissipated entirely if the source impedance matches the line's, but will be reflected back toward the line end like
another incident wave, at least partially, if the source impedance does not match the line. This type of reflection may be particularly troublesome, as it makes it appear that the source has
transmitted another pulse.
• REVIEW:
• Characteristic impedance is also known as surge impedance, due to the temporarily resistive behavior of any length transmission line.
• A finite-length transmission line will appear to a DC voltage source as a constant resistance for some short time, then as whatever impedance the line is terminated with. Therefore, an open-ended
cable simply reads “open” when measured with an ohmmeter, and “shorted” when its end is short-circuited.
• A transient (“surge”) signal applied to one end of an open-ended or short-circuited transmission line will “reflect” off the far end of the line as a secondary wave. A signal traveling on a
transmission line from source to load is called an incident wave; a signal “bounced” off the end of a transmission line, traveling from load to source, is called a reflected wave.
• Reflected waves will also appear in transmission lines terminated by resistors not precisely matching the characteristic impedance.
• A finite-length transmission line may be made to appear infinite in length if terminated by a resistor of equal value to the line's characteristic impedance. This eliminates all signal reflections.
• A reflected wave may become re-reflected off the source-end of a transmission line if the source's internal impedance does not match the line's characteristic impedance. This re-reflected wave
will appear, of course, like another pulse signal transmitted from the source.
In DC and low-frequency AC circuits, the characteristic impedance of parallel wires is usually ignored. This includes the use of coaxial cables in instrument circuits, often employed to protect weak
voltage signals from being corrupted by induced “noise” caused by stray electric and magnetic fields. This is due to the relatively short timespans in which reflections take place in the line, as
compared to the period of the waveforms or pulses of the significant signals in the circuit. As we saw in the last section, if a transmission line is connected to a DC voltage source, it will behave
as a resistor equal in value to the line's characteristic impedance only for as long as it takes the incident pulse to reach the end of the line and return as a reflected pulse, back to the source.
After that time (a brief 16.292 µs for the mile-long coaxial cable of the last example), the source “sees” only the terminating impedance, whatever that may be.
If the circuit in question handles low-frequency AC power, such short time delays introduced by a transmission line between when the AC source outputs a voltage peak and when the source “sees” that
peak loaded by the terminating impedance (round-trip time for the incident wave to reach the line's end and reflect back to the source) are of little consequence. Even though we know that signal
magnitudes along the line's length are not equal at any given time due to signal propagation at (nearly) the speed of light, the actual phase difference between start-of-line and end-of-line signals
is negligible, because line-length propagations occur within a very small fraction of the AC waveform's period. For all practical purposes, we can say that the voltage at all respective points on a low-frequency, two-conductor line is equal and in-phase at any given point in time.
In these cases, we can say that the transmission lines in question are electrically short, because their propagation effects are much quicker than the periods of the conducted signals. By contrast,
an electrically long line is one where the propagation time is a large fraction or even a multiple of the signal period. A “long” line is generally considered to be one where the source's signal
waveform completes at least a quarter-cycle (90° of “rotation”) before the incident signal reaches the line's end. Up until this chapter in the Lessons In Electric Circuits book series, all connecting
lines were assumed to be electrically short.
To put this into perspective, we need to express the distance traveled by a voltage or current signal along a transmission line in relation to its source frequency. An AC waveform with a frequency of
60 Hz completes one cycle in 16.66 ms. At light speed (186,000 mile/s), this equates to a distance of 3100 miles that a voltage or current signal will propagate in that time. If the velocity factor
of the transmission line is less than 1, the propagation velocity will be less than 186,000 miles per second, and the distance less by the same factor. But even if we used the coaxial cable's
velocity factor from the last example (0.66), the distance is still a very long 2046 miles! Whatever distance we calculate for a given frequency is called the wavelength of the signal.
A simple formula for calculating wavelength is as follows:

λ = v/f
The lower-case Greek letter “lambda” (λ) represents wavelength, in whatever unit of length used in the velocity figure (if miles per second, then wavelength in miles; if meters per second, then
wavelength in meters). Velocity of propagation is usually the speed of light when calculating signal wavelength in open air or in a vacuum, but will be less if the transmission line has a velocity
factor less than 1.
If a “long” line is considered to be one at least 1/4 wavelength in length, you can see why all connecting lines in the circuits discussed thus far have been assumed “short.” For a 60 Hz AC power
system, power lines would have to exceed 775 miles in length before the effects of propagation time became significant. Cables connecting an audio amplifier to speakers would have to be over 4.65
miles in length before line reflections would significantly impact a 10 kHz audio signal!
When dealing with radio-frequency systems, though, transmission line length is far from trivial. Consider a 100 MHz radio signal: its wavelength is a mere 9.8208 feet, even at the full propagation
velocity of light (186,000 mile/s). A transmission line carrying this signal would not have to be more than about 2-1/2 feet in length to be considered “long!” With a cable velocity factor of 0.66,
this critical length shrinks to 1.62 feet.
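The wavelength figures worked through in the last few paragraphs all follow directly from λ = v/f, with the quarter-wavelength rule of thumb giving the “critical length” beyond which a line counts as long (function names here are illustrative):

```python
C_MILES_PER_S = 186_000.0  # speed of light, miles per second
FEET_PER_MILE = 5280.0

def wavelength_miles(freq_hz, vf=1.0):
    """Signal wavelength: propagation velocity divided by frequency."""
    return C_MILES_PER_S * vf / freq_hz

def critical_length_miles(freq_hz, vf=1.0):
    """Quarter-wavelength rule of thumb: lines longer than this are 'long'."""
    return wavelength_miles(freq_hz, vf) / 4.0

print(critical_length_miles(60))    # 775.0 miles (60 Hz power line)
print(critical_length_miles(10e3))  # about 4.65 miles (10 kHz audio)
print(critical_length_miles(100e6, 0.66) * FEET_PER_MILE)  # about 1.62 feet
```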
When an electrical source is connected to a load via a “short” transmission line, the load's impedance dominates the circuit. This is to say, when the line is short, its own characteristic impedance
is of little consequence to the circuit's behavior. We see this when testing a coaxial cable with an ohmmeter: the cable reads “open” from center conductor to outer conductor if the cable end is left
unterminated. Though the line acts as a resistor for a very brief period of time after the meter is connected (about 50 Ω for an RG-58/U cable), it immediately thereafter behaves as a simple “open
circuit:” the impedance of the line's open end. Since the combined response time of an ohmmeter and the human being using it greatly exceeds the round-trip propagation time up and down the cable, it
is “electrically short” for this application, and we only register the terminating (load) impedance. It is the extreme speed of the propagated signal that makes us unable to detect the cable's 50 Ω
transient impedance with an ohmmeter.
If we use a coaxial cable to conduct a DC voltage or current to a load, and no component in the circuit is capable of measuring or responding quickly enough to “notice” a reflected wave, the cable is
considered “electrically short” and its impedance is irrelevant to circuit function. Note how the electrical “shortness” of a cable is relative to the application: in a DC circuit where voltage and
current values change slowly, nearly any physical length of cable would be considered “short” from the standpoint of characteristic impedance and reflected waves. Taking the same length of cable,
though, and using it to conduct a high-frequency AC signal could result in a vastly different assessment of that cable's “shortness!”
When a source is connected to a load via a “long” transmission line, the line's own characteristic impedance dominates over load impedance in determining circuit behavior. In other words, an
electrically “long” line acts as the principal component in the circuit, its own characteristics overshadowing the load's. With a source connected to one end of the cable and a load to the other,
current drawn from the source is a function primarily of the line and not the load. This is increasingly true the longer the transmission line is. Consider our hypothetical 50 Ω cable of infinite
length, surely the ultimate example of a “long” transmission line: no matter what kind of load we connect to one end of this line, the source (connected to the other end) will only see 50 Ω of
impedance, because the line's infinite length prevents the signal from ever reaching the end where the load is connected. In this scenario, line impedance exclusively defines circuit behavior,
rendering the load completely irrelevant.
The most effective way to minimize the impact of transmission line length on circuit behavior is to match the line's characteristic impedance to the load impedance. If the load impedance is equal to
the line impedance, then any signal source connected to the other end of the line will “see” the exact same impedance, and will have the exact same amount of current drawn from it, regardless of line
length. In this condition of perfect impedance matching, line length only affects the amount of time delay from signal departure at the source to signal arrival at the load. However, perfect matching
of line and load impedances is not always practical or possible.
The next section discusses the effects of “long” transmission lines, especially when line length happens to match specific fractions or multiples of signal wavelength.
• REVIEW:
• Coaxial cabling is sometimes used in DC and low-frequency AC circuits as well as in high-frequency circuits, for the excellent immunity to induced “noise” that it provides for signals.
• When the period of a transmitted voltage or current signal greatly exceeds the propagation time for a transmission line, the line is considered electrically short. Conversely, when the
propagation time is a large fraction or multiple of the signal's period, the line is considered electrically long.
• A signal's wavelength is the physical distance it will propagate in the timespan of one period. Wavelength is calculated by the formula λ=v/f, where “λ” is the wavelength, “v” is the propagation
velocity, and “f” is the signal frequency.
• A rule-of-thumb for transmission line “shortness” is that the line must be at least 1/4 wavelength before it is considered “long.”
• In a circuit with a “short” line, the terminating (load) impedance dominates circuit behavior. The source effectively sees nothing but the load's impedance, barring any resistive losses in the
transmission line.
• In a circuit with a “long” line, the line's own characteristic impedance dominates circuit behavior. The ultimate example of this is a transmission line of infinite length: since the signal will
never reach the load impedance, the source only “sees” the cable's characteristic impedance.
• When a transmission line is terminated by a load precisely matching its impedance, there are no reflected waves and thus no problems with line length.
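The wavelength formula and the quarter-wave rule-of-thumb from the review above can be sketched as a quick calculation. This is an illustrative sketch; the speed-of-light figure and the example line lengths are assumptions chosen for demonstration:

```python
# Wavelength and the quarter-wave "electrically long" rule of thumb.
def wavelength_ft(freq_hz, velocity_factor=1.0):
    """Physical wavelength in feet: lambda = v / f."""
    c_ft_per_s = 9.8357e8   # speed of light, approx. 983.57 million ft/s
    return velocity_factor * c_ft_per_s / freq_hz

def electrically_long(line_len_ft, freq_hz, velocity_factor=1.0):
    """Rule of thumb: the line is 'long' at 1/4 wavelength or more."""
    return line_len_ft >= wavelength_ft(freq_hz, velocity_factor) / 4.0

# A 60 Hz power signal spans roughly 3100 miles per wavelength, so even a
# very long run of building wiring is electrically short:
print(round(wavelength_ft(60) / 5280))      # wavelength in miles
print(electrically_long(100.0, 60))         # False for a 100 ft power line
print(electrically_long(250.0, 1e6))        # True: 1/4 wave at 1 MHz is ~246 ft
```
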
Whenever there is a mismatch of impedance between transmission line and load, reflections will occur. If the incident signal is a continuous AC waveform, these reflections will mix with more of the
oncoming incident waveform to produce stationary waveforms called standing waves.
The following illustration shows how a triangle-shaped incident waveform turns into a mirror-image reflection upon reaching the line's unterminated end. The transmission line in this illustrative
sequence is shown as a single, thick line rather than a pair of wires, for simplicity's sake. The incident wave is shown traveling from left to right, while the reflected wave travels from right to
left: (Figure below)
Incident wave reflects off end of unterminated transmission line.
If we add the two waveforms together, we find that a third, stationary waveform is created along the line's length: (Figure below)
The sum of the incident and reflected waves is a stationary wave.
This third, “standing” wave, in fact, represents the only voltage along the line, being the representative sum of incident and reflected voltage waves. It oscillates in instantaneous magnitude, but
does not propagate down the cable's length like the incident or reflected waveforms causing it. Note the dots along the line length marking the “zero” points of the standing wave (where the incident
and reflected waves cancel each other), and how those points never change position: (Figure below)
The standing wave does not propagate along the transmission line.
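The way equal-amplitude incident and reflected waves sum to a stationary pattern follows from the trigonometric identity sin(kx − ωt) + sin(kx + ωt) = 2 sin(kx) cos(ωt). A minimal numeric sketch (arbitrary unit wavenumber and angular frequency, chosen only for illustration) confirms that the zero points never move:

```python
import math

# Equal-amplitude incident (+x direction) and reflected (-x) waves.
def incident(x, t, k=1.0, w=1.0):
    return math.sin(k * x - w * t)

def reflected(x, t, k=1.0, w=1.0):
    return math.sin(k * x + w * t)

def total(x, t):
    """Sum of the two traveling waves: a stationary (standing) wave,
    2*sin(kx)*cos(wt), whose nodes sit wherever sin(kx) = 0."""
    return incident(x, t) + reflected(x, t)

# The node at x = pi stays pinned at zero at every instant, even though
# each traveling wave is nonzero there most of the time:
for t in (0.0, 0.3, 1.7, 4.2):
    assert abs(total(math.pi, t)) < 1e-12

# An antinode (x = pi/2) swings between +2 and -2 as cos(wt) oscillates:
print(total(math.pi / 2, 0.0))    # 2.0
```
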
Standing waves are quite abundant in the physical world. Consider a string or rope, shaken at one end, and tied down at the other (only one half-cycle of hand motion shown, moving downward): (Figure below)
Standing waves on a rope.
Both the nodes (points of little or no vibration) and the antinodes (points of maximum vibration) remain fixed along the length of the string or rope. The effect is most pronounced when the free end
is shaken at just the right frequency. Plucked strings exhibit the same “standing wave” behavior, with “nodes” of maximum and minimum vibration along their length. The major difference between a
plucked string and a shaken string is that the plucked string supplies its own “correct” frequency of vibration to maximize the standing-wave effect: (Figure below)
Standing waves on a plucked string.
Wind blowing across an open-ended tube also produces standing waves; this time, the waves are vibrations of air molecules (sound) within the tube rather than vibrations of a solid object. Whether the
standing wave terminates in a node (minimum amplitude) or an antinode (maximum amplitude) depends on whether the other end of the tube is open or closed: (Figure below)
Standing sound waves in open ended tubes.
A closed tube end must be a wave node, while an open tube end must be an antinode. By analogy, the anchored end of a vibrating string must be a node, while the free end (if there is any) must be an antinode.
Note how there is more than one wavelength suitable for producing standing waves of vibrating air within a tube that precisely match the tube's end points. This is true for all standing-wave systems:
standing waves will resonate with the system for any frequency (wavelength) correlating to the node/antinode points of the system. Another way of saying this is that there are multiple resonant
frequencies for any system supporting standing waves.
All higher frequencies are integer-multiples of the lowest (fundamental) frequency for the system. The sequential progression of harmonics from one resonant frequency to the next defines the overtone
frequencies for the system: (Figure below)
Harmonics (overtones) in open ended pipes
The actual frequencies (measured in Hertz) for any of these harmonics or overtones depends on the physical length of the tube and the waves' propagation velocity, which is the speed of sound in air.
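These resonant frequencies are easy to compute for the tube examples above. The sketch below is illustrative, assuming a room-temperature speed of sound of roughly 343 m/s and a 1-meter tube:

```python
# Resonant frequencies of an air column.
V_SOUND = 343.0  # m/s, assumed speed of sound at room temperature

def open_open_harmonics(length_m, n=4):
    """Both ends open (antinodes at both ends): f = k*v/(2L), k = 1, 2, 3..."""
    f1 = V_SOUND / (2.0 * length_m)
    return [k * f1 for k in range(1, n + 1)]

def open_closed_harmonics(length_m, n=4):
    """One end closed (node), one end open (antinode): only odd multiples
    of v/(4L) satisfy both end conditions."""
    f1 = V_SOUND / (4.0 * length_m)
    return [(2 * k - 1) * f1 for k in range(1, n + 1)]

print(open_open_harmonics(1.0))    # [171.5, 343.0, 514.5, 686.0]
print(open_closed_harmonics(1.0))  # [85.75, 257.25, 428.75, 600.25]
```

Note how the open-closed tube skips the even harmonics entirely, exactly as the node/antinode end conditions in the figures require.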
Because transmission lines support standing waves, and force these waves to possess nodes and antinodes according to the type of termination impedance at the load end, they also exhibit resonance at
frequencies determined by physical length and propagation velocity. Transmission line resonance, though, is a bit more complex than resonance of strings or of air in tubes, because we must consider
both voltage waves and current waves.
This complexity is made easier to understand by way of computer simulation. To begin, let's examine a perfectly matched source, transmission line, and load. All components have an impedance of 75 Ω:
(Figure below)
Perfectly matched transmission line.
Using SPICE to simulate the circuit, we'll specify the transmission line (t1) with a 75 Ω characteristic impedance (z0=75) and a propagation delay of 1 microsecond (td=1u). This is a convenient
method for expressing the physical length of a transmission line: the amount of time it takes a wave to propagate down its entire length. If this were a real 75 Ω cable -- perhaps a type “RG-59B/U”
coaxial cable, the type commonly used for cable television distribution -- with a velocity factor of 0.66, it would be about 648 feet long. Since 1 µs is the period of a 1 MHz signal, I'll choose to
sweep the frequency of the AC source from (nearly) zero to that figure, to see how the system reacts when exposed to signals ranging from DC to 1 wavelength.
Here is the SPICE netlist for the circuit shown above:
Transmission line
v1 1 0 ac 1 sin
rsource 1 2 75
t1 2 0 3 0 z0=75 td=1u
rload 3 0 75
.ac lin 101 1m 1meg
* Using “Nutmeg” program to plot analysis
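The td=1u specification can be turned back into a physical length. Here is a minimal sketch using the same 186,000 mile-per-second speed-of-light figure the text uses later, with the 0.66 velocity factor of RG-59B/U:

```python
# Physical length of a cable specified by its one-way propagation delay.
C_MILES_PER_S = 186000.0   # speed of light, same figure used in the text

def cable_length_ft(delay_s, velocity_factor):
    velocity = velocity_factor * C_MILES_PER_S   # propagation speed, mi/s
    return velocity * delay_s * 5280.0           # miles -> feet

# td = 1 us with a 0.66 velocity factor gives roughly 648 feet:
print(round(cable_length_ft(1e-6, 0.66), 1))
```
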
Running this simulation and plotting the source impedance drop (as an indication of current), the source voltage, the line's source-end voltage, and the load voltage, we see that the source voltage
-- shown as vm(1) (voltage magnitude between node 1 and the implied ground point of node 0) on the graphic plot -- registers a steady 1 volt, while every other voltage registers a steady 0.5 volts:
(Figure below)
No resonances on a matched transmission line.
In a system where all impedances are perfectly matched, there can be no standing waves, and therefore no resonant “peaks” or “valleys” in the Bode plot.
Now, let's change the load impedance to 999 MΩ, to simulate an open-ended transmission line. (Figure below) We should definitely see some reflections on the line now as the frequency is swept from 1
mHz to 1 MHz: (Figure below)
Open ended transmission line.
Transmission line
v1 1 0 ac 1 sin
rsource 1 2 75
t1 2 0 3 0 z0=75 td=1u
rload 3 0 999meg
.ac lin 101 1m 1meg
* Using “Nutmeg” program to plot analysis
Resonances on open transmission line.
Here, both the supply voltage vm(1) and the line's load-end voltage vm(3) remain steady at 1 volt. The other voltages dip and peak at different frequencies along the sweep range of 1 mHz to 1 MHz.
There are five points of interest along the horizontal axis of the analysis: 0 Hz, 250 kHz, 500 kHz, 750 kHz, and 1 MHz. We will investigate each one with regard to voltage and current at different
points of the circuit.
At 0 Hz (actually 1 mHz), the signal is practically DC, and the circuit behaves much as it would given a 1-volt DC battery source. There is no circuit current, as indicated by zero voltage drop
across the source impedance (Z[source]: vm(1,2)), and full source voltage present at the source-end of the transmission line (voltage measured between node 2 and node 0: vm(2)). (Figure below)
At f=0: input: V=1, I=0; end: V=1, I=0.
At 250 kHz, we see zero voltage and maximum current at the source-end of the transmission line, yet still full voltage at the load-end: (Figure below)
At f=250 KHz: input: V=0, I=13.33 mA; end: V=1, I=0.
You might be wondering, how can this be? How can we get full source voltage at the line's open end while there is zero voltage at its entrance? The answer is found in the paradox of the standing
wave. With a source frequency of 250 kHz, the line's length is precisely right for 1/4 wavelength to fit from end to end. With the line's load end open-circuited, there can be no current, but there
will be voltage. Therefore, the load-end of an open-circuited transmission line is a current node (zero point) and a voltage antinode (maximum amplitude): (Figure below)
Open end of transmission line shows current node, voltage antinode at open end.
At 500 kHz, exactly one-half of a standing wave rests on the transmission line, and here we see another point in the analysis where the source current drops off to nothing and the source-end voltage
of the transmission line rises again to full voltage: (Figure below)
Full standing wave on half wave open transmission line.
At 750 kHz, the plot looks a lot like it was at 250 kHz: zero source-end voltage (vm(2)) and maximum current (vm(1,2)). This is due to 3/4 of a wave poised along the transmission line, resulting in
the source “seeing” a short-circuit where it connects to the transmission line, even though the other end of the line is open-circuited: (Figure below)
1 1/2 standing waves on 3/4 wave open transmission line.
When the supply frequency sweeps up to 1 MHz, a full standing wave exists on the transmission line. At this point, the source-end of the line experiences the same voltage and current amplitudes as
the load-end: full voltage and zero current. In essence, the source “sees” an open circuit at the point where it connects to the transmission line. (Figure below)
Double standing waves on full wave open transmission line.
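These sweep results agree with the standard lossless-line input-impedance formula, Zin = Z0(ZL + jZ0 tan βl)/(Z0 + jZL tan βl), where βl = 2πf·td is the line's electrical length. That formula is not derived in this chapter, but a sketch using the same Z0, delay, and 999 MΩ "open" load as the simulation shows the source alternately seeing a short and an open at the quarter-wave and half-wave frequencies:

```python
import math

def input_impedance(z0, zload, freq_hz, delay_s):
    """Lossless-line input impedance seen by the source:
    Zin = Z0 * (ZL + j*Z0*tan(bl)) / (Z0 + j*ZL*tan(bl)),
    with electrical length bl = 2*pi*f*td radians."""
    bl = 2.0 * math.pi * freq_hz * delay_s
    t = math.tan(bl)
    return z0 * (zload + 1j * z0 * t) / (z0 + 1j * zload * t)

# Same figures as the simulation: Z0 = 75 ohms, td = 1 us, 999 Mohm load.
for f in (1e-3, 250e3, 500e3, 750e3, 1e6):
    zin = abs(input_impedance(75.0, 999e6, f, 1e-6))
    print(f"{f:>12.3f} Hz : |Zin| = {zin:.3g} ohms")
```

At 250 kHz and 750 kHz the magnitude collapses toward zero (the source "sees" a short), while near DC, 500 kHz, and 1 MHz it rises toward the open-circuit load value.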
In a similar fashion, a short-circuited transmission line generates standing waves, although the node and antinode assignments for voltage and current are reversed: at the shorted end of the line,
there will be zero voltage (node) and maximum current (antinode). What follows is the SPICE simulation (circuit Figure below) and illustrations of what happens (Figure 2nd-below at resonances) at all
the interesting frequencies: 0 Hz (Figure below) , 250 kHz (Figure below), 500 kHz (Figure below), 750 kHz (Figure below), and 1 MHz (Figure below). The short-circuit jumper is simulated by a 1 µΩ
load impedance: (Figure below)
Shorted transmission line.
Transmission line
v1 1 0 ac 1 sin
rsource 1 2 75
t1 2 0 3 0 z0=75 td=1u
rload 3 0 1u
.ac lin 101 1m 1meg
* Using “Nutmeg” program to plot analysis
Resonances on shorted transmission line
At f=0 Hz: input: V=0, I=13.33 mA; end: V=0, I=13.33 mA.
Half wave standing wave pattern on 1/4 wave shorted transmission line.
Full wave standing wave pattern on half wave shorted transmission line.
1 1/2 standing wave pattern on 3/4 wave shorted transmission line.
Double standing waves on full wave shorted transmission line.
In both these circuit examples, an open-circuited line and a short-circuited line, the energy reflection is total: 100% of the incident wave reaching the line's end gets reflected back toward the
source. If, however, the transmission line is terminated in some impedance other than an open or a short, the reflections will be less intense, as will be the difference between minimum and maximum
values of voltage and current along the line.
Suppose we were to terminate our example line with a 100 Ω resistor instead of a 75 Ω resistor. (Figure below) Examine the results of the corresponding SPICE analysis to see the effects of impedance
mismatch at different source frequencies: (Figure below)
Transmission line terminated in a mismatch
Transmission line
v1 1 0 ac 1 sin
rsource 1 2 75
t1 2 0 3 0 z0=75 td=1u
rload 3 0 100
.ac lin 101 1m 1meg
* Using “Nutmeg” program to plot analysis
Weak resonances on a mismatched transmission line
If we run another SPICE analysis, this time printing numerical results rather than plotting them, we can discover exactly what is happening at all the interesting frequencies: (DC, Figure below; 250
kHz, Figure below; 500 kHz, Figure below; 750 kHz, Figure below; and 1 MHz, Figure below).
Transmission line
v1 1 0 ac 1 sin
rsource 1 2 75
t1 2 0 3 0 z0=75 td=1u
rload 3 0 100
.ac lin 5 1m 1meg
.print ac v(1,2) v(1) v(2) v(3)
freq v(1,2) v(1) v(2) v(3)
1.000E-03 4.286E-01 1.000E+00 5.714E-01 5.714E-01
2.500E+05 5.714E-01 1.000E+00 4.286E-01 5.714E-01
5.000E+05 4.286E-01 1.000E+00 5.714E-01 5.714E-01
7.500E+05 5.714E-01 1.000E+00 4.286E-01 5.714E-01
1.000E+06 4.286E-01 1.000E+00 5.714E-01 5.714E-01
At all frequencies, the source voltage, v(1), remains steady at 1 volt, as it should. The load voltage, v(3), also remains steady, but at a lesser voltage: 0.5714 volts. However, both the line input
voltage (v(2)) and the voltage dropped across the source's 75 Ω impedance (v(1,2), indicating current drawn from the source) vary with frequency.
At f=0 Hz: input: V=0.5714, I=5.715 mA; end: V=0.5714, I=5.715 mA.
At f=250 KHz: input: V=0.4286, I=7.619 mA; end: V=0.5714, I=7.619 mA.
At f=500 KHz: input: V=0.5714, I=5.715 mA; end: V=0.5714, I=5.715 mA.
At f=750 KHz: input: V=0.4286, I=7.619 mA; end: V=0.5714, I=7.619 mA.
At f=1 MHz: input: V=0.5714, I=5.715 mA; end: V=0.5714, I=5.715 mA.
At odd harmonics of the fundamental frequency (250 kHz, Figure 3rd-above and 750 kHz, Figure above) we see differing levels of voltage at each end of the transmission line, because at those
frequencies the standing waves terminate at one end in a node and at the other end in an antinode. Unlike the open-circuited and short-circuited transmission line examples, the maximum and minimum
voltage levels along this transmission line do not reach the same extreme values of 0% and 100% source voltage, but we still have points of “minimum” and “maximum” voltage. (Figure 6th-above) The
same holds true for current: if the line's terminating impedance is mismatched to the line's characteristic impedance, we will have points of minimum and maximum current at certain fixed locations on
the line, corresponding to the standing current wave's nodes and antinodes, respectively.
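The steady 0.5714-volt figures have a simple explanation: at DC (and at every half-wavelength frequency) the line is effectively transparent, and the circuit reduces to an ordinary voltage divider. A quick check reproduces the SPICE table values:

```python
# At DC, and at every half-wavelength multiple, the mismatched line simply
# hands the 100-ohm load to the 75-ohm source: a plain voltage divider.
V_SOURCE, Z_SOURCE, Z_LOAD = 1.0, 75.0, 100.0

i = V_SOURCE / (Z_SOURCE + Z_LOAD)      # circuit current
v_load = i * Z_LOAD                     # matches v(3) in the table
v_zsource = i * Z_SOURCE                # matches v(1,2) at DC

print(round(i * 1000, 3))               # 5.714 (mA)
print(round(v_load, 4))                 # 0.5714
print(round(v_zsource, 4))              # 0.4286
```
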
One way of expressing the severity of standing waves is as a ratio of maximum amplitude (antinode) to minimum amplitude (node), for voltage or for current. When a line is terminated by an open or a
short, this standing wave ratio, or SWR is valued at infinity, since the minimum amplitude will be zero, and any finite value divided by zero results in an infinite (actually, “undefined”) quotient.
In this example, with a 75 Ω line terminated by a 100 Ω impedance, the SWR will be finite: 1.333, calculated by taking the maximum line voltage at either 250 kHz or 750 kHz (0.5714 volts) and
dividing by the minimum line voltage (0.4286 volts).
Standing wave ratio may also be calculated by taking the line's terminating impedance and the line's characteristic impedance, and dividing the larger of the two values by the smaller. In this
example, the terminating impedance of 100 Ω divided by the characteristic impedance of 75 Ω yields a quotient of exactly 1.333, matching the previous calculation very closely.
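The two SWR calculations can be checked against each other. The sketch below also uses the reflection coefficient, Γ = (Z[load] − Z[0])/(Z[load] + Z[0]), a standard quantity not introduced in this chapter, to reproduce the antinode and node voltages seen in the SPICE output:

```python
# SWR of the 75-ohm line terminated in 100 ohms, computed two ways.
Z0, ZLOAD = 75.0, 100.0

# Way 1: via the reflection coefficient and the line-voltage extremes.
gamma = (ZLOAD - Z0) / (ZLOAD + Z0)   # 1/7 for this mismatch
v_fwd = 0.5                           # forward wave launched by the matched source
v_max = v_fwd * (1 + abs(gamma))      # antinode voltage
v_min = v_fwd * (1 - abs(gamma))      # node voltage
swr_from_voltages = v_max / v_min

# Way 2: larger impedance divided by smaller.
swr_from_impedances = max(Z0, ZLOAD) / min(Z0, ZLOAD)

print(round(v_max, 4), round(v_min, 4))   # 0.5714 0.4286, as in the plot
print(round(swr_from_voltages, 4))        # 1.3333
print(round(swr_from_impedances, 4))      # 1.3333
```
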
A perfectly terminated transmission line will have an SWR of 1, since voltage at any location along the line's length will be the same, and likewise for current. Again, this is usually considered
ideal, not only because reflected waves constitute energy not delivered to the load, but because the high values of voltage and current created by the antinodes of standing waves may over-stress the
transmission line's insulation (high voltage) and conductors (high current), respectively.
Also, a transmission line with a high SWR tends to act as an antenna, radiating electromagnetic energy away from the line, rather than channeling all of it to the load. This is usually undesirable,
as the radiated energy may “couple” with nearby conductors, producing signal interference. An interesting footnote to this point is that antenna structures -- which typically resemble open- or
short-circuited transmission lines -- are often designed to operate at high standing wave ratios, for the very reason of maximizing signal radiation and reception.
The following photograph (Figure below) shows a set of transmission lines at a junction point in a radio transmitter system. The large, copper tubes with ceramic insulator caps at the ends are rigid
coaxial transmission lines of 50 Ω characteristic impedance. These lines carry RF power from the radio transmitter circuit to a small, wooden shelter at the base of an antenna structure, and from
that shelter on to other shelters with other antenna structures:
Flexible coaxial cables connected to rigid lines.
Flexible coaxial cable connected to the rigid lines (also of 50 Ω characteristic impedance) conduct the RF power to capacitive and inductive “phasing” networks inside the shelter. The white, plastic
tube joining two of the rigid lines together carries “filling” gas from one sealed line to the other. The lines are gas-filled to avoid collecting moisture inside them, which would be a definite
problem for a coaxial line. Note the flat, copper “straps” used as jumper wires to connect the conductors of the flexible coaxial cables to the conductors of the rigid lines. Why flat straps of
copper and not round wires? Because of the skin effect, which renders most of the cross-sectional area of a round conductor useless at radio frequencies.
Like many transmission lines, these are operated at low SWR conditions. As we will see in the next section, though, the phenomenon of standing waves in transmission lines is not always undesirable,
as it may be exploited to perform a useful function: impedance transformation.
• REVIEW:
• Standing waves are waves of voltage and current which do not propagate (i.e. they are stationary), but are the result of interference between incident and reflected waves along a transmission line.
• A node is a point on a standing wave of minimum amplitude.
• An antinode is a point on a standing wave of maximum amplitude.
• Standing waves can only exist in a transmission line when the terminating impedance does not match the line's characteristic impedance. In a perfectly terminated line, there are no reflected
waves, and therefore no standing waves at all.
• At certain frequencies, the nodes and antinodes of standing waves will correlate with the ends of a transmission line, resulting in resonance.
• The lowest-frequency resonant point on a transmission line is where the line is one quarter-wavelength long. Resonant points exist at every harmonic (integer-multiple) frequency of the
fundamental (quarter-wavelength).
• Standing wave ratio, or SWR, is the ratio of maximum standing wave amplitude to minimum standing wave amplitude. It may also be calculated by dividing termination impedance by characteristic
impedance, or vice versa, which ever yields the greatest quotient. A line with no standing waves (perfectly matched: Z[load] to Z[0]) has an SWR equal to 1.
• Transmission lines may be damaged by the high maximum amplitudes of standing waves. Voltage antinodes may break down insulation between conductors, and current antinodes may overheat conductors.
Standing waves at the resonant frequency points of an open- or short-circuited transmission line produce unusual effects. When the signal frequency is such that exactly 1/2 wave or some multiple
thereof matches the line's length, the source “sees” the load impedance as it is. The following pair of illustrations shows an open-circuited line operating at 1/2 (Figure below) and 1 wavelength
(Figure below) frequencies:
Source sees open, same as end of half wavelength line.
Source sees open, same as end of full wavelength (2x half wavelength line).
In either case, the line has voltage antinodes at both ends, and current nodes at both ends. That is to say, there is maximum voltage and minimum current at either end of the line, which corresponds
to the condition of an open circuit. The fact that this condition exists at both ends of the line tells us that the line faithfully reproduces its terminating impedance at the source end, so that the
source “sees” an open circuit where it connects to the transmission line, just as if it were directly open-circuited.
The same is true if the transmission line is terminated by a short: at signal frequencies corresponding to 1/2 wavelength (Figure below) or some multiple (Figure below) thereof, the source “sees” a
short circuit, with minimum voltage and maximum current present at the connection points between source and transmission line:
Source sees short, same as end of half wave length line.
Source sees short, same as end of full wavelength line (2x half wavelength).
However, if the signal frequency is such that the line resonates at 1/4 wavelength or some multiple thereof, the source will “see” the exact opposite of the termination impedance. That is, if the
line is open-circuited, the source will “see” a short-circuit at the point where it connects to the line; and if the line is short-circuited, the source will “see” an open circuit: (Figure below)
Line open-circuited; source “sees” a short circuit: at quarter wavelength line (Figure below), at three-quarter wavelength line (Figure below)
Source sees short, reflected from open at end of quarter wavelength line.
Source sees short, reflected from open at end of three-quarter wavelength line.
Line short-circuited; source “sees” an open circuit: at quarter wavelength line (Figure below), at three-quarter wavelength line (Figure below)
Source sees open, reflected from short at end of quarter wavelength line.
Source sees open, reflected from short at end of three-quarter wavelength line.
At these frequencies, the transmission line is actually functioning as an impedance transformer, transforming an infinite impedance into zero impedance, or vice versa. Of course, this only occurs at
resonant points resulting in a standing wave of 1/4 cycle (the line's fundamental, resonant frequency) or some odd multiple (3/4, 5/4, 7/4, 9/4 . . .), but if the signal frequency is known and
unchanging, this phenomenon may be used to match otherwise unmatched impedances to each other.
Take for instance the example circuit from the last section where a 75 Ω source connects to a 75 Ω transmission line, terminating in a 100 Ω load impedance. From the numerical figures obtained via
SPICE, let's determine what impedance the source “sees” at its end of the transmission line at the line's resonant frequencies: quarter wavelength (Figure below), halfwave length (Figure below),
three-quarter wavelength (Figure below) full wavelength (Figure below)
Source sees 100 Ω reflected from 100 Ω load at end of quarter wavelength line.
Source sees 100 Ω reflected from 100 Ω load at end of half wavelength line.
Source sees 56.25 Ω reflected from 100 Ω load at end of three-quarter wavelength line (same as quarter wavelength).
Source sees 56.25 Ω reflected from 100 Ω load at end of full-wavelength line (same as half-wavelength).
A simple equation relates line impedance (Z[0]), load impedance (Z[load]), and input impedance (Z[input]) for an unmatched transmission line operating at an odd harmonic of its fundamental frequency:

Z[0] = √(Z[input] × Z[load])
One practical application of this principle would be to match a 300 Ω load to a 75 Ω signal source at a frequency of 50 MHz. All we need to do is calculate the proper transmission line impedance (Z
[0]), and length so that exactly 1/4 of a wave will “stand” on the line at a frequency of 50 MHz.
First, calculating the line impedance: taking the 75 Ω we desire the source to “see” at the source-end of the transmission line, and multiplying by the 300 Ω load resistance, we obtain a figure of
22,500. Taking the square root of 22,500 yields 150 Ω for a characteristic line impedance.
Now, to calculate the necessary line length: assuming that our cable has a velocity factor of 0.85, and using a speed-of-light figure of 186,000 miles per second, the velocity of propagation will be
158,100 miles per second. Taking this velocity and dividing by the signal frequency gives us a wavelength of 0.003162 miles, or 16.695 feet. Since we only need one-quarter of this length for the
cable to support a quarter-wave, the requisite cable length is 4.1738 feet.
Here is a schematic diagram for the circuit, showing node numbers for the SPICE analysis we're about to run: (Figure below)
Quarter wave section of 150 Ω transmission line matches 75 Ω source to 300 Ω load.
We can specify the cable length in SPICE in terms of time delay from beginning to end. Since the frequency is 50 MHz, the signal period will be the reciprocal of that, or 20 nano-seconds (20 ns).
One-quarter of that time (5 ns) will be the time delay of a transmission line one-quarter wavelength long:
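The entire design calculation (required impedance, physical cable length, and the equivalent SPICE time delay) can be collected into one sketch, using the same velocity figures as the text:

```python
import math

# Quarter-wave matching section for a 75-ohm source and 300-ohm load at 50 MHz.
Z_SOURCE, Z_LOAD, FREQ = 75.0, 300.0, 50e6
VELOCITY_FACTOR = 0.85
C_MILES_PER_S = 186000.0

# Required characteristic impedance: geometric mean of the two impedances.
z0 = math.sqrt(Z_SOURCE * Z_LOAD)            # 150 ohms

# Required physical length: one quarter of the in-cable wavelength.
velocity = VELOCITY_FACTOR * C_MILES_PER_S   # 158,100 mi/s
wavelength_ft = velocity / FREQ * 5280.0     # ~16.695 ft
quarter_wave_ft = wavelength_ft / 4.0        # ~4.1738 ft

# Equivalent SPICE time-delay spec: one quarter of the signal period.
td = 1.0 / FREQ / 4.0                        # 5 ns

print(z0, round(quarter_wave_ft, 4), td)
```
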
Transmission line
v1 1 0 ac 1 sin
rsource 1 2 75
t1 2 0 3 0 z0=150 td=5n
rload 3 0 300
.ac lin 1 50meg 50meg
.print ac v(1,2) v(1) v(2) v(3)
freq v(1,2) v(1) v(2) v(3)
5.000E+07 5.000E-01 1.000E+00 5.000E-01 1.000E+00
At a frequency of 50 MHz, our 1-volt signal source drops half of its voltage across the series 75 Ω impedance (v(1,2)) and the other half of its voltage across the input terminals of the transmission
line (v(2)). This means the source “thinks” it is powering a 75 Ω load. The actual load impedance, however, receives a full 1 volt, as indicated by the 1.000 figure at v(3). With 0.5 volt dropped
across 75 Ω, the source is dissipating 3.333 mW of power: the same as dissipated by 1 volt across the 300 Ω load, indicating a perfect match of impedance, according to the Maximum Power Transfer
Theorem. The 1/4-wavelength, 150 Ω, transmission line segment has successfully matched the 300 Ω load to the 75 Ω source.
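The power-balance claim is easy to verify arithmetically, using the voltage figures from the SPICE output above:

```python
# Power balance for the matched quarter-wave circuit: the 0.5 V dropped
# across the 75-ohm source impedance dissipates the same power as the
# full 1 V across the 300-ohm load.
p_source = 0.5 ** 2 / 75.0     # W dissipated in the source impedance
p_load = 1.0 ** 2 / 300.0      # W delivered to the load

print(round(p_source * 1000, 3))   # 3.333 (mW)
print(round(p_load * 1000, 3))     # 3.333 (mW)
```
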
Bear in mind, of course, that this only works for 50 MHz and its odd-numbered harmonics. For any other signal frequency to receive the same benefit of matched impedances, the 150 Ω line would have to
be lengthened or shortened accordingly so that it was exactly 1/4 wavelength long.
Strangely enough, the exact same line can also match a 75 Ω load to a 300 Ω source, demonstrating how this phenomenon of impedance transformation is fundamentally different in principle from that of
a conventional, two-winding transformer:
Transmission line
v1 1 0 ac 1 sin
rsource 1 2 300
t1 2 0 3 0 z0=150 td=5n
rload 3 0 75
.ac lin 1 50meg 50meg
.print ac v(1,2) v(1) v(2) v(3)
freq v(1,2) v(1) v(2) v(3)
5.000E+07 5.000E-01 1.000E+00 5.000E-01 2.500E-01
Here, we see the 1-volt source voltage equally split between the 300 Ω source impedance (v(1,2)) and the line's input (v(2)), indicating that the load “appears” as a 300 Ω impedance from the source's
perspective where it connects to the transmission line. This 0.5 volt drop across the source's 300 Ω internal impedance yields a power figure of 833.33 µW, the same as the 0.25 volts across the 75 Ω
load, as indicated by voltage figure v(3). Once again, the impedance values of source and load have been matched by the transmission line segment.
This technique of impedance matching is often used to match the differing impedance values of transmission line and antenna in radio transmitter systems, because the transmitter's frequency is
generally well-known and unchanging. The use of an impedance “transformer” 1/4 wavelength in length provides impedance matching using the shortest conductor length possible. (Figure below)
Quarter wave 150 Ω transmission line section matches 75 Ω line to 300 Ω antenna.
• REVIEW:
• A transmission line with standing waves may be used to match different impedance values if operated at the correct frequency(ies).
• When operated at a frequency corresponding to a standing wave of 1/4-wavelength along the transmission line, the line's characteristic impedance necessary for impedance transformation must be
equal to the square root of the product of the source's impedance and the load's impedance.
A waveguide is a special form of transmission line consisting of a hollow, metal tube. The tube wall provides distributed inductance, while the empty space between the tube walls provide distributed
capacitance: Figure below
Wave guides conduct microwave energy at lower loss than coaxial cables.
Waveguides are practical only for signals of extremely high frequency, where the wavelength approaches the cross-sectional dimensions of the waveguide. Below such frequencies, waveguides are useless
as electrical transmission lines.
When functioning as transmission lines, though, waveguides are considerably simpler than two-conductor cables -- especially coaxial cables -- in their manufacture and maintenance. With only a single
conductor (the waveguide's “shell”), there are no concerns with proper conductor-to-conductor spacing, or of the consistency of the dielectric material, since the only dielectric in a waveguide is
air. Moisture is not as severe a problem in waveguides as it is within coaxial cables, either, and so waveguides are often spared the necessity of gas “filling.”
Waveguides may be thought of as conduits for electromagnetic energy, the waveguide itself acting as nothing more than a “director” of the energy rather than as a signal conductor in the normal sense
of the word. In a sense, all transmission lines function as conduits of electromagnetic energy when transporting pulses or high-frequency waves, directing the waves as the banks of a river direct a
tidal wave. However, because waveguides are single-conductor elements, the propagation of electrical energy down a waveguide is of a very different nature than the propagation of electrical energy
down a two-conductor transmission line.
All electromagnetic waves consist of electric and magnetic fields propagating in the same direction of travel, but perpendicular to each other. Along the length of a normal transmission line, both
electric and magnetic fields are perpendicular (transverse) to the direction of wave travel. This is known as the principal mode, or TEM (Transverse Electric and Magnetic) mode. This mode of wave
propagation can exist only where there are two conductors, and it is the dominant mode of wave propagation where the cross-sectional dimensions of the transmission line are small compared to the
wavelength of the signal. (Figure below)
Twin lead transmission line propagation: TEM mode.
At microwave signal frequencies (between 100 MHz and 300 GHz), two-conductor transmission lines of any substantial length operating in standard TEM mode become impractical. Lines small enough in
cross-sectional dimension to maintain TEM mode signal propagation for microwave signals tend to have low voltage ratings, and suffer from large, parasitic power losses due to conductor “skin” and
dielectric effects. Fortunately, though, at these short wavelengths there exist other modes of propagation that are not as “lossy,” if a conductive tube is used rather than two parallel conductors.
It is at these high frequencies that waveguides become practical.
When an electromagnetic wave propagates down a hollow tube, only one of the fields -- either electric or magnetic -- will actually be transverse to the wave's direction of travel. The other field
will “loop” longitudinally to the direction of travel, but still be perpendicular to the other field. Whichever field remains transverse to the direction of travel determines whether the wave
propagates in TE mode (Transverse Electric) or TM (Transverse Magnetic) mode. (Figure below)
Waveguide (TE) transverse electric and (TM) transverse magnetic modes.
Many variations of each mode exist for a given waveguide, and a full discussion of this is a subject well beyond the scope of this book.
Signals are typically introduced to and extracted from waveguides by means of small antenna-like coupling devices inserted into the waveguide. Sometimes these coupling elements take the form of a
dipole, which is nothing more than two open-ended stub wires of appropriate length. Other times, the coupler is a single stub (a half-dipole, similar in principle to a “whip” antenna, 1/4λ in
physical length), or a short loop of wire terminated on the inside surface of the waveguide: (Figure below)
Stub and loop coupling to waveguide.
In some cases, such as a class of vacuum tube devices called inductive output tubes (the so-called klystron tube falls into this category), a “cavity” formed of conductive material may intercept
electromagnetic energy from a modulated beam of electrons, having no contact with the beam itself: (Figure below)
Klystron inductive output tube.
Just as transmission lines are able to function as resonant elements in a circuit, especially when terminated by a short-circuit or an open-circuit, a dead-ended waveguide may also resonate at
particular frequencies. When used as such, the device is called a cavity resonator. Inductive output tubes use toroid-shaped cavity resonators to maximize the power transfer efficiency between the
electron beam and the output cable.
A cavity's resonant frequency may be altered by changing its physical dimensions. To this end, cavities with movable plates, screws, and other mechanical elements for tuning are manufactured to
provide coarse resonant frequency adjustment.
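The dependence of resonance on physical dimensions follows the standard formula for an ideal air-filled rectangular cavity, f(m,n,p) = (c/2)·sqrt((m/a)^2 + (n/b)^2 + (p/d)^2). The dimensions in this sketch are invented purely for illustration:

```python
from math import sqrt

C = 299_792_458.0  # speed of light in vacuum, m/s

def cavity_resonance_hz(m: int, n: int, p: int,
                        a: float, b: float, d: float) -> float:
    """Resonant frequency (Hz) of mode (m, n, p) in an ideal air-filled
    rectangular cavity with side lengths a, b, d in metres."""
    return (C / 2.0) * sqrt((m / a) ** 2 + (n / b) ** 2 + (p / d) ** 2)

# Lowest (TE101) mode of a 5 cm x 2.5 cm x 5 cm cavity:
f101 = cavity_resonance_hz(1, 0, 1, 0.05, 0.025, 0.05)
print(f"TE101 resonance: {f101 / 1e9:.2f} GHz")  # about 4.24 GHz

# Shrinking every dimension by half doubles the resonant frequency,
# which is the principle behind mechanical tuning elements:
print(cavity_resonance_hz(1, 0, 1, 0.025, 0.0125, 0.025) / f101)  # 2.0
```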
If a resonant cavity is made open on one end, it functions as a unidirectional antenna. The following photograph shows a home-made waveguide formed from a tin can, used as an antenna for a 2.4 GHz
signal in an “802.11b” computer communication network. The coupling element is a quarter-wave stub: nothing more than a piece of solid copper wire about 1-1/4 inches in length extending from the
center of a coaxial cable connector penetrating the side of the can: (Figure below)
Can-tenna illustrates stub coupling to waveguide.
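The quoted stub length is easy to check: a quarter of a free-space wavelength at 2.4 GHz comes out very close to the 1-1/4 inch figure above. A quick sketch (added here, not from the original text):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def quarter_wave_m(freq_hz: float) -> float:
    """Free-space quarter wavelength (metres) at the given frequency."""
    return C / freq_hz / 4.0

stub = quarter_wave_m(2.4e9)
print(f"lambda/4 at 2.4 GHz: {stub * 100:.2f} cm "
      f"({stub / 0.0254:.2f} inches)")  # ~3.12 cm, ~1.23 in
```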
A few more tin-can antennae may be seen in the background, one of them a “Pringles” potato chip can. Although this can is of cardboard (paper) construction, its metallic inner lining provides the
necessary conductivity to function as a waveguide. Some of the cans in the background still have their plastic lids in place. The plastic, being nonconductive, does not interfere with the RF signal,
but functions as a physical barrier to prevent rain, snow, dust, and other physical contaminants from entering the waveguide. “Real” waveguide antennae use similar barriers to physically enclose the
tube, yet allow electromagnetic energy to pass unimpeded.
• REVIEW:
• Waveguides are metal tubes functioning as “conduits” for carrying electromagnetic waves. They are practical only for signals of extremely high frequency, where the signal wavelength approaches
the cross-sectional dimensions of the waveguide.
• Wave propagation through a waveguide may be classified into two broad categories: TE (Transverse Electric), or TM (Transverse Magnetic), depending on which field (electric or magnetic) is
perpendicular (transverse) to the direction of wave travel. Wave travel along a standard, two-conductor transmission line is of the TEM (Transverse Electric and Magnetic) mode, where both fields
are oriented perpendicular to the direction of travel. TEM mode is only possible with two conductors and cannot exist in a waveguide.
• A dead-ended waveguide serving as a resonant element in a microwave circuit is called a cavity resonator.
• A cavity resonator with an open end functions as a unidirectional antenna, sending or receiving RF energy to/from the direction of the open end.
Lessons In Electric Circuits copyright (C) 2000-2014 Tony R. Kuphaldt, under the terms and conditions of the Design Science License.
Go Figure
When playing games, children learn a great deal concerning mathematical concepts and number relationships. Games are especially appropriate for the visual and/or kinesthetic learner. They are
suitable for a math center, for differentiated instruction, as well as for introducing or reviewing concepts.
Games give the learner numerous opportunities to reinforce current knowledge and to try out strategies or techniques without the worry of getting the “wrong” answer. Games provide students of any age
with a non-threatening environment for seeing incorrect solutions, not as mistakes, but as steps towards finding the correct mathematical solution.
1) Pique student interest and participation in math practice and review.
2) Provide immediate feedback for the teacher. (i.e. Who is still having difficulty with a concept? Who needs verbal assurance? Why is a student continually getting the wrong answer?)
3) Encourage and engage even the most reluctant student.
4) Enhance opportunities to respond correctly.
5) Reinforce or support a positive attitude or viewpoint of mathematics.
6) Let students test new problem solving strategies without the fear of failing.
7) Stimulate logical reasoning.
8) Require critical thinking skills.
9) Allow the student to use trial and error strategies.
Check out the following games at Teachers Pay Teachers.
1) Beat the Teach – This is an addition and multiplication game. The two objectives are to practice math fact families and to use problem solving strategies. Procedures as well as detailed
instructions are given for each game, and two separate game boards are included. One game board is specifically made for addition, and the other one is for multiplication.
2) Big Number – This is a place value game that features seven different game boards. The game boards vary in difficulty, beginning with only two places, the ones and tens. Game Board #5 goes to the
hundred thousands place and requires the learner to decide where to place six different numbers. All the games have been developed to practice place value using problem solving strategies, reasoning,
and intelligent practice.
3) Bug Mania – This is two different games that provide motivation for the learner to practice addition, subtraction, and multiplication using positive and negative numbers. It is a fun game which is
easy to adapt to any grade level, for a whole class, or a small group of students. The games are simple to individualize since not every pair of students must use the same cubes or have the same
4) Bug Ya - My Students' "Favoritest" Game – Three games are included in this short resource packet. One is for addition and subtraction; the second is for multiplication, and the third game involves
the use of money. The second and third games may involve subtraction with renaming and addition with regrouping based on the numbers that are used. All the games have been developed to extend the
recall of facts through playful and skillful practice.
5) Contact - The object of this game is to touch as many other players’ squares as possible in order to receive the most points. The game can be simplified by allowing the players to use just two to
four operations, or it can be made more challenging by requiring that the players correctly use the order of operations. It is a fun and attention-grabbing way for students to practice basic math
facts and to use critical thinking without doing another “drill and kill” activity.
6) Could Be - This consists of two separate games, one for addition and one for multiplication. The objective of these games is to practice basic addition or multiplication facts by using the problem
solving strategy of logic. The games are designed to be used with the whole class, small groups, at centers, individually, or as a homework activity.
7) Digital Logic - In this game, one student thinks of a number while the other players, called Digit Detectives, must find out what it is. They do so by guessing numbers. These guesses are recorded
on a score card that is drawn by the students. The gathered information includes how many digits are correct and whether any of the digits are in the right place. This game can easily be adapted for
the upper grades by using three, four, or as many digits as appropriate. The more digits involved in the game, the more complex the game.
8) Dots Lots of Fun - This 12 page resource contains seven math games that use dominoes. A domino blackline is included on the last page of the handout. The games vary in difficulty; so instruction
can easily be differentiated. The games involve matching, finding sums, using <, >, and = signs, multiplication practice, and comparing fractions. A memory game is also included which makes an
excellent center activity. These games correlate well with the math curriculum, "Everyday Math".
9) Make A Difference - This is a math game for 2-4 players. Taking turns, each player rolls a die numbered 1-6 and then subtracts the number on the die from 10. The player then locates the answer in his or her column on the game board and places a marker. Play continues, and each marker moves up or down according to the solution to the subtraction problem. Students review subtraction facts while trying to be the
first one to reach the winning space. The game is easily made more challenging by requiring the players to subtract the number rolled on the die (numbered 5-10) from 15.
10) Race to the "Sum"mit – This is an addition game for 2-4 players. Taking turns, each player rolls two dice numbered 1-6 and then adds the numbers on the dice. The player then locates the answer
(the sum) in their column on the game board and places a marker. Play continues, and each player’s marker moves up or down according to the solution to the addition problem. Students review addition
facts while trying to be the first one to reach the “Sum”mit. The game is easily made more complex and challenging by replacing the original dice with two dice numbered 5-10.
11) Red Light, Green Light - This is two games in one. It can focus on either addition with regrouping (carrying) or subtraction with renaming (borrowing). The students each have a red light/green
light card. Red means stop; regrouping or renaming is necessary before the problem can be worked. Green means there is no regrouping or renaming necessary, and the student can proceed to work the problem.
12) Roll and Calculate - This math game is designed to practice the addition and subtraction of positive and negative numbers. Two players take turns rolling two dice numbered 1-6 and then add or
subtract the numbers based on the placement on the game board. The answer must be agreed upon before the next player takes his/her turn. This game is easily made more difficult by requiring the
players to use dice numbered 5-10. It is a fun and interesting way for students to review adding and subtracting positive and negative numbers without doing a “drill and kill” activity.
13) Sly Fox - This is a reading comprehension game for grades 1-5. Students play the game after reading any selection (reading, social studies, science, etc.). It can be played with teams of 4-6
students or with a small group of children. This game is very motivational for the students, and it provides an effective means of determining comprehension of any selection. The game can be
used during structured classroom time, indoor recess time, indoor lunch time, or just when students need additional help with a tutor or parent volunteer. After the children know how to play Sly Fox,
it can be used as a center activity.
14) Spell Down - This is a spelling game that your whole class can play. It allows every child to participate, and each child has a fair chance of winning the game. In addition, it is a quiet game
because nothing is repeated, not the spelling word or any letter that is said by a child or the teacher. The game is exciting and stimulating for the students. Game rules as well as six game
adaptations are part of the original handout.
Near-optimal hardness results and approximation algorithms for edge-disjoint paths and related problems
- In Proceedings of the ACM annual conference of the Special Interest Group on Data Communication (SIGCOMM’03), 2003
Cited by 34 (1 self)
We present a routing paradigm called PB-routing that utilizes steepest gradient search methods to route data packets. More specifically, the PB-routing paradigm assigns scalar potentials to network
elements and forwards packets in the direction of maximum positive force. We show that the family of PB-routing schemes are loop free and that the standard shortest path routing algorithms are a
special case of the PB-routing paradigm. We then show how to design a potential function that accounts for traffic conditions at a node. The resulting routing algorithm routes around congested areas
while preserving the key desirable properties of IP routing mechanisms including hop-byhop routing, local route computations and statistical multiplexing. Our simulations using the ns simulator
indicate that the traffic aware routing algorithm shows significant improvements in end-to-end delay and jitter when compared to standard shortest path routing algorithms. The simulations also
indicate that our algorithm does not incur too much control overhead and is fairly stable even when traffic conditions are dynamic.
- In Proceedings of the 15th ACM-SIAM Symposium on Discrete Algorithms (SODA), 2004
Cited by 26 (0 self)
Abstract Given a directed graph G = (V, E) with n vertices and a parameter l >= 1, we present an algorithm that finds a cut (set of edges) of size O((n^2/l^2) log^2(n/l)) whose removal separates every
pair of vertices (s,t) in G such that the minimum distance between s and t in G is at least l. This theorem implies a nearly tight analysis of the greedy algorithm for finding edge-disjoint paths in
directed graphs, and gives the best known approximation factor for this problem in terms of the number of vertices.
2001
Cited by 23 (3 self)
Network design problems, such as generalizations of the Steiner Tree Problem, can be cast as edge-cost-flow problems. An edge-cost-flow problem is a min-cost flow problem in which the cost of the flow equals the sum of the costs of the edges carrying positive flow.
- In Proceedings of the 15th Annual IEEE Conference on Computational Complexity, 2000
Cited by 21 (2 self)
We give a new proof showing that it is NP-hard to color a 3-colorable graph using just four colors. This result is already known [19], but our proof is novel as it does not rely on the PCP theorem,
while the one in [19] does. This highlights a qualitative difference between the known hardness result for coloring 3-colorable graphs and the factor n hardness for approximating the chromatic number
of general graphs, as the latter result is known to imply (some form of) PCP theorem [3].
- PROC. 36TH ANNUAL ACM SYMPOSIUM ON THEORY OF COMPUTING
Cited by 20 (3 self)
We study the approximability of two natural NP-hard problems. The first problem is congestion minimization in directed networks. In this problem, we are given a directed graph and a set of
source-sink pairs. The goal is to route all the pairs with minimum congestion on the network edges. The second problem is machine scheduling, where we are given a set of jobs, and for each job, there
is a list of intervals on which it can be scheduled. The goal is to find the smallest number of machines on which all jobs can be scheduled such that no two jobs overlap in their execution on any
machine. Both problems are known to be O(log n/loglog n)-approximable via the randomized rounding technique of Raghavan and Thompson. However, until recently, only Max SNP hardness was known for each
problem. We make progress in closing this gap by showing that both problems are Ω(log log n)-hard to approximate unless NP ⊆ DTIME(n^O(log log log n)).
- SIAM Journal on Discrete Mathematics, 1998
Cited by 17 (3 self)
A bidirected tree is the directed graph obtained from an undirected tree by replacing each undirected edge by two directed edges with opposite directions. Given a set of directed paths in a bidirected tree, the goal of the maximum edge-disjoint paths problem is to select a maximum-cardinality subset of the paths such that the selected paths are edge-disjoint. This problem can be solved optimally in polynomial time for bidirected trees of constant degree, but is MAXSNP-hard for bidirected trees of arbitrary degree. For every fixed ε > 0, a polynomial-time (5/3 + ε)-approximation algorithm is presented. Key words: approximation algorithms, edge-disjoint paths, bidirected trees. AMS subject classifications: 68Q25, 68R10.
2002
Cited by 17 (3 self)
In traditional multi-commodity flow theory, the task is to send a certain amount of each commodity from its start to its target node, subject to capacity constraints on the edges. However,
- In Proceedings of ICNP, 2004
Cited by 17 (2 self)
Efficient integration of a multi-hop wireless network with the Internet is an important research problem, and benefits several applications, such as wireless neighborhood networks and sensor
networks. In a wireless neighborhood network, a few Internet Transit Access Points (ITAPs), serving as gateways to the Internet, are deployed across the neighborhood; houses are equipped with
low-cost antennas, and form a multi-hop wireless network among themselves to cooperatively route traffic to the Internet through the ITAPs. In a sensor network, sensors collect measurement data and
send it through a multi-hop wireless network to the servers on the Internet via ITAPs. For both applications, placement of integration points between the wireless and wired network is a critical
determinant of system performance and resource usage. However there has been little work on this subject. In this paper, we explore the placement problem under three wireless link models. For each
link model, we develop algorithms to make informed placement decisions based on neighborhood layouts, user demands, and wireless link characteristics. We also extend our algorithms to provide fault
tolerance and handle significant workload variation. We evaluate our placement algorithms using both analysis and simulation, and show that our algorithms yield close to optimal solutions over a wide
range of scenarios we have considered.
1998
Cited by 15 (2 self)
In a packing integer program, we are given a matrix A and column vectors b, c with nonnegative entries. We seek a vector x of nonnegative integers which maximizes c^T x, subject to Ax ≤ b. The edge- and vertex-disjoint path problems together with their unsplittable flow generalization are NP-hard problems with a multitude of applications in areas such as routing, scheduling and bin packing. These two categories of problems are known to be conceptually related, but this connection has largely been ignored in terms of approximation algorithms. We explore the topic of approximating disjoint-path problems using polynomial-size packing integer programs. Motivated by the...
Generating Functions for Point Set Distances
Does the multi-set of point-to-point distances between N points on a line uniquely determine the configuration of points (up to reflection)? For N = 2 or 3 the answer is obviously yes. What about N =
4? In other words, can there exist two distinct configurations of points on the unit interval, say
0 < a < b < 1     and     0 < u < v < 1,
with the same multi-set of point-to-point distances? Neglecting the distance 1 common to both, these two configurations have the sets of distances
{a, b, b-a, 1-a, 1-b}     and     {u, v, v-u, 1-u, 1-v}
If the two configurations are to have the same multi-set of distances, the elementary symmetric functions of the elements of these two sets must be identical. Equating the sums of the two sets gives
2-a+b = 2-u+v, which implies that (b-a) = (v-u). Deleting those elements from their respective sets leaves us with
{a, b, 1-a, 1-b}     and     {u, v, 1-u, 1-v}
Clearly the largest element of the first set must be either b or 1-a, and the largest element of the second set must be either v or 1-u. If we identify b with v then the equation (b-a) = (v-u)
implies a = u, so the configurations are identical. On the other hand, if we identify b with 1-u then the equation (b-a) = (v-u) implies a = 1-v, so the configurations are reflections of each other.
These two cases cover the identifications of 1-a with v and 1-a with 1-u as well, so it follows that the multi-set of distances between four points on a line uniquely determines the configuration (up
to reflection).
Now what about 5 points? Suppose there are two distinct (ordered) configurations of 5 points on the unit interval, say 0 < a < b < c < 1 and 0 < q < r < s < 1.
The point-to-point distances for the first set are
a, b, c, 1, b-a, c-a, 1-a, c-b, 1-b, 1-c
and the distances for the second set are
q, r, s, 1, r-q, s-q, 1-q, s-r, 1-r, 1-s
If the two sets are isospectral then these are the same ten numbers as for the first set, possibly in some different order, so the sums of these two sets of distances should be equal. This leads to 4
-2a+2c = 4-2q+2s, from which it follows that
c - a = s - q     (1)
Now, notice that the largest distance in each set is 1-0, and the 2nd largest distance in the first set must be either c or 1-a. Likewise the 2nd largest distance in the second set must be s or 1-q.
So there are four possibilities
(I) If c = s then by (1) a = q and so b = r, and the sets are identical.
(II) If 1-a = 1-q then a = q and by (1) c = s, and so b = r, and the sets are again identical.
(III) If c = 1-q then (1) implies (1-q) -a = s-q, which gives s = 1-a and so b = 1-r, and the sets are just reflections of each other.
(IV) If 1-a = s then by (1) c = 1-q and so b = 1-r, and the sets are again just reflections of each other.
Notice that in each of the four cases we've established the correspondence between four of the five points, so the 5th point in the middle must also fall into the same pattern. Since all four cases
are symmetrical, consider just Case I, where we have established a = q and c = s. All the other distances match up and we're just left with the four quantities
b, b-a, c-b, 1-b     and     r, r-a, c-r, 1-r
which must be the same four numbers, possibly permuted in some way. Equating the sum of every product of two of them gives an equation
which implies either b = r or b = (c+1)/3 - r. On the other hand, equating the sum of every product of three of them gives another equation,
and if we substitute b = (c+1)/3 - r this equation says r = (c+1)/6, so again b = r. Likewise the other three cases lead to either equivalent or reflected sets, so the two sets are equivalent up to reflection.
So we've established that the multi-set of point-to-point distances for N = 2, 3, 4, or 5 points on a line uniquely determines the configuration up to reflection. On the other hand, it does not
determine the configuration for N = 6 points, as shown by the two sets of linear lattice points
Two distinct configurations of points on a line with identical multi-sets of point-to-point distances are called "isospectral". Another example of isospectral configurations of six points is shown
Incidentally, these two point-sets are not only isospectral, they also have no duplicated point-to-point distances. Such point-sets are sometimes called Golomb rulers, although some people reserve
that term for sets of N points that have the distances 1, 2, 3, ..., N(N-1)/2. Other people refer to these latter sets as "perfect Golomb rulers". A conjecture of Picard is that if X and Y are two
Golomb rulers (not necessarily perfect) of size N (not equal to 6) with the same set of point-to-point distances, then X = Y.
There are 36 "isospectral pairs" of linear lattice point sets with max distance less than or equal to 14. Of these 36 pairs there are 3, 5, 4, 23, and 1 for N = 6, 7, 8, 9, and 10 points
respectively. The smallest example for each of these values of N is listed below
This raises several interesting questions. For example, if we let f(N) denote the size of the smallest isospectral pair of linear lattice N-point sets, what can we say about the values of f(N)? They
clearly aren't monotonic, as already shown by f(9) < f(8). Asymptotically, does f(N) increase linearly with N? Also, what is the smallest isospectral triple of linear lattice points?
Another interesting aspect of these point sets concerns their representations as polynomials. For example, the point set {0,1,2,6,8,11} corresponds to the polynomial
p(x) = x^11 + x^8 + x^6 + x^2 + x + 1
It's clear that the generating function for the point-to-point distances is given by the product p(x)p(1/x), because the coefficient of x^k in this product is just the number of times the difference
between two exponents of x equals k. Of course, this counts each distance twice, positive and negative k, and also counts each of the "zero" distances between each point and itself.
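This correspondence is easy to check in a few lines. The sketch below (added here, not the author's) builds the distance multiset directly and also via the coefficients of p(x)p(1/x), using the set {0,1,2,6,8,11} above and its isospectral partner {0,1,6,7,9,11}:

```python
from collections import Counter

def distance_multiset(points):
    """Multiset of positive point-to-point distances for points on a line."""
    pts = sorted(points)
    return Counter(q - p for i, p in enumerate(pts) for q in pts[i + 1:])

def distance_gf(points):
    """Coefficients of p(x) * p(1/x), shifted by d = max coordinate so that
    the distance k lands at index k + d.  Assumes coordinates in [0, d]."""
    d = max(points)
    coeffs = [0] * (2 * d + 1)
    for p in points:
        for q in points:
            coeffs[p - q + d] += 1   # exponent of x is the difference p - q
    return coeffs

A = [0, 1, 2, 6, 8, 11]
B = [0, 1, 6, 7, 9, 11]
print(distance_multiset(A) == distance_multiset(B))  # True: isospectral
print(distance_gf(A) == distance_gf(B))              # True: equal products
```

The center coefficient of distance_gf is the number of points (the "zero" distances), exactly as described above.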
Notice that this applies just as well to point sets with more than one point at each location on the line. The coefficients of p(x) can be any value. In fact, they can even be fractional, irrational,
and/or complex. This allows us to represent the probability density distribution f(z) on the line by the polynomial
and then the density of the difference between two samples from this distribution can be found from the product p(x)p(1/x).
Of course we can "normalize" the polynomial p(1/x) by multiplying it by x^d where d is the degree of p. Let's let p'(x) denote this normalized polynomial x^d p(1/x). Thus, given the polynomial p(x)
corresponding to the six-point set, we have
p'(x) = x^11 p(1/x) = x^11 + x^10 + x^9 + x^5 + x^3 + 1
and the product p(x) p'(x) gives a shifted version of the point-to-point distances. Notice that if we set x = 2 these polynomials can be regarded as binary numbers. For example, the set
{0,1,2,6,8,11} corresponds to the number
2^11 + 2^8 + 2^6 + 2^2 + 2 + 1
or, in ordinary binary notation, 100101000111, which is equal to 2375 in decimal and corresponds to p(x). Reversing the digits gives the binary number 111000101001, which equals 3625 and corresponds
to p'(x). Now recall that the point set {0,1,2,6,8,11} is isospectral with the distinct point set {0,1,6,7,9,11}. Expressing this as a polynomial q(x) and its normalized reflection by q'(x), and
converting to binary numbers, these two polynomials correspond to the decimal integers 2755 and 3125 (which happens to equal 5^5).
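These numerical claims check out directly (a quick sketch, not from the original text):

```python
def reverse_binary(n: int) -> int:
    """Reverse the binary digits of n, i.e. evaluate the reflected
    polynomial p'(x) = x^d p(1/x) at x = 2."""
    return int(bin(n)[2:][::-1], 2)

X = sum(2 ** e for e in [0, 1, 2, 6, 8, 11])   # p(2)
Y = sum(2 ** e for e in [0, 1, 6, 7, 9, 11])   # q(2)
print(X, reverse_binary(X))   # 2375 3625
print(Y, reverse_binary(Y))   # 2755 3125
print(X * reverse_binary(X) == Y * reverse_binary(Y) == 8_609_375)  # True
```

The shared product 8,609,375 equals 5^6 * 19 * 29, which matches the factorizations 2375 = 5^3 * 19, 3625 = 5^3 * 29, 2755 = 5 * 19 * 29, 3125 = 5^5.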
Now, since {0,1,2,6,8,11} and {0,1,6,7,9,11} have the same multi-set of point-to-point distances, we know that
p(x) p'(x) = q(x) q'(x)
which implies in particular that p(2) p'(2) equals q(2) q'(2). In other words, we have (2375)(3625) = (2755)(3125). This occurs because these individual numbers factor as
2375 = 5^3 * 19,   3625 = 5^3 * 29,   2755 = 5 * 19 * 29,   3125 = 5^5
so both products equal 5^6 * 19 * 29 = 8,609,375.
A similar factorization results for every pair of isospectral sets. Here are a couple more examples
In general, if X and Y are two integers corresponding to isospectral point sets, then there are integers a,b,c,d > 1 such that
The first 12 non-trivial isospectral pairs of individual points on a line are represented by the pairs of decimal integers
We know that two point sets p and q are isospectral if and only if p(x)p'(x) = q(x)q'(x), and it follows that if p and q are isospectral we must have p(2)p'(2) = q(2)q'(2). However, is the converse true?
Somewhat surprisingly, it appears that the converse is true, at least over the range of numbers that I've checked. This is unexpected because in general we lose information when reducing the formal
polynomial p(x) to a specific value such as p(2). On the other hand, the evaluations increase so rapidly that it's unlikely two unrelated polynomials would give the same value at x = 2. So we would
expect counter-examples to be rare, but this still leaves open the question of whether they exist at all. I've checked all point sets (with integer coordinates) with max distance less than 16, and
the condition xx' = yy' is both necessary and sufficient for this range, which includes 73 non-trivial isospectral pairs of point sets consisting of up to 12 points in each set. I'd be interested to
see a counter-example, i.e., a pair of integers x,y such that xx' = yy' but for which the corresponding point sets are not isospectral.
The above generalizes nicely to other bases, i.e., a relation between distance sets for points on a line and ordinary digit reversal and multiplication. Specifically, if XX' = YY' where U' denotes
the number produced by reversing the digits of U in the base b, then the point sets corresponding to X and Y (and X' and Y') in the base b are isospectral. For example, consider the case x = 5589 and
y = 4761. The base 4 representations of these numbers are X = 1113111 and Y = 1022121,
and the digit reversals of these numbers are X' = 1113111 and Y' = 1212201,
which have the decimal values X' = 5589 (because X happens to be palindromic, but that's not typical) and Y' = 6561. Notice that XX' = YY' = 31236921, so we expect that the point sets corresponding
to X and Y in base 4 are isospectral. The point set corresponding to a given number u in the base b is produced by placing d[i] points at a distance i from the origin, where d[i] is the ith digit of
u. Thus the particular number x above corresponds to a set with one point at coordinate 0, one point at coordinate 1, one point at coordinate 2, three points at coordinate 3, one point at coordinate
4, one point at coordinate 5, and one point at coordinate 6. In other words, the ith digit signifies how many points to place at coordinate i on the line.
Now determine the set of distances between each point and each other point in this set. If we do this for the point sets corresponding to X and Y above we find that they have the same multi-set of
point-to-point distances. So this nicely generalizes the base-2 case (where we just have at most one point at each coordinate). The smallest isospectral pairs in the first few bases are shown below
(as decimal numbers)
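The defining check for any base b can be reproduced directly in a few lines; a Python sketch, applied to the base-4 pair X = 5589, Y = 4761 from the text:

```python
from collections import Counter

# Base-b digits of n, least significant first; digit i is the number
# of points placed at coordinate i on the line.
def digits(n, b):
    d = []
    while n:
        d.append(n % b)
        n //= b
    return d

# Multiset of point-to-point distances for the point set encoded by n.
def distance_multiset(n, b):
    pts = [i for i, k in enumerate(digits(n, b)) for _ in range(k)]
    return Counter(pts[j] - pts[i] for i in range(len(pts))
                                   for j in range(i + 1, len(pts)))

print(distance_multiset(5589, 4) == distance_multiset(4761, 4))  # True
```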
Again this raises the question of whether the numerical relation XX' = YY' for any particular base is sufficient as well as necessary for X and Y to be isospectral. I know of no reason that it should
be sufficient, but I also haven't found any counter-examples.
Incidentally, the existence of distinct isospectral arrangements of points in one-dimensional space can also be expressed in terms of a combinatorial proposition involving sequences of integers.
Given a finite ordered set of non-zero integers A = {a[1], a[2], ... a[k]}, let S{A} denote the multi-set of all the sums of consecutive elements of A. Also, define the "reversal" of A as A' = {a[k],
... a[2], a[1]}. (Obviously we have S{A} = S{A'}.) A pair of distinct isospectral arrangements of points corresponds to a pair of distinct ordered sequences A and B such that S{A} = S{B}.
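In code, S{A} is a one-liner, and the gap sequences of the two isospectral sets used earlier ({0,1,2,6,8,11} and {0,1,6,7,9,11}) provide a concrete pair with S{A} = S{B} (a Python sketch):

```python
from collections import Counter

# S{A}: the multiset of all sums of consecutive elements of A.
def consecutive_sums(a):
    return Counter(sum(a[i:j]) for i in range(len(a))
                               for j in range(i + 1, len(a) + 1))

# Gaps between successive points of {0,1,2,6,8,11} and {0,1,6,7,9,11}:
A = [1, 1, 4, 2, 3]
B = [1, 5, 1, 2, 2]
print(consecutive_sums(A) == consecutive_sums(B))  # True
```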
An application of generating functions to higher dimensional configurations is discussed in Isospectral Point Sets in Higher Dimensions.
Return to MathPages Main Menu
FOM: Classes = sets + truth
Volker Halbach Volker.Halbach at uni-konstanz.de
Thu Feb 10 22:44:21 EST 2000
Here are some comments and questions on Jeff Ketland's posting of 1 Feb.
Technical remarks:
PA(S) is PA plus "there is a full inductive satisfaction class"
1. PA(S) and ACA are L_{PA}-conservatively interpretable in each other.
The relation of the respective systems with arithmetical induction only is
more complicated. ACA_0 is easily shown to be conservative over PA (Harvey
Friedman provided a fairly general proof strategy for this and more
advanced results in a recent posting).
John Burgess has emphasized that one wants to have conservativeness proofs
that can be formalized in relatively weak systems. Till recently there was
only a complicated model-theoretic proof by Kotlarski, Krajewski and
Lachlan of the conservativeness of PA(S)_0 over PA. In particular, PA(S)_0
is not easily interpretable in ACA_0 (although it has been suggested in the
literature that arithmetical truth (satisfying the Tarski-clauses) is
definable in ACA_0).
In fact, one can show that a truth predicate satisfying the Tarski clauses
cannot be defined in ACA_0 (this is an easy consequence of Lachlan's theorem),
though ACA_0 defines a truth predicate satisfying the T-sentences.
2. I am sorry that I caused the impression that DeVidi and Solomon
conjectured something like ZFC(S)=MK with respect to their set-theoretic
content. ZFC(S) looks much weaker than MK.
Jeff asked whether ZFC(S)_0 is equivalent to NBG. It seems that NBG can be
embedded in ZFC(S)_0 (in essentially the same way as ACA_0 can be embedded
in PA(S)_0). The other direction is harder. I doubt that one can define a
truth predicate in NBG that commutes with all connectives and quantifiers,
because something like Lachlan's theorem should show again that such a
truth definition in NBG is impossible. Again, this does not contradict
Mostowski's result that NBG proves the T-sentences (for sentences not
containing class variables).
Volker Halbach
Universitaet Konstanz
Fachgruppe Philosophie
Postfach 5560
78434 Konstanz Germany
Office phone: 07531 88 3524
Fax: 07531 88 4121
Home phone: 07732 970863
More information about the FOM mailing list
Math Tools Newsletter
MATH TOOLS NEWSLETTER - MAY 2, 2008 - No. 63
***FREE ONLINE OPPORTUNITY FOR TEACHERS OF GRADES 5-9
Technology Tools for Thinking and Reasoning about Probability
The Math Forum will host a new online workshop for teachers
who work with students in 5th through 9th grade. Teachers will
investigate some mathematics topics common to middle school
curricula within the theme of probability. In this context
they will explore the Math Tools digital library and several
software tools that contribute in some way to mathematical
understanding, problem solving, reflection and discussion.
Each participant's $100 workshop fee is being covered by a
National Science Foundation (NSF) grant, affording participants
the convenience of anytime, anywhere online learning.
The workshop will be offered May 12 - June 23.
Certificates of completion will be available for participants
finishing the six weeks of interaction. For an overview with
additional details, visit:
Deadline for applications is Monday, May 5, 2008.
As you browse the catalog, please take a moment to rate a
resource, comment on it, review it -- start a new or join
a discussion!
***FEATURED TOOL
Tool: WordPress Math Publisher
Ron Fredericks
Allows a blogger to publish math equations. More details
and screen shots available at the site.
***FEATURED TOOL
Tool: The Coordinate (Cartesian) Plane - Math Open Reference
John Page
An interactive applet and associated web page that
describe the concept of the coordinate plane (Cartesian
Plane). The applet shows the plane, its axes, origin and
related controls. The user can drag a point around and
see the coordinates change, and click anywhere to create
new points. The origin can be dragged to emphasize or
eliminate certain quadrants.
***FEATURED TPOW
Technology PoW: Spinners
Annie Fetter
Choose which spinner is most likely to have produced the
given experimental results.
***FEATURED SUPPORT MATERIAL
Support Material: Algebra Nspirations
a.m. productions llc.
This new video series allows Algebra 1 teachers to
integrate Texas Instruments' new TI-Nspire™ handheld.
Real-world scenarios combined with dynamic footage and
easy-to-follow animated sequences for the calculator
keystrokes provide a student-friendly presentation.
***FEATURED DISCUSSION
the use of technology
DawnS81 explains, "Some teachers have different viewpoints
on the use of technology in the classroom. I am writing to
get opinions from other teachers."
***FEATURED DISCUSSION
Is a rhombus a kite?
LFS posts to an ongoing discussion, "Does the 'standard'
definition of quadrilateral assume convexness?"
CHECK OUT THE MATH TOOLS SITE:
Math Tools http://mathforum.org/mathtools/
Register http://mathforum.org/mathtools/register.html
Discussions http://mathforum.org/mathtools/discuss.html
Research Area http://mathforum.org/mathtools/research/
Developers Area http://mathforum.org/mathtools/developers/
Newsletter Archive http://mathforum.org/mathtools/newsletter/
.--------------.___________) \
|//////////////|___________[ ]
`--------------' ) (
The Math Forum @ Drexel -- 2 May 2008
the principle of explosion
If you accept a few simple laws of classical logic, and you accept that a statement and its negation are both true, then everything else is simultaneously true and false. Colloquially, "from a
contradiction everything is true" or "from a contradiction, everything follows". Sometimes this is known as the Principle of Explosion, because if you accept a contradiction, then your logical system
blows up in your face, so to speak.
It's not hard to prove any other statement from a contradiction. Suppose you grant me that both P and its negation -P are true (e.g. it is simultaneously true that I am and am not the Pope). Let R be
any other statement (e.g. "roses are blue".) Then, the statement E = "either P or R is true" is itself true because P is true. However, P is also false (i.e. -P is true), thus we conclude that R is
true because we have eliminated the case that P is true (this is a disjunctive syllogism). We conclude that if I am and am not the Pope, then roses are blue.
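This derivation can be machine-checked. In Lean 4, for instance, the whole principle is a one-liner (here `absurd` is the standard library's packaging of the "from P and ¬P, anything follows" step):

```lean
-- Ex falso quodlibet: from P and its negation, any proposition R follows.
theorem explosion (P R : Prop) (hp : P) (hnp : ¬P) : R :=
  absurd hp hnp
```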
Note that we could have taken the negation "roses are not blue" also, and we could have proven that to be true. So given a contradiction, everything is true, and everything is also false. Our logical
system becomes trivial and there's nothing interesting left to prove or disprove.
Now, humans being what humans are, sometimes they are troubled by the notion that from a contradiction everything follows. Unfortunately, the only way to believe a contradiction and not conclude that
everything follows from it is to discard one of the logical laws used to prove this. In the proof that I provided, really the only logical law you could reject is disjunctive syllogism. That's a
pretty big law to reject. So, for example, if I was certain that everyone is either dead or alive, and I knew that you weren't dead, then I would be unable to conclude that you're alive. Or if today
either is Sunday or it's any other day, and I know it's not any other day, then I can't conclude that today it's Sunday. Either way, you have to reject a lot of classical logic if you want to believe
in a contradiction, or you have to accept everything as true or false and give up logic altogether.
The logical systems that accept contradictions, sometimes useful in software, are known as paraconsistent logic. For almost all everyday use, though, it is much easier to accept the usual logical
laws and to reject all contradictions.
Advanced Microeconomics
Advanced Microeconomics - Preface
The goal of this book is to provide graduate-level foundations for microeconomics. It will assume proficiency in advanced mathematics such as calculus, set theory, and optimization. Many readers may
wish to start with the lower-level Principles of Microeconomics.
Table of Contents
Last modified on 11 April 2011, at 02:09
baffled...bijective arithmetic &congruence
May 30th 2008, 01:13 PM #1
Apr 2008
baffled...bijective arithmetic &congruence
consider arithmetic modulo 14. Determine which of the following functions are bijective.If it is bijective determine the inverse function.
i.) $f: \mathbb{Z}_{14} \rightarrow \mathbb{Z}_{14}: f([a]_{14})=[a]_{14}\cdot[3]_{14};$
ii.) $f: \mathbb{Z}_{14} \rightarrow \mathbb{Z}_{14}: f([a]_{14})=[a]_{14}\cdot[4]_{14};$
confused need help.
consider arithmetic modulo 14. Determine which of the following functions are bijective.If it is bijective determine the inverse function.
i.) $f: \mathbb{Z}_{14} \rightarrow \mathbb{Z}_{14}: f([a]_{14})=[a]_{14}\cdot[3]_{14};$
ii.) $f: \mathbb{Z}_{14} \rightarrow \mathbb{Z}_{14}: f([a]_{14})=[a]_{14}\cdot[4]_{14};$
confused need help.
$\mathbb{Z}_{14}$ is a cyclic group with respect to addition, and 3 and 14 are relatively prime. That means 3 is a generator of the group. So $f$ is both 1-1 and onto. $3^{-1}=5$ in $\mathbb{Z}_{14}$.
So $f^{-1}([a]_{14})=[a]_{14} \cdot [5]_{14}$
This can be verified by function composition.
For ii) The range of f is the even elements so the function is not onto.
I hope this helps.
Good luck.
I don't understand how $3^{-1}=5$ though? Thanks
The inverse of 3 modulo 14
The inverse d of 3 is such that $3d = 1 \mod 14$
Use the Euclidian algorithm to find it :
$\textbf{14}=\textbf{3} \times 4+{\color{red}2} \implies {\color{red}2}=\textbf{14}-\textbf{3} \times 4$
$\textbf{3}={\color{red}2}+1 \implies 1=\textbf{3}-{\color{red}2}=\textbf{3}-(\textbf{14}-\textbf{3} \times 4)=-\textbf{14}+{\color{blue}5} \times \textbf{3}$
---> $5 \times 3=1+14=1 \mod 14$
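As a quick sanity check of the answers in this thread (a Python sketch; `pow(3, -1, 14)` computes the modular inverse and needs Python 3.8+):

```python
# Multiplication by c on Z_14 is a bijection exactly when gcd(c, 14) = 1.
def mul_map(c, n=14):
    return [(a * c) % n for a in range(n)]

f3 = mul_map(3)
f4 = mul_map(4)
print(len(set(f3)))        # 14 -> multiplication by 3 is bijective
print(sorted(set(f4)))     # [0, 2, 4, 6, 8, 10, 12] -> by 4 is not onto
print(pow(3, -1, 14))      # 5, agreeing with the Euclidean computation
```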
Slader :: Homework Answers and Solutions
The Slader Solution Editor FAQ
1. Do not reproduce copyrighted material. This includes the exercise prompt from the textbook. This also includes parts or a whole solution from another copyrighted solution manual. Your solution
must be your own original work.
2. Every solution needs a result and an explanation. Remember that users looking at your solutions may be confused. Be clear and detailed!
3. The editor writes math expressions in LaTeX, a math markup language. Every math expression needs to be surrounded by dollar signs ($x+5=3$). You can leave text outside of the dollar signs. Get an
angry red x? That means your LaTeX has an error in it.
4. Are we missing a button for an operation that you need? For now, you can look it up on a web LaTeX resource. Email us at contributor@slader.com with what we should add to the editor.
How does this equation editor thing work?
Short answer: The editor renders LaTeX code. However, don't worry if you don't know what that is! Even if you think LaTeX is kinky synthetic rubber stuff, it's all right: just write as you would
normally. Just remember to put dollar signs ("$") around math expressions.
This would be an example of correct code:
The area of a circle is $\pi r^{2}$.
And this will almost certainly set our computers on fire:
The area of a circle is \pi r^{2}.
What if a solution already exists?
If you can offer a better explanation, you should upload your solution! Users vote on the best solution for each problem. When your solution is voted up, you will receive the royalties.
Why do I have to use the right side of the template?
You can format the template any way you like (by using the rows/columns buttons on the top right of the editor), but we highly recommend the two-column layout. Write out the steps of the problem on
the left and an explanation on the right. Assume the people reading your work are still learning the math involved, and be as clear as possible. Explanations are important because they are what will
separate your solutions from others’ solutions. The best solution will gain the most revenue, and the “best” solution is frequently the one that is the easiest to understand.
What’s the difference between the result and the explanation?
The result (the bottom part of the template) is the back-of-the-book answer - what you would see if you flipped to the back of your textbook to check your work. The rest of the template is for your
Why is there is an angry red X here?
That means there's an error in your code. Just go through what you wrote and see if you have a misplaced dollar sign. Other common errors include a misplaced backslash (\) or a forgotten brace/
bracket. If you are still having trouble, you can check your LaTeX on a parser like mathurl.
You don’t have a button I need.
Tell us here! Even if we don’t have a button for the operation you need yet, you can use the relevant LaTeX command to insert it. There are several LaTeX resources online than you can consult.
I’ve finished adding a solution. Why hasn’t it showed up on the site?
We have a team of trained moderators who check submitted solutions before they display. That way, we can make sure that the quality of submissions is high. Don’t worry - we have solutions up on the
site soon after their submission.
What are the minimum standards for submission?
At the very least, you should have a one-sentence explanation (for very simple problems) and a result. We will not accept spam, bad LaTeX, or solutions written in languages other than English.
If you would like to appeal a moderator solution, please email contributor@slader.com with the textbook, page number, and exercise number.
Why doesn’t the rendered code look like what I wrote?
Good question. It depends on what looks wrong with it. See if your problem is mentioned below.
The editor ignores all my line breaks/returns/new lines!
LaTeX is just designed like that, if you want to force a line break you need to put in a double backslash ("\\") at the end of the line.
My math looks funky.
Dollar signs! DOLLAR SIGNS! Put dollar signs around your math expressions! If you want to get super fancy with your code you can put double dollar signs ("$$") around your math expressions, which
will put the expression on its own line and center it. Using LaTeX is like using HTML: you need to close your dollar signs.
This is wrong:
This is right:
What if I need to talk about money?
Need the $ to actually display in your answer? You have to “escape” it like this:
$ \$ $
Confusing, isn’t it? The backslash before the second $ tells the parser to ignore what comes directly after it. The parser skips right over the dollar sign and doesn’t freak out. So if you need to
write $16.57, you would write
$ \$16.57 $
Some of my text is all squished together/I'm missing spaces in my math.
You probably put the text in math mode, which means there are dollar signs around your text. LaTeX ignores extra spaces generally, which is why you may be missing spaces in your text or math. Make
sure that your math expressions are surrounded by dollar signs and that your text is not. You can combine text and expressions if you need to. The next line is correct LaTeX:
I love writing equations like $x=2x-24$ even when I don’t understand what $x$ is.
I tried to write a table and it looks like a mess.
If you need to write a table, look up the LaTeX code for it. Using tabs and newlines won't work, and additionally will drive one or more of our moderators insane. There are many LaTeX resources
available online.
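For what it's worth, a minimal two-column table in standard LaTeX looks like the sketch below (whether this particular editor accepts the full `tabular` environment is an assumption worth testing first):

```latex
\begin{tabular}{l l}
$2x + 6 = 10$ & Subtract 6 from both sides. \\
$2x = 4$      & Divide both sides by 2.     \\
$x = 2$       & This is the result.
\end{tabular}
```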
Quantitative evaluation study of four-dimensional gated cardiac SPECT reconstruction
In practice gated cardiac SPECT images suffer from a number of degrading factors, including distance-dependent blur, attenuation, scatter, and increased noise due to gating. Recently we proposed a
motion-compensated approach for four-dimensional (4D) reconstruction for gated cardiac SPECT, and demonstrated that use of motion-compensated temporal smoothing could be effective for suppressing the
increased noise due to lowered counts in individual gates. In this work we further develop this motion-compensated 4D approach by also taking into account attenuation and scatter in the
reconstruction process, which are two major degrading factors in SPECT data. In our experiments we conducted a thorough quantitative evaluation of the proposed 4D method using Monte Carlo simulated
SPECT imaging based on the 4D NURBS-based cardiac-torso (NCAT) phantom. In particular we evaluated the accuracy of the reconstructed left ventricular myocardium using a number of quantitative
measures including regional bias-variance analyses and wall intensity uniformity. The quantitative results demonstrate that use of motion-compensated 4D reconstruction can improve the accuracy of the
reconstructed myocardium, which in turn can improve the detectability of perfusion defects. Moreover, our results reveal that while traditional spatial smoothing could be beneficial, its merit would
become diminished with the use of motion-compensated temporal regularization. As a preliminary demonstration, we also tested our 4D approach on patient data. The reconstructed images from both
simulated and patient data demonstrated that our 4D method can improve the definition of the LV wall.
Keywords: 4D reconstruction, gated cardiac SPECT, motion-compensated temporal smoothing, attenuation and scatter compensation
Comments on "Noncommutative geometry: Infinitesimal variables"

masoud (2007-06-28):
Dear Theo, Thanks for mentioning Hardy's nice book on divergent series. In between I read a bit more on Euler's method of computing zeta values. I can identify at least two methods that he used. His early success, as mentioned in Ayoub's article, was as follows, in modern notation. Let Li_2(x) = \sum x^n/n^2 be the `dilogarithm' function. He proved the identity Li_2(x) + Li_2 [...]

(2007-06-27):
Masoud, I don't know which procedure Euler used for zeta(2); he certainly had quite a collection of methods to make slowly-convergent series speed up. The very fine book "Divergent Series" by G.H. Hardy (1949) discusses a [...]

Chris:
There is an interesting entry level Wikipedia article on the Euler-Maclaurin summation formula at http://en.wikipedia.org/wiki/Euler-Maclaurin_formula. It points out the double-edged nature of the formula: to use it to approximate a series by a definite integral, as Euler did, or to approximate an integral by a series, as in MacLaurin's case. Nice post!

masoud (2007-06-22):
David Goss has just kindly pointed out a paper by Raymond Ayoub (`Euler and the Zeta function', American Math Monthly, Dec. 1974) where the issue of numerical computation of zeta values by Euler is explained very well. His early success was in 1731 where he showed zeta(2) = 1.644934 by an elaborate summation technique which increased the rate of convergence. His second computation was in [...]

masoud (2007-06-22):
Hi Alain, Thanks for this post. I just wanted to comment on Euler's book `Introductio in Analysis Infinitorum'. Indeed the numerical computation of zeta values is another achievement of Euler. The original series for the zeta function is slowly convergent (to find zeta(2) using this series within just six decimal digits one has to add a million terms!) So the original series, it seems to [...]
College Algebra and Trigonometry
ISBN: 9780321296429 | 0321296427
Edition: 1st
Format: Hardcover
Publisher: Addison Wesley
Pub. Date: 1/1/2008
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
H J Keselman
Affiliation: University of Manitoba
Country: Canada
Detail Information
1. A generally robust approach for testing hypotheses and setting confidence intervals for effect sizes
H J Keselman
Department of Psychology, University of Manitoba, 190 Dysart Road, Winnipeg, Manitoba, Canada
Psychol Methods 13:110-29. 2008
..In an online supplement, the authors use several examples to illustrate the application of an SAS program to implement these statistical methods...
2. Many tests of significance: new methods for controlling type I errors
H J Keselman
Department of Psychology, University of Manitoba, Winnipeg, Manitoba, Canada
Psychol Methods 16:420-31. 2011
..05. We demonstrate with two published data sets how more hypotheses can be rejected with k-FWER methods compared to FWER control...
3. Adaptive robust estimation and testing
H J Keselman
Department of Psychology, University of Manitoba, Winnipeg, Manitoba, Canada R3T 2N2
Br J Math Stat Psychol 60:267-93. 2007
..With regard to the power to detect non-null treatment effects, we found that the choice among the methods depended on the degree of non-normality and variance heterogeneity. Recommendations are
4. A comparative study of robust tests for spread: asymmetric trimming strategies
H J Keselman
University of Manitoba, Winnipeg, Manitoba, Canada
Br J Math Stat Psychol 61:235-53. 2008
5. Pairwise multiple comparison test procedures: an update for clinical child and adolescent psychologists
H J Keselman
Department of Psychology, University of Manitoba, Winnipeg, Manitoba, Canada R3T 2N2
J Clin Child Adolesc Psychol 33:623-45. 2004
..The newer methods are intended to provide additional sensitivity to detect treatment group differences and provide tests that are robust to the effects of variance heterogeneity, nonnormality,
or both...
6. A generally robust approach to hypothesis testing in independent and correlated groups designs
H J Keselman
Department of Psychology, University of Manitoba, Winnipeg, Manitoba, Canada
Psychophysiology 40:586-96. 2003
..We also illustrate, with examples from the psychophysiological literature, the use of a new computer program to obtain numerical results for these solutions...
7. Testing treatment effects in repeated measures designs: trimmed means and bootstrapping
H J Keselman
Department of Psychology, University of Manitoba, Winnipeg, Canada
Br J Math Stat Psychol 53:175-91. 2000
..Neither approach particularly benefited from adopting bootstrapped critical values. Recommendations are provided to researchers regarding when each approach is best...
8. Robust tests for the multivariate Behrens-Fisher problem
Lisa M Lix
Department of Community Health Sciences, Faculty of Medicine, University of Manitoba, 408 727 McDermot Avenue, Winnipeg, Man, R3E 3P5, Canada
Comput Methods Programs Biomed 77:129-39. 2005
..Recommendations are provided on the specific data-analytic conditions under which these tests should be adopted...
9. Pairwise multiple comparisons: a model comparison approach versus stepwise procedures
Robert A Cribbie
Department of Psychology, York University, Toronto, Canada
Br J Math Stat Psychol 56:167-82. 2003
..The protected version of the model selection approach selected the true model a significantly greater proportion of times than the stepwise procedures and, in most cases, was not affected by
variance heterogeneity and non-normality...
10. Effect of non-normality on test statistics for one-way independent groups designs
Robert A Cribbie
Department of Psychology, York University, Toronto, Canada
Br J Math Stat Psychol 65:56-73. 2012
..The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are
11. The new and improved two-sample T test
H J Keselman
University of Manitoba, Winnipeg, Manitoba, Canada
Psychol Sci 15:47-51. 2004
..We find that a transformation for skewness combined with a bootstrap method improves Type I error control and probability coverage even if sample sizes are small...
12. The analysis of repeated measures designs: a review
H J Keselman
Department of Psychology, University of Manitoba, 190 Dysart Road, Winnipeg, Manitoba, Canada R3T 2N2
Br J Math Stat Psychol 54:1-20. 2001
..Additional topics discussed include analyses for missing data and tests of linear contrasts...
13. An examination of the robustness of the empirical Bayes and other approaches for testing main and interaction effects in repeated measures designs
H J Keselman
Department of Psychology, University of Manitoba, Winnipeg, Canada
Br J Math Stat Psychol 53:51-67. 2000
..On the other hand, the Huynh and Keselman et al. procedures were generally robust to these same pairings of covariance matrices and group sizes...
14. Controlling the rate of Type I error over a large set of statistical tests
H J Keselman
Department of Psychology, University of Manitoba, Winnipeg, Canada
Br J Math Stat Psychol 55:27-39. 2002
..05 value. Accordingly, we recommend the Benjamini and Hochberg (1995, 2000) methods of Type I error control when the number of tests in the family is large...
15. An alternative to Cohen's standardized mean difference effect size: a robust parameter and confidence interval in the two independent groups case
James Algina
Department of Educational Psychology, University of Florida, Gainesville, FL 32611 7047, USA
Psychol Methods 10:317-28. 2005
..Over the range of distributions and effect sizes investigated in the study, coverage probability was better for the percentile bootstrap confidence interval...
16. Repeated measures one-way ANOVA based on a modified one-step M-estimator
Rand R Wilcox
Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA
Br J Math Stat Psychol 56:15-25. 2003
..Methods based on a simple modification of a one-step M-estimator that address the problems with trimmed means are examined. Several omnibus tests are compared, one of which performed well in
simulations, even with a sample size of 11...
17. Multivariate tests of means in independent groups designs. Effects of covariance heterogeneity and nonnormality
Lisa M Lix
University of Manitoba
Eval Health Prof 27:45-69. 2004
..A numeric example illustrates the statistical concepts that are presented and a computer program to implement these robust solutions is introduced...
18. Comparing measures of the 'typical' score across treatment groups
Abdul R Othman
Universiti Sains Malaysia, Malaysia
Br J Math Stat Psychol 57:215-34. 2004 | {"url":"http://www.labome.org/expert/canada/university/keselman/h-j-keselman-1271386.html","timestamp":"2014-04-16T16:49:08Z","content_type":null,"content_length":"21709","record_id":"<urn:uuid:930f9c9c-55f2-46ac-9231-a5d2a3c0c0a7>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
We consider a fashion discounter distributing its many branches with integral multiples from a set of available lot-types. For the problem of approximating the branch and size dependent demand
using those lots we propose a tailored exact column generation approach assisted by fast algorithms for intrinsic subproblems, which turns out to be very efficient on our real-world instances.
We computationally assess policies for the elevator control problem by a new column-generation approach for the linear programming method for discounted infinite-horizon Markov decision problems.
By analyzing the optimality of given actions in given states, we were able to provably improve the well-known nearest-neighbor policy. Moreover, with the method we could identify an optimal
parking policy. This approach can be used to detect and resolve weaknesses in particular policies for Markov decision problems. | {"url":"http://opus.ub.uni-bayreuth.de/opus4-ubbayreuth/solrsearch/index/search/searchtype/collection/id/13330/start/0/rows/10/author_facetfq/J%C3%B6rg+Rambau","timestamp":"2014-04-18T23:21:56Z","content_type":null,"content_length":"23889","record_id":"<urn:uuid:b37e7d27-c320-451b-a074-8d4d418d20a5>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Taylor series approximation and error
February 13th 2011, 08:00 AM #1
Junior Member
Sep 2008
Mesnil, Mauritius
Taylor series approximation and error
The question:
For small values of $x$, how good is the approximation $\cos x \approx 1$ ?
By "small values", how small does the question want x to be? Less than 1? Less than 0.1? Less than 0.01?
Also, how do I proceed to get an approximation of the error? After expressing $\cos x$ in terms of its Taylor series, at a point $c=0$, I get the error term in terms of $O(x^2)$.
Any help to extend my reasoning would be much appreciated.
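A sketch of the standard answer (assuming the intended approximation is $\cos x \approx 1$, consistent with the $O(x^2)$ error term mentioned above): Taylor's theorem with the Lagrange remainder gives an explicit bound.

```latex
\cos x = 1 - \frac{\cos\xi}{2}\,x^2
\quad\text{for some } \xi \text{ between } 0 \text{ and } x,
\qquad\text{hence}\qquad
\lvert \cos x - 1 \rvert \le \frac{x^2}{2}.
```

So "small" just means small enough that $x^2/2$ is below your tolerance: for instance $\lvert x\rvert \le 0.1$ guarantees an error of at most $0.005$.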
February 13th 2011, 09:13 AM #2 | {"url":"http://mathhelpforum.com/trigonometry/171113-taylor-series-approximation-error.html","timestamp":"2014-04-17T01:59:33Z","content_type":null,"content_length":"35061","record_id":"<urn:uuid:f68d643e-65b2-42b2-a180-c6223b212716>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00213-ip-10-147-4-33.ec2.internal.warc.gz"} |
Advanced Studies in Pure Mathematics
2007; 432 pp; hardcover
Volume: 45
ISBN-10: 4-931469-38-8
ISBN-13: 978-4-931469-38-9
List Price: US$102
Member Price: US$81.60
Order Code: ASPM/45
Since its birth, algebraic geometry has been closely related to and deeply motivated by number theory. The modern study of moduli spaces and arithmetic geometry demonstrates that these two areas have
many important techniques and ideas in common. With this close relation in mind, the RIMS conference "Moduli Spaces and Arithmetic Geometry" was held at Kyoto University during September 8-15, 2004
as the 13th International Research Institute of the Mathematical Society of Japan.
This volume is the outcome of this conference and consists of thirteen papers by invited speakers, including C. Soulé, A. Beauville and C. Faber, and other participants. All papers, with two
exceptions by C. Voisin and Yoshinori Namikawa, treat moduli problem and/or arithmetic geometry. Algebraic curves, Abelian varieties, algebraic vector bundles, connections and D-modules are the
subjects of those moduli papers. Arakelov geometry and rigid geometry are studied in arithmetic papers. In the two exceptions, integral Hodge classes on Calabi-Yau threefolds and symplectic
resolutions of nilpotent orbits are studied.
Volumes in this series are freely available electronically 5 years post-publication.
Published for the Mathematical Society of Japan by Kinokuniya, Tokyo, and distributed worldwide, except in Japan, by the AMS.
Graduate students and research mathematicians interested in algebra and algebraic geometry.
• K. Yoshioka -- Moduli spaces of twisted sheaves on a projective variety
• D. Huybrechts and P. Stellari -- Appendix. Proof of Caldararu's conjecture
• C. Voisin -- On integral Hodge classes on uniruled or Calabi-Yau threefolds
• Y. Namikawa -- Birational geometry of symplectic resolutions of nilpotent orbits
• T. Abe -- The moduli stack of rank-two Gieseker bundles with fixed determinant on a nodal curve
• A. Beauville -- Vector bundles on curves and theta functions
• A. Moriwaki -- On the finiteness of abelian varieties with bounded modular height
• N. Nitsure -- Moduli of regular holonomic \(\mathcal{D}_X\)-modules with natural parabolic stability
• I. Nakamura and K. Sugawara -- The cohomology groups of stable quasi-abelian schemes and degenerations associated with the \(E_8\)-lattice
• C. Soulé -- Semi-stable extensions on arithmetic surfaces
• C. Consani and C. Faber -- On the cusp form motives in genus 1 and level 1
• S. Mukai -- Polarized K3 surfaces of genus thirteen
• K. Fujiwara and F. Kato -- Rigid geometry and applications
• M. Inaba, K. Iwasaki, and M. Saito -- Moduli of stable parabolic connections, Riemann-Hilbert correspondence and geometry of Painlevé equation of type VI, part II | {"url":"http://www.ams.org/bookstore?fn=50&arg1=salenumber&ikey=ASPM-45","timestamp":"2014-04-21T15:04:45Z","content_type":null,"content_length":"16688","record_id":"<urn:uuid:8e23eada-19ed-427d-a97d-251ec9a1241c>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00507-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rent's Rule Estimate of Wire Length
If we know the physical area of each leaf cell we can estimate the area of each component in a hierarchic design (sum of parts plus percentage swell).
Rent's rule pertains to the organization of computing logic, specifically the relationship between the number of external signal connections to a logic block and the number of logic gates in the
logic block, and has been applied to circuits ranging from small digital circuits to mainframe computers [Wikipedia].
Rent's rule gives a simple power-law relationship, and the wire-length distribution (with good placement) follows an equally predictable pattern.
With a hierarchic design, where we have the area use of each leaf cell, even without placement, we can follow a net's trajectory up and down the hierarchy and apply Rent's Rule.
Hence we can estimate a signal's length by sampling a power law distribution whose 'maximum' is the square root of the area of the lowest-common-parent component in the hierarchy. | {"url":"http://www.cl.cam.ac.uk/teaching/1213/SysOnChip/materials/sg7power/zhpa0dc2404d.html","timestamp":"2014-04-18T10:48:36Z","content_type":null,"content_length":"2687","record_id":"<urn:uuid:be911be5-0a02-411d-8151-2500103c0670>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00411-ip-10-147-4-33.ec2.internal.warc.gz"} |
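A toy sketch of the two ingredients above, Rent's power law and the lowest-common-parent length bound. The Rent exponent, the power-law shape alpha and the 1-unit minimum length are illustrative assumptions, not values from this course:

```python
import math
import random

def rent_terminals(t, gates, p):
    """Rent's rule: expected external connections T = t * G**p for a block
    of G gates, where t is the average terminals per gate and p the Rent
    exponent (typically around 0.5-0.8 for logic)."""
    return t * gates ** p

def sample_net_length(parent_area, alpha=2.0, rng=random):
    """Draw one net length from a power law p(l) ~ l**-alpha truncated at
    sqrt(area) of the net's lowest common parent block, via inverse-CDF
    sampling on [1, l_max]."""
    l_max = math.sqrt(parent_area)
    a = 1.0 - alpha
    u = rng.random()
    return (u * (l_max ** a - 1.0) + 1.0) ** (1.0 / a)
```

Summing sampled lengths over all nets, with each net's cap taken from its lowest common parent in the hierarchy, gives a placement-free wire-length estimate.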
Math in the Media
March 2002
More backstage math. The February 1 2002 Science ran ``Beautiful Mind's Math Guru Makes Truth = Beauty,'' a piece by Dana Mackenzie about Dave Bayer, the Barnard College professor who advised
the director on mathematical matters. The focus is on the problem that the Nash character in the movie proposes to his MIT class:
For the purposes of the script, Bayer needed a problem that could be stated in terms appropriate for the course (which seems to be the old M351 Advanced Calculus for Engineers), was subtle enough to
be out of reach for most undergraduates, but ``accessible enough so that Connelly's character, a bright physics student, might concoct a plausible, though incorrect, solution.'' He also wanted a
problem that mathematicians would recognize as worthwhile if they thought about it. But he hoped they wouldn't. Mackenzie quotes him as saying: ``If you put enough effort into making the math
credible, at a certain point you win the war. They're caught up in the movie and barely have time to recognize it's a problem in de Rham cohomology.''
``Highly enjoyable and interesting people.'' Us? David Auburn, the playwright author of Proof, says in fact: ``The more time I spent with mathematicians, the more I found they were highly enjoyable and
interesting people to be around.'' This quotation is highlighted in a January 27 2002 Boston Sunday Globe article by their staff writer Maureen Dezell, entitled ``Setting dramas of love and loss in
the world of mathematics.'' Dezell spends most of her time on Proof and its author, who says he began writing ``a story about sisters fighting over something they found after their parents die,'' and
chose a mathematical proof as the disputed object ``because its authorship could be called into question the way a painting or a manuscript couldn't.'' Auburn was particularly pleased that the
mathematical community took the play seriously enough to organize a symposium (at NYU in October, 2000) on the topic. ``All these prominent mathematicians flew in and saw the show and spoke on
panels.'' He reports that Ben Shenkman, the young actor who played Hal, and who never got beyond calculus in college, turned to him at one point and said: ``I feel like George Clooney at a medical
convention.''
Math on 42nd street. ``Solve this problem and win a Snickers bar.'' The challenge is thrown by Prof. George Nobl, who holds forth on 42nd Street between 5th and 6th Avenues every Wednesday at noon.
He stands by an easel with a whiteboard and a sheaf of problems. This activity is reported in the New York Times for February 7, 2002: ``Problems on the Street, Solvable with a Pencil'' by Yilu Zhao.
The problem of the hour, when the accompanying photograph was taken, is ``Pete sells a six inch pizza for $6.00. How much should he charge for a twelve inch pizza?'' Professor Nobl's goal is ``to
promote the fun of math,'' and to further his own pedagogical agenda. ``It's so easy to teach math right. Why teach it wrong?'' Wrong means using rote learning and memorization. Right is instilling
understanding. He says, according to Zhao, that once a student truly grasps a rule, getting the correct answer is easy. And he is now seeking grants to start a nonprofit group to hire a few teachers
who would put up similar stands around the city.
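The intended catch in the pizza problem is presumably that a fair price scales with area, not diameter; a worked version under that (hypothetical) proportional-pricing assumption:

```latex
\frac{A_{12}}{A_{6}} = \frac{\pi \cdot 6^2}{\pi \cdot 3^2} = \frac{36}{9} = 4,
\qquad\text{so } 4 \times \$6.00 = \$24.00 .
```

The tempting answer of \$12.00 scales the price with diameter rather than area, which is exactly the rote-versus-understanding distinction Nobl is after.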
Fourier transform of the fossil record. This research, reported in a January 3 2002 Letter to Nature by James Kirchner of UC Berkeley, uses spectral analysis methods ``to measure how fossil
extinction and origination rates fluctuate across different timescales.'' His data were compilations of fossil marine animal families and genera over the last 500 million years (Myr); his conclusion:
``Compared with extinction rates, origination rates have equal or greater spectral power at long wavelengths (>100 Myr), but much lower spectral power at short wavelengths (<25 Myr).'' Implication of
this analysis: ``either the processes regulating originations have more inertia than those driving extinctions, or that origination events tend to be diverse and local, whereas extinctions
(particularly mass extinctions) tend to be coherent and global.'' What this means for us: ``If the continuing anthropogenic extinction episode turns out to be comparable to those in the fossil record
(which is not yet clear), my analysis shows that diversification rates are unlikely to accelerate enough to keep pace with it. Thus, widespread depletion of biodiversity would probably be permanent
on multimillion-year timescales.''
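The spectral comparison Kirchner performs can be made concrete with a toy example; the series below is synthetic (it is not his fossil data), and the naive transform is only for illustration:

```python
import math

def power_spectrum(x):
    """Power |X_k|^2 at each DFT frequency k = 0 .. n/2 for a real series.
    A naive O(n^2) transform for clarity; real work would use an FFT."""
    n = len(x)
    spec = []
    for k in range(n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spec.append(re * re + im * im)
    return spec

# Synthetic 'rate' series with a single long-wavelength cycle (period n/3):
series = [math.cos(2 * math.pi * 3 * t / 32) for t in range(32)]
```

All the power lands in the k = 3 bin; comparing power summed over low-k (long-wavelength) bins against high-k bins is the kind of contrast the Letter draws between origination and extinction rates.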
Large-scale sign error. Sign errors are the plague of calculation. But they are usually not as interesting as the one that ensnared two groups in 1995. In one of the Feynman integrals for the
computation of the ``predicted value of the muon's magnetism,'' using the Standard Model, they were ``misled by an extra minus sign.'' When last year a group at Brookhaven National Laboratory
obtained an experimental value that was significantly different, the discrepancy was interpreted by many physicists as possible evidence of supersymmetry. But no; when Marc Knecht and Andreas
Nyffeler (Center for Theoretical Physics, Marseille) refined the calculation, they found a different sign for that term. The 1995 groups rechecked their work and found where they had gone wrong; the
predicted and observed values are now only slightly farther apart than expected errors would allow. The story is told in ``Sign of Supersymmetry Fades Away'' by Adrian Cho, Science (News of the
Week), December 21, 2001.
``The Shape of the Universe: Ten Possibilities'' is the title of a long and lavishly illustrated article in the American Scientist for September-October 2001. The authors are Colin Adams and Joey
Shapiro, respectively professor and undergraduate at Williams College. The article starts from scratch with an explanation of the topology of surfaces, and then leaps into three dimensional
manifolds. Given that the universe is Euclidean (average curvature zero), as recent observations of the cosmic microwave background radiation (CMB) seem to imply, and orientable, there are only ten
possible topologies. Six are compact (finite volume); four are not; all are illustrated. Adams and Shapiro end by explaining how more accurate CMB measurements in the near future may give us a better
idea of the shape we're in.
-Tony Phillips
Stony Brook
Math in the Media Archive | {"url":"http://cust-serv@ams.org/news/math-in-the-media/mmarc-03-2002-media","timestamp":"2014-04-24T06:54:12Z","content_type":null,"content_length":"17318","record_id":"<urn:uuid:ce92cc84-a751-40b1-b434-3d0a2511eed4>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00067-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: st: Kwallis in a loop [now p-value of the Kwallis]
From "Herve STOLOWY" <stolowy@hec.fr>
To <statalist@hsphsun2.harvard.edu>
Subject RE: st: Kwallis in a loop [now p-value of the Kwallis]
Date Sat, 12 Nov 2005 18:41:25 +0100
Dear Nick:
Sorry for this late reply. I tried your program and it works perfectly. I thank you very much.
Although I perfectly read what you wrote about -makematrix-, I still like this command and my level of programming is not good enough to write the program you designed for me.
So, I would have an additional question, hopefully more simple.
With a ranksum test, I learnt from prior postings how to get a matrix:
makematrix m1, from(r(z) 2*normprob(-abs(r(z)))) label : ranksum prov_ass, by( sodas2cl)
I would like to do the same with -kwallis-. In your program, I found the following formula which seems to be the computation of the p-value of the kwallis:
max(chiprob(r(df), r(chi2_adj) + 1e-20),.0001)
Then, I wrote the following command with -makematrix-:
makematrix m2, from(r(chi2_adj) max(chiprob(r(df), r(chi2_adj) + 1e-20),.0001) ) label : kwallis prov_ass, by(sodas3cl)
Unfortunately, it does not work. I get the following error message:
invalid '.0001'
I obviously did something wrong.
Best regards
Coordinateur du Département/Head of Department
HEC Paris
Département Comptabilité Contrôle de gestion / Dept of Accounting and Management Control
1, rue de la Liberation
78351 - Jouy-en-Josas
Tel: +33 1 39 67 94 42 - Fax: +33 1 39 67 70 86
>>> n.j.cox@durham.ac.uk 11/06/05 11:39 PM >>>
This all looks more complicated than needed,
especially given the alternative of
a few seconds' copy and paste.
-tabstat- is not really designed for matrix
output, as is illustrated by the fact that you
have to massage things with -tabstatmat-, which
I rather regret writing, as it tempts people into
bad habits. Similarly, -makematrix- was a kind of
experiment, which was interesting at least for me,
but it is disconcerting whenever people use it
and there are better tools available for what
they are doing. It doesn't support -by:-,
partly because other things do. In your case,
you intend using it to put a scalar in a 1 X 1
matrix, which is not necessary. -statsmat-, on
the other hand, does seem closer to what you
want to do here.
The main part of that is wanting a matrix,
because you want to use -mat2txt-, because you want
a tab-delimited file for output. Or so
I understand.
Let's generalise to (1) several
variables and keep (2) a single grouping
variable, and write a program on the
grounds that we can write something
a little more widely applicable with
not much more effort. This produces
a matrix with Ns, means and sums
of ranks and the Kruskal-Wallis P-value.
I rather gave up on where the variable
names should go.
program pourherve
version 9
syntax varlist [if] [in], by(varname) matrix(str)
marksample touse
markout `touse' `by', strok
qui count if `touse'
if r(N) == 0 error 2000
tempname col work
qui tab `by' if `touse'
matrix `col' = J(`r(r)',1,.)
matrix colnames `col' = "P-value"
qui foreach v of local varlist {
tempvar rank
egen `rank' = rank(`v') if `touse'
statsmat `rank', by(`by') s(N sum mean) matrix(`work')
mat `work' = `work' , `col'
kwallis `v', by(`by')
matrix `work'[1,4] = ///
max(chiprob(r(df), r(chi2_adj) + 1e-20),.0001)
matrix `matrix' = nullmat(`matrix') \ `work'
drop `rank'
}
end
With your data
pourherve index2 prov_ass, by(sodas3cl) matrix(m)
mat2txt, matrix(m) saving(table5)
may help you along. Given other variables just
change the varlist.
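As a cross-check for readers outside Stata: chiprob(df, H) is simply the upper chi-square tail evaluated at the Kruskal-Wallis H statistic. A stdlib-only Python sketch of the three-group case (the function is illustrative, not Stata's implementation; for df = 2 the chi-square tail has the closed form exp(-H/2)):

```python
import math
from itertools import chain

def kruskal_wallis_3(groups):
    """Kruskal-Wallis H for exactly three groups of distinct values, plus
    the chi-square p-value. With k = 3 groups, df = 2, and the upper tail
    of a chi-square(2) variate is exp(-H/2). No tie correction is applied
    (the sketch assumes all pooled values are distinct)."""
    assert len(groups) == 3
    pooled = sorted(chain.from_iterable(groups))
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # 1-based ranks, no ties
    n = len(pooled)
    h = 12.0 / (n * (n + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)
    return h, math.exp(-h / 2)
```

For other numbers of groups the tail has no such closed form, which is why Stata's chiprob() (or an incomplete-gamma routine) is the general tool.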
Herve STOLOWY
> Thanks again for your help. I applied your suggestion and it
> works. The principle of the loop is fine.
> I have a new question concerning how to get the p-value of
> the kwallis test.
> Here is my loop:
> local action replace
> foreach var of varlist index2 prov_ass {
> egen rank_`var' = rank(`var') if sodas3cl < .
> bysort sodas3cl : egen ranksum_`var' = sum(rank_`var')
> by sodas3cl : egen rankmean_`var' = mean(rank_`var')
> statsmat `var', s(N) by(sodas3cl) mat(m1)
> tabstat rankmean_`var', by(sodas3cl) save
> tabstatmat m2
> matrix m3 = m1,m2
> statsmat rankmean_`var', s(N 0) mat(m4)
> matrix m5 = m3\m4
> makematrix m6, from(r(chi2_adj) formula to add) label :
> kwallis `var', by(sodas3cl)
> matrix m7 = m5\m6
> mat2txt, matrix(m7) saving(table5) `action'
> local action append
> }
> As the p-value of the test is not returned as a scalar, I
> learnt from prior postings to the list that I should add the
> relevant formula. For the Mann-Whitney U test, I know that
> the formula is: 2*normprob(-abs(r(z))).
> Unfortunately, I don't know the right formula for the Kwallis
> test. I searched in the kwallis ado file but don't find it. I
> certainly miss it.
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2005-11/msg00442.html","timestamp":"2014-04-19T02:21:42Z","content_type":null,"content_length":"10069","record_id":"<urn:uuid:bf229a17-3579-4312-9c98-9505802c59a6>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Internal type theory, in: TYPES ’95, Types for Proofs and
- Annals of Pure and Applied Logic, 2003
"... 1 Introduction Induction-recursion is a powerful definition method in intuitionistic type theory in the sense of Scott ("Constructive Validity") [31] and Martin-L"of [17, 18, 19]. The first
occurrence of formal induction-recursion is Martin-L"of's definition of a universe `a la T ..."
Cited by 28 (11 self)
1 Introduction Induction-recursion is a powerful definition method in intuitionistic type theory in the sense of Scott ("Constructive Validity") [31] and Martin-Löf [17, 18, 19]. The
first occurrence of formal induction-recursion is Martin-L"of's definition of a universe `a la Tarski [19], which consists of a set U | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=14370465","timestamp":"2014-04-18T19:46:15Z","content_type":null,"content_length":"12029","record_id":"<urn:uuid:3483c622-47ce-4bb6-bb23-c2f9458abac1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about psychology on Serious Stats
Multicollinearity tutorial
I just posted brief multicollinearity tutorial on my other blog (loosely based on the material from the Serious Stats book).
You can read it here.
It is now increasingly common for experimental psychologists (among others) to use multilevel models (also known as linear mixed models) to analyze data that used to be shoe-horned into a repeated
measures ANOVA design. Chapter 18 of Serious Stats introduces multilevel models by considering them as an extension of repeated measures ANOVA models that can cope with missing outcomes, time-varying
covariates and can relax the sphericity assumption of conventional repeated measures ANOVA. They can also deal with other – less well known – problems such as having stimuli that are a random factor
(e.g., see this post on my Psychological Statistics blog). Last, but not least, multilevel generalised linear models allow you to have discrete and bounded outcomes (e.g., dichotomous, ordinal or
count data) rather than be constrained by assuming a continuous response with normal errors.
There are two main practical problems to bear in mind when switching to the multilevel approach. First, the additional complexity of the approach can be daunting at first – though it is possible to
build up gently to more complex models. Recent improvements in availability of software and support (textbooks, papers and online resources) also help. The second is that as soon as a model departs
markedly from a conventional repeated measures ANOVA, correct inferences (notably significance tests and interval estimates such as confidence intervals) can be difficult to obtain. If the usual
ANOVA assumptions hold in a nested, balanced design then there is a known equivalence between the multilevel model inferences using t or F tests and the familiar ANOVA tests (and in this case the
expected output of the tests is the same). The main culprits are boundary effects (which affect inferences about variances and hence most tests of random effects) and working out the correct degrees
of freedom (df) to use for your test statistic. Both these problems are discussed in Chapter 18 of the book. If you have very large samples an asymptotic approach (using Wald z or chi-square
statistics) is probably just fine. However, the further you depart from conventional repeated measures ANOVA assumptions the harder it is to know how large a sample needs to be before the asymptotics
kick in. In other words, the more attractive the multilevel approach the less you can rely on the Wald tests (or indeed the Wald-style t or F tests).
The solution I advocate in Serious Stats is either to use parametric bootstrapping or Markov chain Monte Carlo (MCMC) approaches. Another approach is to use some form of correction to the df or test
statistic such as the Welch-Satterthwaite correction. For multilevel models with factorial type designs the recommended correction is generally the Kenward-Roger approximation. This is implemented in
SAS, but (until recently) not available in R. Judd, Westfall and Kenny (2012) describe how to use the Kenward-Roger approximation to get more accurate significance tests from a multilevel model using
R. Their examples use the newly developed pbkrtest package (Halekoh & Højsgaard, 2012) – which also has functions for parametric bootstrapping.
My purpose here is to contrast the MCMC and Kenward-Roger correction (ignoring the parametric bootstrap for the moment). To do that I'll go through a worked example – looking to obtain a
significance test and a 95% confidence interval (CI) for a single effect.
The pitch data example
The example I'll use is for the pitch data from Chapter 18 of the book. This experiment (from a collaboration with Tim Wells and Andrew Dunn) involves looking at the pitch of male voices
making attractiveness ratings with respect to female faces. The effect of interest (for this example) is whether average pitch goes up or down for higher ratings (and if so, by how much). A
conventional ANOVA is problematic because this is a design with two fully crossed random factors – each participant (n = 30) sees each face (n = 32) and any conclusions ought to generalise both to
other participants and (crucially) to other faces. Furthermore, there is a time-varying covariate – the baseline pitch of the numerical rating when no face is presented. The significance tests or CIs
reported by most multilevel modelling packages will also be suspect. Running the analysis in the R package lme4 gives parameter estimates and t statistics for the fixed effects but no p values or
CIs. The following R code loads the pitch data, checks the first few cases, loads lme4 and runs the model of interest. (You should install lme4 using the command install.packages('lme4') if you haven't done so already).
pitch.dat <- read.csv('http://www2.ntupsychology.net/seriousstats/pitch.csv')
head(pitch.dat)
library(lme4)
pitch.me <- lmer(pitch ~ base + attract + (1|Face) + (1|Participant), data=pitch.dat)
Note the lack of df and p values. This is deliberate policy by the lme4 authors; they are not keen on giving users output that has a good chance of being very wrong.
The Kenward-Roger approximation
This approximation involves adjusting both the F statistic and its df so that the p value comes out approximately correct (see references below for further information). It won’t hurt too much to
think of it as a turbocharged Welch-Satterthwaite correction. To get the corrected p value from this approach first install the pbkrtest package and then load it. The approximation is computed using
the KRmodcomp() function. This takes the model of interest (with the focal effect) and a reduced model (one without the focal effect). The code below installs and loads everything, runs the reduced
model and then uses KRmodcomp() to get the corrected p value. Note that it may take a while to run (it took about 30 seconds on my laptop).
library(pbkrtest)
pitch.red <- lmer(pitch ~ base + (1|Face) + (1|Participant), data=pitch.dat)
KRmodcomp(pitch.me, pitch.red)
The corrected p value is .0001024. The result could be reported as a Kenward-Roger corrected test with F(1, 118.5) = 16.17, p = .0001024. In this case the Wald z test would have given a p value of
around .0000435. Here the effect is sufficiently large that the difference in approaches doesn’t matter – but that won’t always be true.
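For reference, the two-sided Wald z p-value mentioned here is just twice the standard normal tail area, which can be written with the complementary error function; a minimal Python illustration (not part of the post's R workflow):

```python
import math

def wald_p_two_sided(z):
    """Two-sided p-value for a Wald z statistic: twice the upper tail of
    the standard normal distribution, via the complementary error function."""
    return math.erfc(abs(z) / math.sqrt(2.0))
```

The asymptotic worry in the text is precisely that this normal tail can be too optimistic when the effective sample size is modest.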
The MCMC approach
The MCMC approach (discussed in Chapter 18) can be run in several ways – with the lme4 functions or those in MCMCglmm being fairly easy to implement. Here I’ll stick with lme4 (but for more complex
models MCMCglmm is likely to be better).
First you need to obtain a large number of Monte Carlo simulations from the model of interest. I’ll use 25,000 here (but I often start with 1,000 and work up to a bigger sample). Again this may take
a while (about 30 or 40 seconds on my laptop).
pitch.mcmc <- mcmcsamp(pitch.me, n = 25000)
For MCMC approaches it is useful to check the estimates from the simulations. Here I’ll take a quick look at the trace plot (though a density plot is also sensible – see chapter 18).
This produces the following plot (or something close to it):
The trace for the fixed effect of attractiveness looks pretty healthy – the thick black central portion indicating that it doesn't jump around too much. Now we can look at the 95% confidence interval
(strictly a Bayesian highest posterior density or HPD interval – but for present purposes it approximates to a 95% CI).
This gives the interval estimate [0.2227276, 0.6578456]. This excludes zero so it is statistically significant (and MCMCglmm would have given us an MCMC-derived estimate of the p value).
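The HPD interval being reported is conceptually simple: the shortest interval containing the requested share of the posterior draws. A purely illustrative Python sketch of that computation (in R, the coda package's HPDinterval() does this for MCMC output):

```python
import math

def hpd_interval(draws, prob=0.95):
    """Shortest interval containing at least `prob` of the draws: sort,
    slide a window of ceil(prob * n) order statistics, keep the narrowest.
    Appropriate for a unimodal posterior."""
    s = sorted(draws)
    n = len(s)
    k = int(math.ceil(prob * n))
    best = min(range(n - k + 1), key=lambda i: s[i + k - 1] - s[i])
    return s[best], s[best + k - 1]
```

For a skewed posterior this shortest interval will be asymmetric around the mean, unlike an equal-tailed percentile interval.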
Comparison and recommendation
Although the Kenward-Roger approach is well-regarded, for the moment I would reccomend the MCMC approach. The pbkrtest package is still under development and I could not always get the approximation
or the parametric bootstrap to work (but the parametric bootstrap can also be obtained in other ways – see Chapter 18).
The MCMC approach is also preferable in that it should generalize safely to models where the performance of the Kenward-Roger approximation is unknown (or poor) such as for discrete or ordinal
outcomes. It also provides interval estimates rather than just p values. The main downside is that you need to familiarize yourself with some basic MCMC diagnostics (e.g., trace and density plots at
the very least) and be willing to re-run the simulations to check that the interval estimates are stable.
Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of
Personality and Social Psychology, 103, 54-69.
Halekoh, U., & Højsgaard, S. (2012) A Kenward-Roger approximation and parametric bootstrap methods for tests in linear mixed models – the R package pbkrtest. Submitted to Journal of Statistical
Ben Bolker pointed out that future versions of lme4 may well drop the MCMC functions (which are limited, at present, to fairly basic models). In the book I mainly used MCMCglmm – which is rather good
at fitting fully crossed factorial models. Here is the R code for the pitch data. Using 50,000 simulations seems to give decent estimates of the attractiveness effect. Plotting the model object gives
both MCMC trace plots and kernel density plots of the MCMC estimates (hit return in the console to see all the plots).
library(MCMCglmm)
nsims <- 50000
pitch.mcmcglmm <- MCMCglmm(pitch ~ base + attract, random= ~ Participant + Face, nitt=nsims, data=pitch.dat)
plot(pitch.mcmcglmm)
Last but not least, anyone interested in the topic should keep an eye on the draft r-sig-mixed-modelling FAQ for a summary of the challenges and latest available solutions for multilevel inference in R (and other packages).
The companion web site for Serious stats is now live:
It includes a sample chapter (Chapter 15: Contrasts), data sets, R scripts for all the examples and supplementary material.
In Chapter 2 (Confidence Intervals) of Serious stats I consider the problem of displaying confidence intervals (CIs) of a set of means (which I illustrate with the simple case of two independent means). Later, in Chapter 16 (Repeated Measures ANOVA), I consider the trickier problem of displaying two or more means from paired or repeated measures. The example in Chapter 16 uses R functions from my recent paper reviewing different methods for displaying means for repeated measures (within-subjects) ANOVA designs (Baguley, 2012b). For further details and links see a brief summary on my psychological statistics blog. The R functions included a version for independent measures (between-subjects) designs, but this was rather limited, designed for comparison purposes (and not for actual use).
The independent measures case is relatively straight-forward to implement and I hadn’t originally planned to write functions for it. Since then, however, I have decided that it is worth doing.
Setting up the plots can be quite fiddly and it may be useful to go over the key points for the independent case before you move on to the repeated measures case. This post therefore adapts my code
for independent measures (between-subjects) designs.
The approach I propose is inspired by Goldstein and Healy (1995) – though other authors have made similar suggestions over the years (see Baguley, 2012b). Their aim was to provide a simple method for
displaying a large collection of independent means (or other independent statistics). At its simplest the method reduces to plotting each statistic with error bars equal to ±1.39 standard errors of
the mean. This result is a normal approximation that can be refined in various ways (e.g., by using the t distribution or by extending it to take account of correlations between conditions). Using a
Goldstein-Healy plot two means are considered different with 95% confidence if their two intervals do not overlap. In other words non-overlapping CIs are (in this form of plot) approximately
equivalent to a statistically significant difference between the two means with α = .05. For convenience I will refer to CIs that have this property as difference-adjusted CIs (to distinguish them
from conventional CIs).
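The 1.39 multiplier can be reconstructed from the non-overlap criterion (a sketch, using the normal approximation and assuming equal standard errors). Two intervals of half-width $k \times SE$ fail to overlap exactly when the means differ by more than $2k \times SE$; equating this with the 5% critical difference for two independent means gives:

$2k \times SE = 1.96 \times \sqrt 2 \times SE \Rightarrow k = \frac{1.96}{\sqrt 2} \approx 1.39$

Hence each mean gets error bars of roughly ±1.39 standard errors.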
It is important to realize that conventional 95% CIs constructed around each mean won’t have this property. For independent means they are usually around 40% too wide and thus will often overlap even
if the usual t test of their difference is statistically significant at p < .05. This happens because the variance of a difference is (in independent samples) equal to the sum of the variances of the
individual samples. Thus the standard error of the difference is around $\sqrt 2$ times too large (assuming equal variances). For a more comprehensive explanation see Chapter 3 of Serious stats or
Baguley (2012b).
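To spell out the arithmetic behind the 40% figure (assuming independent samples with equal variances and equal $n$):

$Var(\hat \mu _1 - \hat \mu _2) = \frac{\sigma ^2}{n} + \frac{\sigma ^2}{n} = 2\frac{\sigma ^2}{n}$

So the standard error of the difference is $\sqrt 2 \approx 1.41$ times the standard error of a single mean, and conventional per-mean 95% CIs are about 40% too wide for judging the difference between two means.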
What to plot
If you have only two means there are at least three basic options:
1) plot the individual means with conventional 95% CIs around each mean
2) plot the difference between means and a 95% CI for the difference
3) plot some form of difference-adjusted CI
Which option is best? It depends on what you are trying to do. A good place to start is with your reasons for constructing a graphical display in the first place. Graphs are not particularly good for formal inference, and other options (e.g., significance tests, reporting point estimates and CIs in text, likelihood ratios, Bayes factors and so forth) exist for reporting the outcome of formal hypothesis tests. Graphs are appropriate for informal inference. This includes exploratory data analysis, aiding the interpretation of complex patterns, and summarizing a number of simple patterns in a single display. If the patterns are very clear, informal inference might be sufficient. In other cases it can be supplemented with formal inference.
What patterns do the three basic options above reveal? Option 1) shows the precision around individual means. This readily supports inference about the individual means (but not their difference). For example, a true population value outside the 95% CI is considered implausible (and the observed mean would be different from that hypothesized value with p < .05 using a one sample t test).
Option 2) makes for a rather dull plot because it just involves a single point estimate for the difference in means and the 95% CI for the difference. If this is the only quantity of interest you’d
be better off just reporting the mean and 95% CI in the text. This has advantage of being more compact and more accurate than trying to read the numbers off a graph. [This is one reason that graphs
aren't optimal for formal inference; it can be hard, for instance, to tell whether a line includes or excludes zero when the difference is only just statistically significant or just statistically non-significant. With informal inference you shouldn't care whether p = .049 or p = .051, but whether there are any clear patterns in the data]
Option 3) shows you the individual means but calibrates the CIs so that you can tell if it is plausible that the sample means differ (using 95% confidence in the difference as a standard). Thus it
seems like a good choice for graphical display if you are primarily interested in the differences between means. For formal inference it can be supplemented by reporting a hypothesis test in the text
(or possibly a Figure caption).
It is worth noting that option 3) becomes even more attractive if you have more than two means to plot. It allows you to see patterns that emerge over the set of means (e.g., linear or non-linear
trends or – if n per sample is similar – changes in variances) and to compare pairs of means to see whether it is plausible that they are different.
In contrast, option 2) is rather unattractive with more than two means. First, with J means there are J(J-1)/2 differences and thus an unnecessarily cluttered graphical display (e.g., with J = 5 means there are 10 CIs to plot). Second, plotting only the differences can obscure important patterns in the data (e.g., an increasing or decreasing trend in the means or variances would be difficult to identify).
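The J(J-1)/2 count is just the number of unordered pairs of J means; a trivial check (in Python, purely illustrative):

```python
def n_pairwise(j):
    # number of unordered pairs (i.e., CIs for differences) among j means
    return j * (j - 1) // 2

print(n_pairwise(5))  # 10
```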
Difference-adjusted CIs using the t distribution
Where only a few means are to be plotted (as is common in ANOVA) it makes sense to take a slightly more accurate approach than the approximation originally proposed by Goldstein and Healy for large collections of means. This approach uses the t distribution. A similar approach is advocated by Afshartous and Preston (2010), who also provide R code for calculating multipliers for the standard errors using the t distribution (and an extension for repeated measures). My approach is similar, but involves calculating the margin of error (half width of the error bars) directly rather than computing a multiplier to apply to the standard error.
Difference-adjusted CIs for the mean of each sample from an independent measures (between-subjects) ANOVA design are given by Equation 3.31 of Serious stats:

$\hat \mu _j \pm t_{n_j - 1,\,1 - \alpha /2} \times \frac{\sqrt 2}{2} \times \hat \sigma _{\hat \mu _j}$

The $\hat \mu _j$ term is the mean of the jth sample (where samples are labeled j = 1 to J) and $\hat \sigma _{\hat \mu _j}$ is the standard error of that sample. The $t_{n_j - 1,\,1 - \alpha /2}$ term is the quantile of the t distribution with $n_j - 1$ degrees of freedom (where $n_j$ is the size of the jth sample) that includes 100(1 − α)% of the distribution.
Thus, apart from the $\frac{\sqrt 2}{2}$ term, this equation is identical to that for a 95% CI around the individual means, with the proviso that the standard error here is computed separately for each sample. This differs from the usual approach to plotting CIs for an independent measures ANOVA design, where it is common to use a pooled standard error computed from a pooled standard deviation (the root mean square error of the ANOVA). While a pooled error term is sometimes appropriate, it is generally a bad idea for graphical display of the CIs because it will obscure any patterns in the variability of the samples. [Nevertheless, where $n_j$ is very small it may make sense to use a pooled error term, on the grounds that each sample provides an exceptionally poor estimate of its population standard deviation.]
However, the most important change is the $\frac{\sqrt 2}{2}$ term. It creates a difference-adjusted CI by ensuring that the joint width of the margin of error around any two means is $\sqrt 2$ times larger than for a single mean. The division by 2 arises merely as a consequence of dealing jointly with two error bars. Their total has to be $\sqrt 2$ times larger and therefore each one needs only to be $\frac{\sqrt 2}{2}$ times its conventional value (for an unadjusted CI). This is discussed in more detail by Baguley (2012a; 2012b).
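As a quick numeric check of Equation 3.31 (a Python sketch with made-up numbers, not from the book's data; the t quantile for 9 degrees of freedom is hardcoded to avoid external dependencies):

```python
import math

n, sd = 10, 4.0                  # hypothetical sample: n = 10, sd = 4
se = sd / math.sqrt(n)           # standard error of the mean
t_crit = 2.262                   # t quantile, df = 9, 97.5th percentile

moe_conventional = t_crit * se                     # half-width of an ordinary 95% CI
moe_adjusted = t_crit * (math.sqrt(2) / 2) * se    # difference-adjusted (Equation 3.31)

print(round(moe_conventional, 2), round(moe_adjusted, 2))  # 2.86 2.02
```

The adjusted bars are about 71% as wide as the conventional ones, matching the $\frac{\sqrt 2}{2}$ factor.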
This equation should perform well (e.g., providing fairly accurate coverage) as long as variances are not very unequal and the samples are approximately normal. Even when these conditions are not
met, remember the aim is not to support formal inference. In addition, the approach is likely to be slightly more robust than ANOVA (at least to homogeneity of variance and unequal sample sizes). So
this method is likely to be a good choice whenever ANOVA is appropriate.
R functions for independent measures (between-subjects) ANOVA designs
Two R functions for difference-adjusted CIs in independent measures ANOVA designs are provided here. The first function bsci() calculates conventional or difference-adjusted CIs for a one-way ANOVA design:
bsci <- function(data.frame, group.var=1, dv.var=2, difference=FALSE, pooled.error=FALSE, conf.level=0.95) {
  data <- subset(data.frame, select=c(group.var, dv.var))
  fact <- factor(data[[1]])
  dv <- data[[2]]
  J <- nlevels(fact)
  N <- length(dv)
  ci.mat <- matrix(, J, 3, dimnames=list(levels(fact), c('lower', 'mean', 'upper')))
  ci.mat[,2] <- tapply(dv, fact, mean)
  n.per.group <- tapply(dv, fact, length)
  if(difference==TRUE) diff.factor <- 2^0.5/2 else diff.factor <- 1
  if(pooled.error==TRUE) {
    for(i in 1:J) {
      moe <- summary(lm(dv ~ 0 + fact))$sigma/(n.per.group[[i]])^0.5 * qt(1-(1-conf.level)/2, N-J) * diff.factor
      ci.mat[i,1] <- ci.mat[i,2] - moe
      ci.mat[i,3] <- ci.mat[i,2] + moe
    }
  }
  if(pooled.error==FALSE) {
    for(i in 1:J) {
      group.dat <- subset(data, data[1]==levels(fact)[i])[[2]]
      moe <- sd(group.dat)/sqrt(n.per.group[[i]]) * qt(1-(1-conf.level)/2, n.per.group[[i]]-1) * diff.factor
      ci.mat[i,1] <- ci.mat[i,2] - moe
      ci.mat[i,3] <- ci.mat[i,2] + moe
    }
  }
  ci.mat
}
plot.bsci <- function(data.frame, group.var=1, dv.var=2, difference=TRUE, pooled.error=FALSE, conf.level=0.95, xlab=NULL, ylab=NULL, level.labels=NULL, main=NULL, pch=21, ylim=c(min.y, max.y), line.width=c(1.5, 0), grid=TRUE) {
  data <- subset(data.frame, select=c(group.var, dv.var))
  if (is.factor(data[[1]])==FALSE) data[[1]] <- factor(data[[1]])
  if (missing(level.labels)) level.labels <- levels(data[[1]])
  dv <- data[[2]]
  J <- nlevels(data[[1]])
  ci.mat <- bsci(data.frame=data.frame, group.var=group.var, dv.var=dv.var, difference=difference, pooled.error=pooled.error, conf.level=conf.level)
  moe.y <- max(ci.mat) - min(ci.mat)
  min.y <- min(ci.mat) - moe.y/3
  max.y <- max(ci.mat) + moe.y/3
  if (missing(xlab)) xlab <- "Groups"
  if (missing(ylab)) ylab <- "Confidence interval for mean"
  plot(0, 0, ylim = ylim, xaxt = "n", xlim = c(0.7, J + 0.3), xlab = xlab,
    ylab = ylab, main = main)
  points(ci.mat[,2], pch = pch, bg = "black")
  index <- 1:J
  segments(index, ci.mat[, 1], index, ci.mat[, 3], lwd = line.width[1])
  segments(index - 0.02, ci.mat[, 1], index + 0.02, ci.mat[, 1], lwd = line.width[2])
  segments(index - 0.02, ci.mat[, 3], index + 0.02, ci.mat[, 3], lwd = line.width[2])
  axis(1, index, labels=level.labels)
}
The default is difference=FALSE (on the basis that these are the CIs most likely to be reported in text or tables). The second function plot.bsci() uses the former function to plot the means and CIs; the default here is difference=TRUE (on the basis that difference-adjusted CIs are likely to be more useful for graphical display). For both functions the default is an unpooled error term (pooled.error=FALSE) and a 95% confidence level (conf.level=0.95). Each function takes its input as a data frame and assumes that the grouping variable is in the first column and the dependent variable in the second column. If the appropriate variables are in different columns, the correct columns can be specified with the arguments group.var and dv.var. The plotting function also takes some standard graphical parameters (e.g., for labels and so forth).
The following examples use the diagram data set from Serious stats. The first line loads the data set (if you have a live internet connection). The second line generates the difference-adjusted CIs. The third line plots the difference-adjusted CIs. Note that the grouping variable (factor) is in the second column and the DV is in the fourth column.
diag.dat <- read.csv('http://www2.ntupsychology.net/seriousstats/diagram.csv')
bsci(diag.dat, group.var=2, dv.var=4, difference=TRUE)
plot.bsci(diag.dat, group.var=2, dv.var=4, ylab='Mean description quality', main = 'Difference-adjusted 95% CIs for the Diagram data')
In this case the graph looks like this:
It should be immediately clear that the segmented diagram condition (S) tends to have higher scores than the text (T) or picture (P) conditions, while the full diagram (F) condition is somewhere in between. This matches the uncorrected pairwise comparisons where S > P = T, S = F, and F = P = T.
At some point I will also add a function to plot two-tiered error bars (combining options 1 and 3). For details of the extension to repeated measures designs see Baguley (2012b). The code and data sets are available here.
Afshartous D., & Preston R. A. (2010). Confidence intervals for dependent data: equating nonoverlap with statistical significance. Computational Statistics and Data Analysis. 54, 2296-2305.
Baguley, T. (2012a, in press). Serious stats: A guide to advanced statistics for the behavioral sciences. Basingstoke: Palgrave.
Baguley, T. (2012b). Calculating and graphing within-subject confidence intervals for ANOVA. Behavior Research Methods, 44, 158-175.
Goldstein, H., & Healy, M. J. R. (1995). The graphical presentation of a collection of means. Journal of the Royal Statistical Society. Series A (Statistics in Society), 158, 175-177.
Schenker, N., & Gentleman, J. F. (2001). On judging the significance of differences by examining the overlap between confidence intervals. The American Statistician, 55, 182-186.
This is a blog to accompany my forthcoming book “Serious stats” published by Palgrave.
Baguley, T. (2012, in press). Serious stats: A guide to advanced statistics for the behavioral sciences. Basingstoke: Palgrave.
The book is available for pre-order (e.g., via amazon) and instructors should be able to pre-order inspection copies via Macmillan in the US (or Palgrave in the UK).
The proofs have been checked and returned and I am hoping for a publication date of May 2012.
Posted by Thom Baguley on November 11, 2013
Using multilevel models to get accurate inferences for repeated measures ANOVA designs
Posted by Thom Baguley on April 18, 2013
Serious stats companion web site now live: sample chapter, data and R scripts
Posted by Thom Baguley on March 23, 2012
Independent measures (between-subjects) ANOVA and displaying confidence intervals for differences in means
Posted by Thom Baguley on March 18, 2012
Serious stats: A guide to advanced statistics for the behavioral sciences
Posted by Thom Baguley on February 1, 2012 | {"url":"http://seriousstats.wordpress.com/tag/psychology/","timestamp":"2014-04-18T18:11:30Z","content_type":null,"content_length":"102815","record_id":"<urn:uuid:9beedf6c-3528-4046-be58-4e86336c4abb>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basics of Electronic Devices
This ultimate unique application is for all students across the world. It covers 141 topics of Fundamentals of Electronics Devices in detail. These 141 topics are divided in 5 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on-the-go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of topics Covered in this application are:
1. Transient and a-c conditions: time variation of stored charge
2. Photodiode
3. P-N-P-N diode
4. Semiconductor Controlled Rectifier
5. Light Emitting Diode
6. Tunnel Diode
7. TRIAC
8. DIAC
9. Insulated Gate Bipolar Transistor
10. GUNN diode- Basic Principle
11. GUNN diode-The Transferred Electron Mechanism
12. PNPN diode- Forward Blocking & conducting mode
13. Solar Cell- Working Principle
14. Solar Cell- I-V Characteristics
15. Rectifiers
16. The Breakdown Diode
17. Photodetectors
18. Photodiode Equations
19. PIN PHOTODIODE
20. Avalanche Photodiode
21. Light Emitting Materials
22. IMPATT diode
23. IMPATT diode operation
24. Semiconductor lasers
25. Semiconductor lasers- Under forward biased
26. Heterojunction Lasers Operation
27. Metal semiconductor field effect transistors (MESFET)
28. MESFET- The High Electron Mobility Transistor (HEMT)
29. The Metal Insulator Semiconductor FET (MISFET)
30. MISFET under different operating condition
31. The Ideal MOS Capacitor
32. MOSFET: Effects of Real Surfaces
33. MOSFET: Interface Charge
34. MOSFET: Threshold Voltage
35. MOS Capacitance Voltage Analysis
36. MOSFET: Time-Dependent Capacitance Measurements
37. Current-Voltage Characteristics of MOS Gate Oxides
38. The MOSFET
39. MOSFET: Output Characteristics
40. Conductance and Transconductance of the MOSFET
41. MOSFET: Transfer Characteristics
42. MOSFET: Mobility Models
43. MOSFET: Effective transverse field
44. Short Channel MOSFET I-V Characteristics
45. MOSFET: Control of Threshold Voltage
46. MOSFET: Threshold Adjustment by Ion Implantation
47. MOSFET: Substrate Bias Effects
48. MOSFET: Sub threshold Characteristics
49. Equivalent Circuit for the MOSFET
50. MOSFET Scaling and Short channel effect
51. MOSFET: Hot carrier effects
52. MOSFET: Drain-Induced Barrier Lowering
53. Short Channel Effect and Narrow Width Effect of MOSFET
54. Gate-Induced Drain Leakage in MOSFET
55. FUNDAMENTALS OF BJT OPERATION
56. BJT: Summary of hole and electron flow in a transistor
57. FUNDAMENTALS OF BJT OPERATION: PN junction
58. AMPLIFICATION WITH BJTS
59. EQUILIBRIUM CONDITIONS: The Contact Potential
60. Equilibrium Fermi Levels
61. Space Charge at a Junction
62. OPTICAL ABSORPTION
64. LUMINESCENCE
65. Photoluminescence
66. Carrier Lifetime And Photoconductivity: direct recombination
67. Indirect Recombination: Trapping
68. Steady State Carrier Generation: Quasi-Fermi Levels
69. Photoconductive Devices
70. DIFFUSION OF CARRIERS: Diffusion Processes
71. Diffusion and Drift of Carriers: Built-in Fields
72. The Einstein Relation
73. The Continuity Equation
74. Steady State Carrier Injection and Diffusion Length
75. The Haynes-Shockley Experiment
76. Gaussian distribution
77. Gradients in the Quasi-Fermi Levels
79. CRYSTAL LATTICES: Periodic Structures
80. Cubic Lattices Structures
81. Planes and Directions: Miller indices
82. The Diamond Lattice
83. Bonding Forces in Solids
84. Energy Bands
85. Linear combinations of the individual atomic orbitals (LCAO)
86. Metals, Semiconductors, and Insulators
87. Direct and Indirect Semiconductors
88. Variation of Energy Bands with Alloy Composition
89. Charge Carriers In Semiconductors: Electrons and Holes
Not all topics are listed, because of character limitations set by the Play Store.
Version 3 (1.2) updates:
Small bug fixed and this version will not support devices are below android version 3.0 (Level 11).
Version 2 (1.1) updates:
Many thanks to you all for the feedback. You can now zoom in and out on the schematic: simply tap the image to open a new page where you can pinch to zoom.
Large collection of electronic circuits: simple and interesting projects that you can build on your own.
The database in the app is updated daily, so you will always find something new.
The app will be updated soon; we are working hard to make it better.
Also visit us online at http://diy.slmelectronic.co.uk/
All joking aside, this time you will understand how electronic circuits work.
"I stumbled upon some serious gold" - GeekBeat.tv
"This app takes design to a whole new level of interactivity" - Design News
Build any circuit, tap play button, and watch dynamic voltage, current, and charge animations. This gives you insight into circuit operation like no equation does. While simulation is running, adjust
circuit parameters with analog knob, and the circuit responds to your actions in real time. You can even generate an arbitrary input signal with your finger!
That's interactivity and innovation you can't find in the best SPICE tools for PC like Multisim, LTspice, OrCad or PSpice (trademarks belong to their respective owners).
EveryCircuit is not just eye candy. Under the hood it packs a custom-built simulation engine optimized for interactive mobile use, serious numerical methods, and realistic device models. In short, Ohm's law, Kirchhoff's current and voltage laws, nonlinear semiconductor device equations, and all the good stuff is there.
Growing library of components gives you freedom to design any analog or digital circuit from a simple voltage divider to transistor-level masterpiece.
Schematic editor features automatic wire routing, and minimalistic user interface. No nonsense, less tapping, more productivity.
Simplicity, innovation, and power, combined with mobility, make EveryCircuit a must-have companion for high school science and physics students, electrical engineering college students, breadboard
and printed circuit board (PCB) enthusiasts, and ham radio hobbyists.
Join the EveryCircuit cloud community to store your circuits in the cloud and access them from any of your Android devices. Explore public community circuits and share your own designs. The app requires permission to access your account for authentication in the EveryCircuit community.
Thanks to Prof. N. Maghari for technical discussions, feedback, and help with designing circuit examples.
+ Thousands of public community circuits
+ Animations of voltage waveforms and current flows
+ Animations of capacitor charges
+ Analog control knob adjusts circuit parameters
+ Automatic wire routing
+ Oscilloscope
+ Seamless DC and transient simulation
+ Single play/pause button controls simulation
+ Saving and loading of circuit schematic
+ Mobile simulation engine built from ground-up
+ Shake the phone to kick-start oscillators
+ Intuitive user interface
+ No Ads
+ Sources, signal generators
+ Controlled sources (VCVS, VCCS, CCVS, CCCS)
+ Resistors, capacitors, inductors, transformers
+ Potentiometer, lamp
+ Switches, SPST, SPDT
+ Diodes, Zener diodes, light emitting diodes (LED)
+ MOS transistors (MOSFET)
+ Bipolar junction transistors (BJT)
+ Ideal operational amplifier (opamp)
+ Digital logic gates, AND, OR, NOT, NAND, NOR, XOR, XNOR
+ More components
If you like it, please rate, review, and buy!
This ultimate unique application is for all students across the world. It covers 147 topics of Analog Electronic Circuits in detail. These 147 topics are divided in 6 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on-the-go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of topics Covered in this application are:
1. Oscillators
2. Sinusoidal Oscillators
3. Oscillation through Positive feedback and Barkhausen criterion
4. Sinusoidal Oscillators operating frequency
5. Harmonic Oscillator
6. The RC Phase Shift Oscillator
7. Transistor Phase Shift Oscillator
8. Wien Bridge Oscillator:
9. Wien Bridge Oscillator-Operation
10. Tuned Oscillator
11. The Colpitts Oscillator
12. Crystal Oscillator
13. Analog Inverter and Scale Changer
14. Inverting summer
15. Non-Inverting summer
16. Differential Amplifier:
17. OPAMP-Integrator
18. Voltage to current converter
19. OPAMP-Differentiator
20. Grounded Load
21. Current to voltage converter:
22. First Order Low Pass Filter:
23. Low pass filter with adjustable corner frequency
24. Second Order Low-Pass Butterworth filter
25. First Order High Pass Butterworth filter
26. Precision Diodes
27. Active Clippers
28. Active Half Wave Rectifier
29. Axis Shifting of the Half Wave Rectifier
30. OPAMP-Comparators
31. Schmitt Trigger
32. Non-inverting Schmitt trigger
33. Relaxation Oscillator
34. Triangular Wave Generator
35. Bridge Amplifier
36. AC Voltage Follower
37. DC Voltage Follower
38. Logarithmic amplifiers
39. Logarithmic amplifiers
40. Antilog Amplifier
41. Differential Amplifiers
42. Differential Amplifiers-D.C. Analysis.
43. Dual Input, Balanced Output Difference Amplifier.
44. A.C. Analysis-Differential Amplifier
45. Differential Input and output Resistance
46. Inverting & Non - inverting Inputs
47. Common mode Gain
48. Dual Input, Unbalanced Output Differential Amplifier
49. Differential amplifier with swamping resistors
50. Biasing of Differential Amplifiers-Constant Current Bias
51. Current Mirror Circuit
52. Operational amplifier
53. Level Translator
54. Parameters of OPAMP
55. Parameters of OPAMP-Gain Bandwidth Product, Slew Rate and Input Offset Voltage and Current Drift: :
56. Ideal OPAMP and Equivalent Circuit of OPAMP
57. Ideal Voltage Transfer Curve
58. Emitter Coupled Differential Amplifier
59. Open loop-Differential Amplifier
60. Inverting Amplifier
61. Non-inverting amplifier
62. Closed Loop Amplifier
63. Voltage series feedback
64. Input Resistance with Voltage series Feedback
65. Output Resistance with Voltage series Feedback
66. Output Offset Voltage
67. Voltage Follower
68. Voltage shunt Feedback
69. Input Resistance with Voltage Shunt Feedback
70. Output Resistance with Voltage Shunt Feedback
71. Bandwidth with Feedback
72. Non-linear Distortion Reduction
73. Introduction-Power Amplifiers
74. Amplifier Efficiency
75. Types of Coupling
76. Ranges of Frequency.
77. Two Load Lines
78. SERIES-FED CLASS A AMPLIFIER
79. Series-fed class A amplifier-AC operation
80. Series-fed class A amplifier-OUTPUT POWER
81. Series-fed class A amplifier-Efficiency and Maximum Efficiency
82. Emitter-Follower Power Amplifier
83. Transformer-coupled class A amplifier
84. Signal swing and output ac power-Transformer Coupled Power Amplifier
85. Efficiency and Maximum theoretical efficiency-Transformer Coupled Power Amplifier
86. Class B amplifier-Input (DC) Power and Output (AC) Power
87. Class B amplifier operation
88. Class B amplifier-Efficiency
89. Maximum Power Considerations
90. Class B amplifier circuits
91. Transformer-Coupled Push-Pull Circuits
Not all topics are listed, because of character limitations set by the Play Store.
This program is aimed at Electrical, Electronics and Communication Engineering students and at people working in related industries, taking them beyond simple basic knowledge to more in-depth knowledge.
This is a tool for learning the basics of electronics, starting with a simplified understanding of electricity, how circuits work and going through to resistance. All written in an easy to follow
style for anyone to be able to enjoy reading.
It covers Charge, Current, Voltage and Resistance with diagrams and a small amount of history in places.
There are also interactive sections where you are able to play with the ideas of that which you are learning.
Use 'settings' to study in English or in Thai.
More languages will follow when there is enough demand - send us your feedback!
Please add a review so we can make changes, maybe even improvements.
8051 Micro controller Tutorials and basic electronics projects for students and hobbyists. Website:
With this app you can calculate;
- The Gain of a Basic Transistor Amplifier Circuit
- Voltage between Collector and Emitter (Vce)
-Voltage between Drain and Source(Vds)
- More Will Come.
For BJT & MOS Transistors.
To understand what's going on in this app you need to know a little Transistors knowledge.
Semih H. Ünaldı
T. Furkan Akgül
Mehmet Soydaş
Key Words: Vosh, Electronic, Electronics, Elektronik, Electro , elektro , circuit, devre ,transistör, transistor, gain,kazanç ,hesap, hesaplama, hesapla, calculate , engineering , engineer , must ,
have , electronics, and , telecommunications, engineering , elektronik , ve , haberleşme , mühendisliği , mühendis , communication, resistors , direnç , resistor , component, eleman, capacitor ,
voltage , input
The primary purpose of this dictionary is to give students, or anyone interested, a better understanding of Electronics and Electrical terms.
The Electronic Dictionary application is the result of our effort to bring forward Electronic terminology and engineering-specific definitions.
We have included the basic definition and explanation of every Electronic term you may want to understand.
Download it totally FREE today.
Simply put the term you looking for in the search box and you will be able to retrieve the information you looking for.
Solutions For Electronic Hardware Repair Centers
Electronic Testers, IC Testers, PCB Repair, Electronic Components, VI Curve, Immo Off, ECU Repair
The purpose of this app is to use augmented reality and an image-processing approach, together with the phone's camera, to approximate resistor values and help electronics engineers avoid the use of web browsers. This application was written by Luis David López Tello Villafuerte and Rodrigo Adrián Mendoza Marín at ITESM (Instituto Tecnológico y de Estudios Superiores de Monterrey).
Any constructive feedback is welcome.
User manual coming soon (see contact details).
Image for using augmented reality (Vuforia): http://goo.gl/EhxME
Small Android application for calculating op-amp-based electronic circuits:
- inverting and non-inverting voltage amplifiers
- voltage divider bridge
- resistor color code
*****WAGmob: An app platform for learning, teaching and training is offering 50% DISCOUNT for a limited time only.
Download today!!!*****
WAGmob brings you simpleNeasy, on-the-go learning app for "Electronics Bundle".
You have limited access to the content provided.
In this mode you can access 2 tutorials, 1 quiz, and 1 set of flashcards.
For full access to the content, please purchase this application.
You can purchase the "Electronics" and "Digital Electronics" applications from within this app for just $0.99 each.
The app provides:
1. Snack sized chapters for easy learning.
2. Bite sized flashcards to memorize key concepts.
3. Simple and easy quizzes for self-assessment.
Designed for both students and adults.
This app provides a quick summary of essential concepts in Electronics and Digital Electronics by following snack sized chapters:
(Each chapter has corresponding flashcards and quizzes)
"Electronics" includes:
Electricity and Magnetism,
Kirchhoff's Laws,
Series and Parallel Circuits,
NPN, PNP, FET and MOSFET,
Circuit Symbols,
Measuring Instruments,
Number System and Digital Electronics,
Digital Logic Gates I,
Digital Logic Gates II,
"Digital Electronics" includes:
Digital Number System I,
Digital Number System II,
Binary Codes I,
Binary Codes II,
Boolean Laws and Simplification of Boolean Function,
Digital Logic Gates I,
Digital Logic Gates II,
Combinational Logic Circuit I,
Combinational Logic Circuit II,
Sequential Logic Circuit I,
Sequential Logic Circuit II.
About WAGmob apps:
1) A companion app for on-the-go, bite-sized learning.
2) Over three million paying customers from 175+ countries.
Why WAGmob apps:
1) Beautifully simple, Amazingly easy, Massive selection of apps.
2) Effective, Engaging and Entertaining apps.
3) An incredible value for money. Lifetime of free updates!
*** WAGmob Vision : simpleNeasy apps for a lifetime of on-the-go learning.***
*** WAGmob Mission : A simpleNeasy WAGmob app in every hand.***
*** WAGmob Platform: A unique platform to create and publish your own apps & e-Books.***
Please visit us at www.wagmob.com or write to us at Team@wagmob.com.
We would love to improve our app and app platform.
This unique application is for all students across the world. It covers 145 topics of Analog electronics in detail. These 145 topics are divided into 7 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on-the-go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of the topics covered in this application are:
1. Ideal diode
2. Semiconductor materials- Resistivity.
3. Semiconductor materials Ge and Si
4. Energy levels
5. Extrinsic materials n-Type
6. Extrinsic materials p-Type
7. Electron versus Hole Flow
8. Majority and Minority Carriers
9. Semiconductor diode
10. P-N junction with no external bias
11. P-N junction with Reverse-Bias Condition
12. P-N junction with Forward-Bias Condition
13. Silicon semiconductor diode characteristics
14. Zener Region
15. General characteristics of a semiconductor diode
16. Temperature Effects Silicon Doide
17. DC or Static Resistance
18. AC or Dynamic Resistance
19. Average AC Resistance
20. Diode equivalent circuits-Piecewise-Linear Equivalent Circuit
21. Simplified equivalent circuit for the silicon semiconductor diode
22. Transition and diffusion capacitance
23. Reverse recovery time
24. Light-emitting diodes
25. Load-line analysis
26. Diode approximations
27. Series diode configurations with dc inputs
28. Half-wave rectifier
29. Effect of VT on half-wave rectified signal
30. Peak Inverse Voltage PIV (PRV).
31. Full-wave rectification-Bridge Network
32. Peak Inverse Voltage PIV-Bridge Network
33. Center-Tapped Transformer
34. Center-Tapped Transformer-PIV
35. Series-Clippers
36. Parallel-biased clipper
37. Clampers
38. Zener diodes
39. Zener diodes- Fixed Vi, Variable RL
40. Zener diodes- Fixed RL, Variable Vi
41. Halfwave-Voltage Doubler
42. Full-wave-Voltage Doubler
43. Voltage Tripler and Quadrupler
44. Introduction to Transistor Biasing
45. Proper zero signal collector current
46. Proper minimum base-emitter voltage
47. Proper minimum VCE at any instant
48. Operating point
49. Stabilization.
50. Stability Factor
51. Transistor Biasing-Base Resistor Method
52. Transistor Biasing- Emitter Bias Circuit
53. Emitter Bias Circuit-Collector current (IC).
54. Emitter Bias Circuit-Collector-emitter voltage (VCE).
55. Emitter Bias Circuit-Stability
56. Biasing with Collector Feedback Resistor
57. Voltage Divider Bias Method.
58. Voltage Divider Bias Method-Circuit analysis
59. Voltage Divider Bias Method- Stabilization and Stability factor
60. Stability Factor for Potential Divider Bias
61. Design of Transistor Biasing Circuits
62. Mid-Point Biasing
63. Silicon Versus Germanium
64. Instantaneous Current and Voltage Waveforms
65. Bias compensation methods- Diode compensation for VBE
66. Bias compensation methods- Diode compensation for ICO .
67. Bipolar transistor
68. Bipolar transistor-Junction Biasing
69. Relation between different currents in a transistor
70. BJT-Common Base Configuration
71. BJT-Common Base Configuration-Output Characteristics
72. BJT-Common Base Configuration- Input Characteristic
73. Equivalent circuit of a transistor: (Common Base)
74. Common Base Amplifier
75. Common Base Amplifier-AC equivalent circuit
76. BJT-Common Emitter Configuration
77. BJT- Common Emitter Configuration- Input Characteristic:
78. BJT-Common Emitter Configuration-Output Characteristic:
79. Common Emitter Configuration-Large Signal Current Gain
80. Small Signal CE Amplifiers
81. Analysis of CE amplifier
82. CE amplifier-Phase Inversion
83. Common Collector Amplifier-Voltage gain
84. Swamped Amplifier
85. Common Collector Amplifier
Not all topics are listed because of character limitations set by the Play Store.
Save time in your electronics labs with this transistor configuration tool. It lets you perform some of the calculations for *fixed* (template-based) transistor circuit configurations, such as:
base current (IB), voltage gain (AV), current gain (Ai), input impedance (Zi), output impedance (Zo), etc.
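As an illustration of the kind of fixed-template calculation described above, here is a sketch for one common template, a voltage-divider-biased common-emitter stage, using the usual first-order approximations (VBE ≈ 0.7 V, re = 26 mV / IE, bypassed emitter resistor). The component names and example values are assumptions for illustration, not the app's actual templates:

```python
# Approximate DC bias and small-signal figures for a voltage-divider-biased
# common-emitter BJT amplifier (first-order textbook approximations).
def ce_amplifier(vcc, r1, r2, rc, re_resistor, beta, vbe=0.7, vt=0.026):
    vb = vcc * r2 / (r1 + r2)                  # base voltage from the divider
    ie = (vb - vbe) / re_resistor              # emitter current
    ib = ie / (beta + 1)                       # base current IB
    re_dyn = vt / ie                           # dynamic emitter resistance
    zi_base = beta * re_dyn                    # impedance looking into the base
    zi = 1 / (1 / r1 + 1 / r2 + 1 / zi_base)   # input impedance Zi
    av = -rc / re_dyn                          # voltage gain AV (RE bypassed, unloaded)
    zo = rc                                    # output impedance Zo (ideal collector)
    return {"IB": ib, "AV": av, "Zi": zi, "Zo": zo}

# Example: VCC = 12 V, R1 = 10k, R2 = 2.2k, RC = 3.9k, RE = 1k, beta = 100
figures = ce_amplifier(12.0, 10e3, 2.2e3, 3.9e3, 1e3, 100)
```

A template tool of this kind would simply expose the resistor values and beta as inputs and report these derived figures.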
GCSE Design and Technology - Electronics
Our most interactive app yet!
More than just questions - although there are 100 of those!
Contains 22 individual pages of information helping you to focus on the main topics you need to revise.
!!!!!!! A free version of this app is also available on the same store.
This ultimate application is for all engineering students across the world. It covers 169 topics of Advance Semiconductor Devices in detail. These 169 topics are divided into 5 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on-the-go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of the topics covered in this application are:
1. The Haynes-Shockley Experiment
2. Semiconductor Materials
3. Crystal Lattice
4. Cubic Lattices
5. Planes and Directions
6. The Diamond Lattice
7. Bulk Crystal Growth
8. Growth of Single Crystal Ingots
9. Wafers
10. Epitaxial growth
11. Vapor-phase epitaxy
12. Molecular beam epitaxy
13. Charge Carriers in Semiconductors
14. Effective Mass
15. Intrinsic Material
16. Extrinsic Material
17. Electrons and Holes in Quantum Wells
18. The Fermi Level
19. Compensation and Space Charge Neutrality
20. Drift and Resistance
21. Optical absorption
22. Photoluminescence
23. Electroluminescence
24. Carrier Lifetime and Photoconductivity
25. Direct Recombination of Electrons and Holes
26. Indirect Recombination; Trapping
27. Steady State Carrier Generation; Quasi-Fermi Levels
28. Photoconductive Devices
29. Diffusion Processes
30. Diffusion and Drift of Carriers: Built-in Fields
31. Diffusion and Recombination; The Continuity Equation
32. Steady State Carrier Injection: Diffusion Length
33. Gradients in the Quasi-Fermi Levels
34. Temperature Dependence of Carrier Concentrations
35. Effects of Temperature and Doping on Mobility
36. High-Field Effects
37. The Hall Effect
38. Fabrication of p-n Junctions: Thermal oxidation
39. Diffusion of P-N junction
40. Rapid Thermal Processing
41. Ion Implantation
42. Chemical Vapor Deposition (CVD)
43. Photolithography
44. Etching
45. Metallization
46. Equilibrium Conditions
47. Equilibrium Fermi Levels
48. Space Charge at a Junction
49. Forward- and Reverse-Biased Junctions
50. Carrier Injection
51. Reverse Bias
52. Reverse-Bias Breakdown
53. Zener Breakdown
54. Avalanche Breakdown
55. Rectifiers
56. The Breakdown Diode
57. Transient and A-C Conditions
58. Reverse Recovery Transient
59. The Ideal Diode Model
60. Effects of Contact Potential on Carrier Injection
61. Switching Diodes
62. Capacitance of p-n junctions
63. Recombination and Generation in the Transition Region
64. Ohmic Losses
65. Graded Junctions
66. Metal semiconductor junctions: schottky barriers
67. Current Transport Processes
68. Thermionic-Emission Theory
69. Diffusion Theory
70. Thermionic-Emission-Diffusion Theory
71. Rectifying Contacts
72. Tunneling Current
73. Minority-Carrier Injection
74. MIS Tunnel Diode
75. Measurement of Barrier Height
76. Activation-Energy Measurement
77. Photoelectric Measurement
78. Ohmic Contacts
79. Typical Schottky Barriers
80. Heterojunctions
81. Tunnel Diodes
82. Construction of Tunnel Diodes
83. The Backward Diode
84. MIM tunnel diode
85. Structure of resonant-tunneling diode
86. I-V characteristics of resonant-tunneling diode
87. Photodiodes
88. The Varactor Diode
89. Current and Voltage in an Illuminated Junction
90. Solar Cell- Working Principle
91. Solar Cell- I-V Characterstics
92. Photodetectors
93. Gain, Bandwidth, and Signal-to-Noise Ratio of photodetector
94. Light Emitting Diodes
95. Light Emitting Materials
96. Fiber-Optic Communications
97. Semiconductor lasers
98. Population Inversion at a Junction
99. Emission Spectra for p-n Junction Lasers
Not all topics are listed because of character limitations set by the Play Store.
More from developer
This unique free application is for all students across the world. It covers 108 topics of Basic Electrical in detail. These 108 topics are divided into 5 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on-the-go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
A few of the topics covered in this application are:
1. Introduction of electrical engineering
2. Voltage and current
3. Electric Potential and Voltage
4. Conductors and Insulators
5. Conventional versus electron flow
6. Ohm's Law
7. Kirchhoff's Voltage Law (KVL)
8. Kirchhoff's Current Law (KCL)
9. Polarity of voltage drops
10. Branch current method
11. Mesh current method
12. Introduction to network theorems
13. Thevenin's Theorem
14. Norton's Theorem
15. Maximum Power Transfer Theorem
16. star-delta transformation
17. Source Transformation
18. voltage and current sources
19. loop and nodal methods of analysis
20. Unilateral and Bilateral elements
21. Active and passive elements
22. alternating current (AC)
23. AC Waveforms
24. The Average and Effective Value of an AC Waveform
25. RMS Value of an AC Waveform
26. Generation of Sinusoidal (AC) Voltage Waveform
27. Concept of Phasor
28. Phase Difference
29. The Cosine Waveform
30. Representation of Sinusoidal Signal by a Phasor
31. Phasor representation of Voltage and Current
32. AC inductor circuits
33. Series resistor-inductor circuits: Impedance
34. Inductor quirks
35. Review of Resistance, Reactance, and Impedance
36. Series R, L, and C
37. Parallel R, L, and C
38. Series-parallel R, L, and C
39. Susceptance and Admittance
40. Simple parallel (tank circuit) resonance
41. Simple series resonance
42. Power in AC Circuits
43. Power Factor
44. Power Factor Correction
45. Quality Factor and Bandwidth of a Resonant Circuit
46. Generation of Three-phase Balanced Voltages
47. Three-Phase, Four-Wire System
48. Wye and delta configurations
49. Distinction between line and phase voltages, and line and phase currents
50. Power in balanced three-phase circuits
51. Phase rotation
52. Three-phase Y and Delta configurations
53. Measurement of Power in Three phase circuit
54. Introduction of measuring instruments
55. Various forces/torques required in measuring instruments
56. General Theory Permanent Magnet Moving Coil (PMMC) Instruments
57. Working Principles of PMMC
58. A multi-range ammeters
59. Multi-range voltmeter
60. Basic principle operation of Moving-iron Instruments
61. Construction of Moving-iron Instruments
62. Shunts and Multipliers for MI instruments
63. Dynamometer type Wattmeter
64. Introduction to Power System
66. Magnetic Circuit
67. B-H Characteristics
68. Analysis of Series magnetic circuit
69. Analysis of series-parallel magnetic circuit
70. Different laws for calculating magnetic field-Biot-Savart law
71. Amperes circuital law
72. Reluctance & permeance
73. Introduction of Eddy Current & Hysteresis Losses
74. Eddy current
75. Derivation of an expression for eddy current loss in a thin plate
76. Hysteresis Loss
77. Hysteresis loss & loop area
78. Steinmetz's empirical formula for hysteresis loss
79. Inductor
80. Force between two opposite faces of the core across an air gap
81. ideal transformer
82. Practical transformer
83. equivalent circuit
84. Efficiency of transformer
85. Auto-Transformer
86. Introduction of D.C Machines
87. D.C machine Armature Winding
88. EMF Equation
89. Torque equation
90. Generator types & Characteristics
91. Characteristics of a separately excited generator
92. Characteristics of a shunt generator
93. Load characteristic of shunt generator
94. Single-phase Induction Motor
Not all topics are listed because of character limitations set by the Play Store.
This unique application is for all students across the world. It covers 280 topics of Electrical Instrumentation and Process Control in detail. These 280 topics are divided into 5 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on-the-go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of the topics covered in this application are:
1. Introduction to AC Electricity
2. Circuits with R, L, and C
3. RC Filters
4. AC Bridges
5. Magnetic fields
6. Analog meter
7. Electromechanical devices
8. Introduction to Basic Electrical Components
9. Resistance
10. Capacitance
11. Inductance
12. Introduction to Electronics
13. Discrete amplifiers
14. Operational amplifiers
15. Current amplifiers
16. Differential amplifiers
17. Buffer amplifiers
18. Nonlinear amplifiers
19. Instrument amplifier
20. Amplifier applications
21. Digital Circuits
22. Digital signals & Binary numbers
23. Logic circuits
24. Analog-to-digital conversion
25. Circuit Considerations
26. Introduction to Process control
27. Process Control
28. Definitions of the Elements in a Control Loop
29. Process Facility Considerations
30. Units and Standards
31. Instrument Parameters
32. Introduction to Level
33. Level Formulas
34. Direct level sensing
35. Indirect level sensing
36. Application Considerations
37. Introduction to Pressure
38. Basic Terms
39. Pressure Measurement
40. Pressure Formulas
41. Manometers
42. Diaphragms, capsules, and bellows
43. Bourdon tubes
44. Other pressure sensors
45. Vacuum instruments
46. Application Considerations
47. Introduction to Actuators and Control
48. Pressure Controllers
49. Flow Control Actuators
50. Power Control
51. Magnetic control devices
52. Motors
53. Application Considerations
54. Introduction to flow
55. Flow Formulas of Continuity equation
56. Bernoulli equation
57. Flow losses
58. Flow Measurement Instruments of Flow rate
59. Total flow and Mass flow
60. Dry particulate flow rate and Open channel flow
61. Application Considerations
62. Humidity
63. Humidity measuring devices
64. Density and Specific Gravity
65. Density measuring devices
66. Viscosity
67. Viscosity measuring instruments
68. pH Measurements, pH measuring devices and pH application considerations
69. Position and Motion Sensing
70. Position and motion measuring devices
71. Force, Torque, and Load Cells
72. Force and torque measuring devices
73. Smoke and Chemical Sensors
74. Sound and Light
75. Sound and light measuring devices
76. Sound and light application considerations
77. Introduction to Signal Conditioning
78. Conditioning
79. Linearization
80. Temperature correction
81. Pneumatic Signal Conditioning
82. Visual Display Conditioning
83. Electrical Signal Conditioning
84. Strain gauge sensors
85. Capacitive sensors
86. Magnetic sensors
87. Magnetic sensors
88. Thermocouple sensors
89. Introduction to Temperature and Heat
90. Temperature definition
91. Heat definitions
92. Thermal expansion definitions
93. Temperature and Heat Formulas
94. Thermal expansion
95. Temperature Measuring Devices
96. Thermometers
97. Pressure-spring thermometers
98. Resistance temperature devices
99. Thermistors
100. Thermocouples
101. Semiconductors
102. Application Considerations
103. Installation, Calibration & Protection
104. System Documentation
105. Pipe and Identification Diagrams
106. Functional Symbols
107. P and ID Drawings
108. Introduction to Instrument types and performance characteristics
109. Active and passive instruments
110. Null-type and deflection-type instruments
111. Analogue and digital instruments
112. Indicating instruments and instruments with a signal output
Not all topics are listed because of character limitations set by the Play Store.
!!!!!! An upgraded free app of Basic Electrical Engineering is now available, named Basic Electrical Engineering-1.
This unique application is for all students across the world. It covers 108 topics of Basic Electrical in detail. These 108 topics are divided into 5 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on-the-go from anywhere they like.
A few of the topics covered in this application are:
1. Introduction of electrical engineering
2. Voltage and current
3. Electric Potential and Voltage
4. Conductors and Insulators
5. Conventional versus electron flow
6. Ohm's Law
7. Kirchhoff's Voltage Law (KVL)
8. Kirchhoff's Current Law (KCL)
9. Polarity of voltage drops
10. Branch current method
11. Mesh current method
12. Introduction to network theorems
13. Thevenin's Theorem
14. Norton's Theorem
15. Maximum Power Transfer Theorem
16. star-delta transformation
17. Source Transformation
18. voltage and current sources
19. loop and nodal methods of analysis
20. Unilateral and Bilateral elements
21. Active and passive elements
22. alternating current (AC)
23. AC Waveforms
24. The Average and Effective Value of an AC Waveform
25. RMS Value of an AC Waveform
26. Generation of Sinusoidal (AC) Voltage Waveform
27. Concept of Phasor
28. Phase Difference
29. The Cosine Waveform
30. Representation of Sinusoidal Signal by a Phasor
31. Phasor representation of Voltage and Current
32. AC inductor circuits
33. Series resistor-inductor circuits: Impedance
34. Inductor quirks
35. Review of Resistance, Reactance, and Impedance
36. Series R, L, and C
37. Parallel R, L, and C
38. Series-parallel R, L, and C
39. Susceptance and Admittance
40. Simple parallel (tank circuit) resonance
41. Simple series resonance
42. Power in AC Circuits
43. Power Factor
44. Power Factor Correction
45. Quality Factor and Bandwidth of a Resonant Circuit
46. Generation of Three-phase Balanced Voltages
47. Three-Phase, Four-Wire System
48. Wye and delta configurations
49. Distinction between line and phase voltages, and line and phase currents
50. Power in balanced three-phase circuits
51. Phase rotation
52. Three-phase Y and Delta configurations
53. Measurement of Power in Three phase circuit
54. Introduction of measuring instruments
55. Various forces/torques required in measuring instruments
56. General Theory Permanent Magnet Moving Coil (PMMC) Instruments
57. Working Principles of PMMC
58. A multi-range ammeters
59. Multi-range voltmeter
60. Basic principle operation of Moving-iron Instruments
61. Construction of Moving-iron Instruments
62. Shunts and Multipliers for MI instruments
63. Dynamometer type Wattmeter
64. Introduction to Power System
66. Magnetic Circuit
67. B-H Characteristics
68. Analysis of Series magnetic circuit
69. Analysis of series-parallel magnetic circuit
70. Different laws for calculating magnetic field-Biot-Savart law
71. Amperes circuital law
72. Reluctance & permeance
73. Introduction of Eddy Current & Hysteresis Losses
74. Eddy current
75. Derivation of an expression for eddy current loss in a thin plate
76. Hysteresis Loss
77. Hysteresis loss & loop area
78. Steinmetz's empirical formula for hysteresis loss
79. Inductor
80. Force between two opposite faces of the core across an air gap
81. ideal transformer
82. Practical transformer
83. equivalent circuit
84. Efficiency of transformer
85. Auto-Transformer
86. Introduction of D.C Machines
87. D.C machine Armature Winding
88. EMF Equation
89. Torque equation
90. Generator types & Characteristics
91. Characteristics of a separately excited generator
92. Characteristics of a shunt generator
93. Load characteristic of shunt generator
94. Single-phase Induction Motor
Not all topics are listed because of character limitations set by the Play Store.
This unique application is for all students across the world. It covers 143 topics of Refrigeration and Air Conditioning in detail. These 143 topics are divided into 4 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on-the-go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of the topics covered in this application are:
5. DESIGN DOCUMENTS
7. PSYCHROMETRICS (MOIST AIR)
9. PSYCHROMETRICS (SPECIFIC HEAT)
10. PSYCHROMETRIC CHART
11. DETERMINING THE DEW-POINT TEMPERATURE OF A MOIST AIR SAMPLE
20. BASIC AIR-CONDITIONING CYCLE - SUMMER MODE
21. DESIGN SUPPLY VOLUME FLOW RATE
22. BASIC AIR-CONDITIONING CYCLE - WINTER MODE
24. REFRIGERANTS, COOLING MEDIUMS, AND ABSORBENTS
34. INDOOR TEMPERATURE, RELATIVE HUMIDITY, AND AIR VELOCITY
36. CONVECTIVE HEAT AND RADIATIVE HEAT
37. AIR HANDLING UNITS AND PACKAGED UNITS
38. PACKAGED UNITS
39. COILS USED IN REFRIGERATION
40. AIR FILTERS
43. ROTARY/ SCREW COMPRESSORS
45. AIR-COOLED CONDENSERS
49. EVAPORATIVE COOLING
52. AIR CONDITIONING SYSTEMS
54. GAS CYCLE REFRIGERATION
55. STEAM JET REFRIGERATION SYSTEM
59. ROTARY/ SCREW COMPRESSORS
65. HEAT AND WORK
68. FIRST LAW OF THERMODYNAMICS
69. SECOND LAW OF THERMODYNAMICS
70. HEAT ENGINES
71. EFFICIENCY OF HEAT ENGINES
72. ENTROPY
73. THIRD LAW OF THERMODYNAMICS
76. PROPERTIES OF PURE SUBSTANCE
77. T-S AND P-H DIAGRAMS FOR LIQUID-VAPOUR REGIME
81. THROTTLING (ISENTHALPIC) PROCESS
82. FLUID FLOW IN REFRIGERATION
83. CONSERVATION OF MOMENTUM
Not all topics are listed because of character limitations set by the Play Store.
This ultimate unique application is for all students across the world. It covers 97 topics of Basic Electronics in detail. These 97 topics are divided into 5 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on-the-go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of the topics covered in this application are:
1. Introduction to Electronic Engineering
2. Basic quantities
3. Passive and active devices
4. Semiconductor Devices
5. Current in Semiconductors
6. P-N Junction
7. Diodes
8. Power Diode
9. Switching
10. Special-Purpose Diodes
11. Tunnel diode and Optoelectronics
12. Diode Approximation
13. Applications of diode: Half wave Rectifier and Full Wave Rectifier
14. Bridge Rectifier
15. Clippers
16. Clamper Circuits
17. Positive Clamper
18. Voltage Doubler
19. Zener Diode
20. Zener Regulator
21. Design of Zener regulator circuit
22. Special-Purpose Diodes-1
23. Transistors
24. Bipolar Junction Transistors (BJT)
25. Beta and alpha gains
26. The Common Base Configuration
27. Relation between different currents in a transistor
28. Common-Emitter Amplifier
29. Common Base Amplifier
30. Biasing Techniques for CE Amplifiers
31. Biasing Techniques: Emitter Feedback Bias
32. Biasing Techniques: Collector Feedback Bias
33. Biasing Techniques: Voltage Divider Bias
34. Biasing Techniques: Emitter Bias
35. Small Signal CE Amplifiers
36. Analysis of CE amplifier
37. Common Collector Amplifier
38. Darlington Amplifier
39. Analysis of a transistor amplifier using h-parameters
40. Power Amplifiers
41. Power Amplifiers: Class A Amplifiers
42. Power Amplifiers: Class B amplifier
43. Power Amplifiers: Cross over distortion(Class B amplifier)
44. Power Amplifiers: Biasing a class B amplifier
45. Power Calculations for Class B Push-Pull Amplifier
46. Power Amplifiers: Class C amplifier
47. Field Effect Transistor (FET)
48. JFET Amplifiers
49. Transconductance Curves
50. Biasing the FET
51. Biasing the FET: Self Bias
52. Voltage Divider Bias
53. Current Source Bias
54. FET as amplifier
55. Design of JFET amplifier
56. JFET Applications
57. MOSFET Amplifiers
58. Common-Drain Amplifier
59. MOSFET Applications
60. Operational Amplifiers
61. Depletion-mode MOSFET
62. Enhancement-mode MOSFET
63. The ideal operational amplifier
64. Practical OP AMPS
65. Inverting Amplifier
66. The Non-inverting Amplifier
67. Voltage Follower (Unity Gain Buffer)
68. The Summing Amplifier
69. Differential Amplifier
70. The Op-amp Integrator Amplifier
71. The Op-amp Differentiator Amplifier
72. History of The Numeral Systems
73. Binary codes
74. conversion of bases
75. Conversion of decimal to binary ( base 10 to base 2)
76. Octal Number System
77. Hexadecimal Number System
78. Rules of Binary Addition and Subtraction
79. Sign-and-magnitude method
80. Sign-and-magnitude method: 2's complement representation
81. Boolean algebra
82. Basic Theorems & Properties of Boolean algebra
83. Logic gate
84. Symbols of logic Gates
85. Universal Gates
86. No associativity of NAND and NOR Gates: universal gates
87. Introduction of minimization using K-map
88. Two variable maps
89. Don't Care conditions
90. 5 variable Karnaugh Maps
91. Binary coded decimal codes(bcd)
92. Principle and types digital instruments
93. digital voltmeter
94. Cathode ray oscilloscope(CRO)
95. Cathode ray tube: CRO
96. Channel: CRO
97. Measurements with the cathode ray oscilloscope C
Not all topics are listed due to character limitations set by Google Play.
This ultimate unique application has more than 300,000 topics of engineering covered across five major engineering disciplines - Electronics & Communication, Electrical, Mechanical, Civil and Computer Science Engineering. With the unique integration with the online version of this application, users can access their saved notes from anywhere. The USP of this application is the ultra-portability of your engineering education.
All topics are neatly arranged under subjects and units. Users can also use the search option to find any topic of relevance within 2 clicks!
Each topic is not more than 600 words and is complete with equations, diagrams and functional graphs.
The content is more than enough to study and clear any engineering examination held by most Indian Engineering Colleges & Universities.
Go ahead! Download it and make your studies simpler and easier!
This ultimate unique application is for all students of Automobile Engineering across the world. It covers 188 topics of Automobile Engineering in detail. These 188 topics are divided into 5 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on-the-go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of the topics covered in this application are:
2. The expansion valve system
3. THE FIXED ORIFICE VALVE SYSTEM (CYCLING CLUTCH ORIFICE TUBE)
4. THE COMPRESSOR
5. THE CONDENSER
6. THE CONDENSER
9. THE EVAPORATOR
10. ANTI-FROSTING DEVICES
11. BASIC CONTROL SWITCHES
12. THE BASIC THEORY OF COOLING
14. ALTERNATIVES CYCLES
17. PRESSURE GAUGE READINGS
18. CYCLE TIME TESTING
19. A/C SYSTEM LEAK TESTING
20. SIGHT GLASS
21. GLOBAL WARMING
22. THE OZONE LAYER
23. INTRODUCTION TO TRANSFER OF HEAT AND MASS TO THE BASE METAL IN GAS-METAL ARC WELDING
26. TYPES OF AUTOMOBILES
27. LAYOUT OF AN AUTOMOBILE CHASSIS
28. MAJOR COMPONENTS OF AN AUTOMOBILE
31. USE OF THE ENGINES
36. ADVANTAGES OF A MULTI-CYLINDER ENGINE FOR THE SAME POWER
37. ENGINE CONSTRUCTION
38. CYLINDER BLOCKS
39. CYLINDER LINER
40. CRANK CASE
41. CYLINDER HEAD
42. GASKETS
43. PISTON
44. PISTON RINGS
45. PISTON PIN
46. CONNECTING ROD
47. CRANKSHAFT
48. VALVES
49. PORT-TIMING DIAGRAM
50. FLYWHEEL
51. MANIFOLDS
52. ROLLING RESISTANCE
53. AIR RESISTANCE.
54. GRADIENT RESISTANCE
55. TRACTIVE EFFORT
56. GEAR BOX
57. TYPES OF THE GEAR BOX
58. MERITS AND DEMERITS OF GEAR BOX
59. GEAR SHIFTING MECHANISM
60. Transmission in Automobile
62. FUNCTION OF CLUTCH
63. MAIN PARTS OF CLUTCH
64. TYPES OF THE CLUTCH
65. UNIVERSAL JOINT
67. FUNCTION OF STEERING SYSTEM
68. FRONT AXLE
69. CASTER ANGLE
70. CASTER ANGLE
71. CAMBER
72. TOE-IN
73. TOE-OUT
74. ACKERMAN MECHANISM
75. FUNCTIONS OF A BRAKE
76. CLASSIFICATION OF BRAKES
77. DISC BRAKES
78. FLOATING CALIPER BRAKE
79. POWER BRAKES
80. AIR BRAKE SYSTEM
81. HYDRAULIC BRAKES
82. TYPES OF THE STARTING MOTORS
83. GENERATOR
84. ALTERNATOR
85. LIGHTING SYSTEM
86. IGNITION SYSTEM
87. IGNITION TIMING
88. IGNITION ADVANCE
89. SPARK PLUGS
91. AUTOMOBILE BATTERY
93. HORNS
94. CLUTCH OPERATION
95. Types of clutch
96. Gearbox operation
97. Gear change mechanisms
98. Gears and components
102. DIRECT SHIFT GEARBOX
104. WHEEL BEARINGS
105. FOUR-WHEEL DRIVE
106. TIRE DESIGN
107. TIRE PLY AND BELT DESIGN
108. Tire Tread Design
109. TIRE RATINGS AND SIDEWALL INFORMATION
110. SPECIALTY TIRES
111. REPLACEMENT TIRES
112. Tire Valves
113. COMPACT SPARE TIRES
114. Run-Flat Tires
115. Tire Pressure Monitoring Systems
116. TIRE CONTACT AREA
117. Wheel Rims
118. Static Wheel Balance Theory
119. Dynamic Wheel Balance Theory
120. On-Car Wheel Balancing
This ultimate unique application is for all students across the world. It covers 113 topics of Discrete Mathematics in detail. These 113 topics are divided into 5 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on-the-go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of the topics covered in this application are:
1. Set Theory
2. Decimal number System
3. Binary Number System
4. Octal Number System
5. Hexadecimal Number System
6. Binary Arithmetic
7. Sets and Membership
8. Subsets
9. Introduction to Logical Operations
10. Logical Operations and Logical Connectivity
11. Logical Equivalence
12. Logical Implications
13. Normal Forms and Truth Table
14. Normal Form of a well formed formula
15. Principle Disjunctive Normal Form
16. Principal Conjunctive Normal form
17. Predicates and Quantifiers
18. Theory of inference for the Predicate Calculus
19. Mathematical Induction
20. Diagrammatic Representation of Sets
21. The Algebra of Sets
22. The Computer Representation of Sets
23. Relations
24. Representation of Relations
25. Introduction to Partial Order Relations
26. Diagrammatic Representation of Partial Order Relations and Posets
27. Maximal, Minimal Elements and Lattices
28. Recurrence Relation
29. Formulation of Recurrence Relation
30. Method of Solving Recurrence Relation
31. Method for solving linear homogeneous recurrence relations with constant coefficients:
32. Functions
33. Introduction to Graphs
34. Directed Graph
35. Graph Models
36. Graph Terminology
37. Some Special Simple Graphs
38. Bipartite Graphs
39. Bipartite Graphs and Matchings
40. Applications of Graphs
41. Original and Sub Graphs
42. Representing Graphs
43. Adjacency Matrices
44. Incidence Matrices
45. Isomorphism of Graphs
46. Paths in the Graphs
47. Connectedness in Undirected Graphs
48. Connectivity of Graphs
49. Paths and Isomorphism
50. Euler Paths and Circuits
51. Hamilton Paths and Circuits
52. Shortest-Path Problems
53. A Shortest-Path Algorithm (Dijkstra Algorithm.)
54. The Traveling Salesperson Problem
55. Introduction to Planar Graphs
56. Graph Coloring
57. Applications of Graph Colorings
58. Introduction to Trees
59. Rooted Trees
60. Trees as Models
61. Properties of Trees
62. Applications of Trees
63. Decision Trees
64. Prefix Codes
65. Huffman Coding
66. Game Trees
67. Tree Traversal
68. Boolean Algebra
69. Identities of Boolean Algebra
70. Duality
71. The Abstract Definition of a Boolean Algebra
72. Representing Boolean Functions
73. Logic Gates
74. Minimization of Circuits
75. Karnaugh Maps
76. Don't Care Conditions
77. The Quine-McCluskey Method
78. Introduction to Lattices
79. The Transitive Closure of a Relation
80. Cartesian Product of Lattices
81. Properties of Lattices
82. Lattices as Algebraic System
83. Partial Order Relations on a Lattice
84. Least Upper Bounds and Greatest Lower Bounds in a Lattice
85. Sublattices
86. Lattice Isomorphism
87. Bounded, Complemented and Distributive Lattices
88. Propositional Logic
89. Conditional Statements
90. Truth Tables of Compound Propositions
91. Precedence of Logical Operators and Logic and Bit Operations
92. Applications of Propositional Logic
93. Propositional Satisfiability
94. Quantifiers
95. Nested Quantifiers
96. Translating from Nested Quantifiers into English
97. Inference
98. Rules of Inference for Propositional Logic
99. Using Rules of Inference to Build Arguments
100. Resolution and Fallacies
101. Rules of Inference for Quantified Statements
102. Introduction to Algebra
103. Rings
104. Properties of rings
105. Subrings
106. Homomorphisms and quotient rings
107. Groups
108. Properties of groups
109. Subgroups
All topics not listed due to character limitations set by Google Play.
This unique application is for all students across the world. It covers 143 topics of Material Science in detail. These 143 topics are divided into 3 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on the go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of topics Covered in this application are:
3. Classification of engineering materials
4. Organic and inorganic materials
5. Semiconductors
6. Biomaterials
8. Advanced materials
9. Smart materials (materials of the future)
10. Nanostructured materials and nanotechnology
11. Quantum dots
12. Spintronics
13. Level of material structure examination and observation
14. Material structure
15. Engineering metallurgy
16. Selection of the materials
17. Atomic concepts in physics and chemistry
18. Atomic Structure: FUNDAMENTAL CONCEPTS
19. Atomic Structure: FUNDAMENTAL CONCEPTS
20. ELECTRONS IN ATOMS
21. THE PERIODIC TABLE
22. BONDING FORCES AND ENERGIES
23. Ionic Bonding
24. Covalent Bonding
25. Metallic Bonding
26. SECONDARY BONDING OR VAN DER WAALS BONDING
27. Crystal Structures: FUNDAMENTAL CONCEPTS
28. The Face-Centered Cubic Crystal Structure
29. The Body-Centered Cubic Crystal Structure
30. The Hexagonal Close-Packed Crystal Structure
32. CRYSTAL SYSTEMS
33. POINT COORDINATES
35. Hexagonal Crystals
36. Atomic Arrangements
39. IMPURITIES IN SOLIDS
40. DISLOCATIONS - LINEAR DEFECTS
41. INTERFACIAL DEFECTS
42. Microscopic Examination
43. Optical Microscopy
44. Electron Microscopy
45. GRAIN SIZE DETERMINATION
46. Introduction to mechanical properties
47. CONCEPTS OF STRESS AND STRAIN
48. Compression Tests
49. STRESS-STRAIN BEHAVIOR
50. ANELASTICITY
52. Plastic Deformation
53. Yielding and Yield Strength
54. Tensile Strength
55. Ductility
56. Resilience
57. Toughness
58. TRUE STRESS AND STRAIN
60. HARDNESS
61. Rockwell Hardness Tests
62. Brinell Hardness Tests
63. Knoop and Vickers Microindentation Hardness Tests
64. Hardness Conversion
65. Correlation Between Hardness and Tensile Strength
67. Computation of Average and Standard Deviation Values
68. DESIGN/SAFETY FACTORS
69. Phase diagrams-introduction
70. SOLUBILITY LIMIT
71. PHASES
72. PHASE EQUILIBRIA
73. ONE-COMPONENT (OR UNARY) PHASE DIAGRAMS
74. Binary Phase Diagrams
76. Determination of Phase Compositions
77. Determination of Phase Amounts
78. Equilibrium Cooling
79. Nonequilibrium Cooling
81. BINARY EUTECTIC SYSTEMS
86. THE GIBBS PHASE RULE
87. THE IRON-IRON CARBIDE (Fe-Fe3C) PHASE DIAGRAM
89. Hypoeutectoid Alloys
90. Hypereutectoid Alloys
91. THE INFLUENCE OF OTHER ALLOYING ELEMENTS
92. FERROUS ALLOYS
93. Low-Carbon Steels
94. Medium-Carbon Steels
95. High-Carbon Steels
96. Stainless Steels
97. Cast Irons
98. Gray Iron
99. Ductile (or Nodular) Iron
All topics are not listed because of character limitations set by the Play Store.
This unique application is for all students across the world. It covers 86 topics of Engineering Geology in detail. These 86 topics are divided into 6 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on the go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of topics Covered in this application are:
4. Geological Materials
5. Description of Geological materials
6. Porosity and Permeability
7. Deformation
10. BRANCHES OF GEOLOGY
11. INTRODUCTION TO SOILS
12. INTRODUCTION TO ROCKS
13. GEOLOGICAL MASSES
14. Standard Weathering Description Systems
15. Ground Mass Description
16. Rock Mass Classification
17. SCHISTS AND GNEISSES
18. GEOLOGICAL MAPS
19. Understanding Geological Maps
20. Interpretation of Geological Maps
21. DRILLING TOOLS
22. MAPPING AT SMALL SCALE
23. MAPPING AT LARGE SCALE
24. Engineering Geological Maps
25. GIS in Engineering Geology
27. DRILLING PROCESS
28. DRILLING AND SAMPLING IN SOIL
29. Boring and Sampling over Water
30. Field Tests and Measurements
31. Strength and Deformation Tests
32. Measurements in Boreholes and Excavations
33. Engineering Geophysics
34. Properties of Minerals
35. ROCK-FORMING MINERALS
36. FELDSPAR - FAMILY
37. Quartz family
38. The Mineral AUGITE
39. Rhyolite Family
40. Fundamentals of process of formation of ore minerals
41. COAL AND PETROLEUM
42. Coal- Its origin and occurrence in India
43. Phyllite
44. GARNET AND MISCELLANEOUS ROCKS
45. Classification of Rocks
46. Difference Between Igneous, Sedimentary and Metamorphic Rocks
47. MAGMA
48. Gabbro(rock)
49. PEGMATITE
50. Igneous rock types
51. LIMESTONE
52. Metamorphic Rocks
53. GRANITE
54. Syenite
55. Larvikite, Ijolite and Carbonatite
56. Phonolite, Ultramafic Rocks and Pyroxenite
57. Conglomerate and breccia
58. METAMORPHIC ROCKS
59. SLATE
60. Beds in 3D space
61. Strike and Dip
62. Inclined Bedding on Maps
63. FOLDS
64. FAULTS
65. JOINTS
66. Seismic Surveys
67. Electrical Resistivity Surveys
68. Electromagnetic Conductivity Surveys
69. Magnetic Surveys
70. REMOTE SENSING TECHNIQUES
71. Aerial Photographs
72. Satellite Images
73. Design and Construction of Road Tunnels
74. Factors that Influence Tunnel Seismic Performance
75. PREVENTIONS OF DAM CONSTRUCTION
76. Sea erosion and coastal Protection
77. Internal Structure of the Earth
78. Building stones occurrences and characteristics
79. Origin of Sedimentary Rock
80. Earthquakes
81. causes of Earthquakes
82. Classification of Earthquake
83. Classification of Seismic Waves
84. Fault Types
85. Seismic Zones of India
86. Construction of Earthquake Resistant Buildings and Infrastructure
This ultimate application is useful for all students of Artificial Intelligence across the world. It covers 142 topics of Artificial Intelligence in detail. These 142 topics are divided into 5 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on the go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of topics Covered in this application are:
1. Turing test
2. Introduction to Artificial Intelligence
3. History of AI
4. The AI Cycle
5. Knowledge Representation
6. Typical AI problems
7. Limits of AI
8. Introduction to Agents
9. Agent Performance
10. Intelligent Agents
11. Structure Of Intelligent Agents
12. Types of agent program
13. Goal based Agents
14. Utility-based agents
15. Agents and environments
16. Agent architectures
17. Search for Solutions
18. State Spaces
19. Graph Searching
20. A Generic Searching Algorithm
21. Uninformed Search Strategies
22. Breadth-First Search
23. Heuristic Search
24. A∗ Search
25. Search Tree
26. Depth first Search
27. Properties of Depth First Search
28. Bi-directional search
29. Search Graphs
30. Informed Search Strategies
31. Methods of Informed Search
32. Greedy Search
33. Proof of Admissibility of A*
34. Properties of Heuristics
35. Iterative-Deepening A*
36. Other Memory limited heuristic search
37. N-Queens Example
38. Adversarial Search
39. Genetic Algorithms
40. Games
41. Optimal decisions in Games
42. minimax algorithm
43. Alpha Beta Pruning
44. Backtracking
45. Consistency Driven Techniques
46. Path Consistency (K-Consistency)
47. Look Ahead
48. Propositional Logic
49. Syntax of Propositional Calculus
50. Knowledge Representation and Reasoning
51. Propositional Logic Inference
52. Propositional Definite Clauses
53. Knowledge-Level Debugging
54. Rules of Inference
55. Soundness and Completeness
56. First Order Logic
57. Unification
58. Semantics
59. Herbrand Universe
60. Soundness, Completeness, Consistency, Satisfiability
61. Resolution
62. Herbrand Revisited
63. Proof as Search
64. Some Proof Strategies
65. Non-Monotonic Reasoning
66. Truth Maintenance Systems
67. Rule Based Systems
68. Pure Prolog
69. Forward chaining
70. backward Chaining
71. Choice between forward and backward chaining
72. AND/OR Trees
73. Hidden Markov Model
74. Bayesian networks
75. Learning Issues
76. Supervised Learning
77. Decision Trees
78. Knowledge Representation Formalisms
79. Semantic Networks
80. Inference in a Semantic Net
81. Extending Semantic Nets
82. Frames
83. Slots as Objects
84. Interpreting frames
85. Introduction to Planning
86. Problem Solving vs. Planning
87. Logic Based Planning
88. Planning Systems
89. Planning as Search
90. Situation-Space Planning Algorithms
91. Partial-Order Planning
92. Plan-Space Planning Algorithms
93. Interleaving vs. Non-Interleaving of Sub-Plan Steps
94. Simple Sock/Shoe Example
95. Probabilistic Reasoning
96. Review of Probability Theory
97. Semantics of Bayesian Networks
98. Introduction to Learning
99. Taxonomy of Learning Systems
100. Mathematical formulation of the inductive learning problem
101. Concept Learning
102. Concept Learning as Search
103. Algorithm to Find a Maximally-Specific Hypothesis
104. Candidate Elimination Algorithm
105. The Candidate-Elimination Algorithm
106. Decision Tree Construction
107. Splitting Functions
108. Decision Tree Pruning
109. Neural Networks
110. Artificial Neural Networks
111. Perceptron
112. Perceptron Learning
113. Multi-Layer Perceptrons
114. Back-Propagation Algorithm
115. Statistical learning
All topics not listed here because of character limit set by Play Store
This unique application is for all students across the world. It covers 217 topics of Advanced Welding Technology in detail. These 217 topics are divided into 5 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on the go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of topics Covered in this application are:
3. WELDING WITH PRESSURE
4. FUSION WELDING
10. INTRODUCTION TO HEAT FLOW IN FUSION WELDING
12. PARAMETRIC EFFECTS OF HEAT FLOW IN FUSION WELDING
15. GAS TUNGSTEN ARC WELDING
17. GAS METAL ARC WELDING
18. Submerged Arc Welding
19. INTRODUCTION TO TRANSFER OF HEAT AND MASS TO THE BASE METAL IN GAS-METAL ARC WELDING
20. HEAT TRANSFER IN GAS-METAL ARC WELDING
21. PROCEDURE DEVELOPMENT TRANSFER OF HEAT AND MASS TO THE BASE METAL IN GAS-METAL ARC WELDING
22. INTRODUCTION TO ARC PHYSICS OF GAS-TUNGSTEN ARC WELDING
23. ELECTRODE REGIONS AND ARC COLUMN IN GTAW
24. ARC WELDING POWER SOURCES
25. POWER SOURCE SELECTION
26. PULSED POWER SUPPLIES
27. Resistance Welding Power Sources
28. ELECTRON-BEAM WELDING POWER SOURCES
33. EFFECT OF WELDING RATE ON WELD POOL SHAPE AND MICROSTRUCTURE
34. BRAZING
35. SOLDERING
36. PHYSICAL PRINCIPLES OF BRAZING
37. ELEMENTS OF THE BRAZING PROCESS
38. HEATING METHODS FOR BRAZING
40. FUNDAMENTALS OF SOLDERING
41. GUIDELINES FOR FLUX SELECTION
42. TYPES OF FLUXES
43. JOINT DESIGN
45. SOLDER APPLICATION
47. SOLDERING EQUIPMENT
49. SHIELDING GAS SELECTION
52. DIFFUSION BONDING PROCESS
54. OUTPUT LEVEL, SEQUENCE AND FUNCTION CONTROL
55. MMAW CONSUMABLES
57. FILLER WIRES FOR GMAW AND FCAW
59. DIRECT DRIVE WELDING
60. INERTIA-DRIVE WELDING
61. JOINING OF SIMILAR METALS
62. JOINING OF DISSIMILAR METALS
65. MECHANISM OF DIFFUSION BONDING
66. BONDING PRACTICE
67. FLYER PLATE ACCELERATION
68. IMPACT ENERGY IN EXPLOSION WELDING
70. JET FORMATION IN EXPLOSION WELDING
72. BOND MORPHOLOGY AND PROPERTIES
74. TENSILE LOADING OF SOFT-INTERLAYER WELDS
75. THE SMAW PROCESS
77. ELECTRODES IN SMAW
78. WELD SCHEDULES AND PROCEDURES
79. VARIATIONS OF THE SMAW PROCESS
80. SPECIAL APPLICATIONS OF THE SMAW PROCESS
81. SAFETY CONSIDERATIONS IN SMAW
82. INTRODUCTION TO GAS-METAL ARC WELDING
83. PROCESS FUNDAMENTALS IN GMAW
All topics are not listed because of character limitations set by the Play Store.
This unique application is for all students across the world. It covers 157 topics of Strength of Materials in detail. These 157 topics are divided into 5 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on the go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of topics Covered in this application are:
1. TENSILE STRESS
3. SHEAR STRESS
6. SHEAR STRAIN
8. TENSILE STRAIN
9. STRESS STRAIN DIAGRAM
10. THERMAL STRESS
11. POISSON RATIO
12. THREE MODULUS
13. Temperature Stress In Composite Bar
14. STRAIN ENERGY
15. MODULUS OF RESILIENCE AND TOUGHNESS
16. STRAIN ENERGY IN GRADUAL AND SUDDEN LOAD
17. STRAIN ENERGY IN IMPACT LOAD
18. HOOKE'S LAW
19. STRESS, STRAIN AND CHANGE IN LENGTH
20. CHANGE IN LENGTH WHEN BOTH ENDS ARE FREE
21. CHANGE IN LENGTH WHEN BOTH ENDS ARE FIXED
22. Composite bar in tension or compression
23. Principal Stress And Principal Plane
24. Maximum Shear Stress
25. Theories Of Elastic Failure
26. Maximum Principal Stress Theory
27. Maximum Shear Stress Theory:
28. Maximum Principal Strain Theory
29. Total Strain Energy Per Unit Volume Theory
30. Maximum Shear Strain Energy Per Unit Volume Theory
31. Mohr’s Rupture Theory For Brittle Materials
32. Mohr’s Circle
33. Introduction to Bending Moment And Shearing Force
34. Shearing Force, And Bending Moment In A Straight Beam
35. Sign Conventions For Bending Moments And Shearing Forces
36. Bending Of Beams
37. Procedure For Drawing Shear Force And Bending Moment Diagram
38. SFD & BMD Of Cantilever Carrying Load At Its One End
39. SFD And BMD Of Simply Supported Beam Subjected To A Central Load
40. SFD And BMD Of A Cantilever Beam Subjected To U.D.L
41. Simply Supported Beam Subjected To U.D.L
42. Simply Supported Beam Carrying UDL & End Couples
43. Points Of Inflection
44. Theory Of Bending : Assumption And General Theory
45. Elastic Flexure Formula
46. Beams Of Composite Cross Section
47. Flexural Buckling Of A Pin-Ended Strut
48. Rankine-Gordon Formula
49. Comparison Of The Rankine-Gordon And Euler Formulae
50. Effective Lengths Of Struts
51. Struts And Columns With One End Fixed And The Other Free
52. Thin Cylinder
53. Members Subjected to Axisymmetric Load
54. ANALYSIS: Pressurized thin walled cylinder
55. Longitudinal Stress: Pressurized thin walled cylinder
56. Change in Dimensions: Pressurized thin walled cylinder
57. Volumetric Strain or Change in the Internal Volume
58. Cylindrical Vessel with Hemispherical Ends
59. Thin rotating ring or cylinder
60. Stresses in thick cylinders
61. Stresses in thick cylinders
62. Representation of radial and circumferential strain
63. A thick cylinder with both external and internal pressure
64. The stress distribution within the cylinder wall
65. Methods of increasing the elastic strength of a thick cylinder by pre-stressing
66. Composite cylinders
67. Combined stress distribution in a composite cylinder
68. Multilayered or Laminated cylinder
69. Autofrettage
70. Derivation of the hoop and radial stress equations for a thick-walled circular cylinder
71. Lame line
72. Plastic deformation of thick tubes
73. Lame line for elastic zone
74. Portion of the cylinder is plastic
75. Stress in thin cylinders
76. Horizontal diametrical plane
77. Strains in thin cylinders
78. Change in Volume of Cylinder
79. Compound Cylinders
80. Press Fits
81. Analysis of Press Fits
82. Castigliano's first theorem
83. Castigliano’s Second Theorem
84. Torsion of a thin circular tube
85. Torsion of solid circular shafts
86. Torsion of a hollow circular shaft
All topics are not listed because of character limitations set by the Play Store.
This unique application is for all students across the world. It covers 232 topics of Power Plant Engineering in detail. These 232 topics are divided into 5 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on the go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of topics Covered in this application are:
1. FUEL SYSTEM OF THE DIESEL POWER PLANT
3. POWER
4. ENERGY
5. SOURCES OF ENERGY
7. CARNOT CYCLE
8. RANKINE CYCLE
9. EFFICIENCY OF THE RANKINE CYCLE
10. REHEAT CYCLE
11. REGENERATIVE CYCLE
12. BINARY VAPOUR CYCLE
13. EFFICIENCY OF BINARY VAPOUR POWER CYCLE
14. REHEAT REGENERATIVE CYCLE
15. INDIAN ENERGY SCENARIO
16. COAL ANALYSIS
17. STEAM POWER PLANT
18. NUCLEAR POWER PLANT
19. DIESEL POWER PLANT
20. FUELS AND COMBUSTION
21. STEAM GENERATORS
22. STEAM PRIME MOVERS
23. STEAM CONDENSERS
24. SURFACE CONDENSERS
25. JET CONDENSERS
26. TYPES OF JET CONDENSERS
27. HYDRAULIC TURBINES
28. IMPULSE AND REACTION TURBINES
29. SCIENCE VERSUS TECHNOLOGY
30. SCIENTIFIC RESEARCH
32. FACTS VERSUS VALUES
33. ATOMIC ENERGY
37. INTRODUCTION OF STEAM POWER PLANT
38. STEAM POWER STATION DESIGN
39. COAL HANDLING
40. DEWATERING OF COAL
42. TYPES OF FUEL BURNING SURFACES
43. METHOD OF FUEL FIRING
44. AUTOMATIC BOILER CONTROL
45. PULVERIZED COAL
46. BALL MILL
47. BALL AND RACE MILL
48. SHAFT MILL
49. PULVERISED COAL FIRING
50. CYCLONE FIRED BOILERS
51. WATER WALLS
52. ASH DISPOSAL
53. ASH HANDLING EQUIPMENT
54. SMOKE AND DUST REMOVAL
55. TYPES OF THE DUST COLLECTOR
56. FLY ASH SCRUBBER
57. FLUIDISED BED COMBUSTION
58. TYPES OF FBC SYSTEMS
60. CLASSIFICATION OF THE BOILERS
61. COCHRAN BOILER
62. LANCASHIRE BOILERS
63. LOCOMOTIVE BOILER
64. BABCOCK WILCOX BOILER
65. INDUSTRIAL BOILERS
67. REQUIREMENTS OF A GOOD BOILER
68. LA MONT BOILER
69. BENSON BOILER
70. LOEFFLER BOILER
71. SCHMIDT-HARTMANN BOILER
72. VELOX-BOILER
74. THE SIMPLE IMPULSE TURBINE
75. COMPOUNDING OF IMPULSE TURBINE
79. IMPULSE-REACTION TURBINE
80. ADVANTAGES OF STEAM TURBINE OVER STEAM ENGINE
81. STEAM TURBINE GOVERNING
82. STEAM TURBINE PERFORMANCE
83. STEAM TURBINE TESTING
85. STEAM TURBINE GENERATORS
87. INTRODUCTION OF NUCLEAR POWER PLANT
88. STRUCTURE OF ATOM
89. LAYOUT OF NUCLEAR POWER PLANT
90. NUCLEAR WASTE DISPOSAL
91. SITE SELECTION OF NUCLEAR POWER PLANT
92. PERFORMANCE OF NUCLEAR POWER PLANTS
93. NUCLEAR STABILITY
94. NUCLEAR BINDING ENERGY
95. NUCLEAR FISSION
96. NUCLEAR REACTORS
97. NUCLEAR CHAIN REACTION
99. NEUTRON LIFE CYCLE
All topics are not listed because of character limitations set by the Play Store.
This unique application is for all students across the world. It covers 213 topics of Soil Mechanics in detail. These 213 topics are divided into 5 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on the go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of topics Covered in this application are:
3. BASIC GEOLOGY
4. Composition of the Earth’s Crust
5. COMPOSITION OF SOILS
6. Surface Forces and Adsorbed Water
8. Particle Size of Fine-Grained Soils
11. PHASES OF A SOILS INVESTIGATION
12. SOILS EXPLORATION PROGRAM
13. Soil Identification in the Field
14. Soil Sampling
15. Groundwater Conditions
16. Types of In Situ or Field Tests
17. PHASE RELATIONSHIPS
19. DETERMINATION OF THE LIQUID, PLASTIC, AND SHRINKAGE LIMITS
21. Importance of soil compaction
23. FIELD COMPACTION
24. HEAD AND PRESSURE VARIATION IN A FLUID AT REST
25. DARCY’S LAW
26. FLOW PARALLEL TO SOIL LAYERS
28. Falling-Head Test
29. Pumping Test to Determine the Hydraulic Conductivity
31. STRESSES AND STRAINS
32. IDEALIZED STRESS - STRAIN RESPONSE AND YIELDING
34. Axisymmetric Condition
35. ANISOTROPIC, ELASTIC STATES
36. Mohr’s Circle for Stress States
37. Mohr’s Circle for Strain States
38. The Principle of Effective Stress
39. Effective Stresses Due to Geostatic Stress Fields
40. Effects of Capillarity
41. Effects of Seepage
42. LATERAL EARTH PRESSURE AT REST
43. STRESSES IN SOIL FROM SURFACE LOADS
44. Strip Load
45. Uniformly Loaded Rectangular Area
46. Vertical Stress Below Arbitrarily Shaped Areas
47. STRESS AND STRAIN INVARIANTS
48. Hooke’s Law Using Stress and Strain Invariants
49. STRESS PATHS
50. Plotting Stress Paths Using Two-Dimensional Stress Parameters
51. BASIC CONCEPTS
52. Consolidation Under a Constant Load Primary Consolidation
53. Void Ratio and Settlement Changes Under a Constant Load
54. Primary Consolidation Parameters
56. Procedure to Calculate Primary Consolidation Settlement
58. Solution of Governing Consolidation Equation Using Fourier Series
59. Finite Difference Solution of the Governing Consolidation Equation
61. Oedometer Test
62. Determination of the Coefficient of Consolidation
63. Determination of the Past Maximum Vertical Effective Stress
65. TYPICAL RESPONSE OF SOILS TO SHEARING FORCES
67. Effects of Increasing the Normal Effective Stress
68. Effects of Soil Tension
69. Coulomb’s Failure Criterion
70. Taylor’s Failure Criterion
71. Mohr - Coulomb Failure Criterion
72. INTERPRETATION OF THE SHEAR STRENGTH OF SOILS
74. Conventional Triaxial Apparatus
75. Unconfined Compression (UC) Test
76. Consolidated Undrained (CU) Compression Test
79. Hollow-Cylinder Apparatus
80. FIELD TESTS
81. BASIC CONCEPTS
82. Soil Yielding
All topics are not listed because of character limitations set by the Play Store.
This unique application is for all students across the world. It covers 113 topics of Electrical System Design and Estimation in detail. These 113 topics are divided into 6 units.
Each topic is around 600 words and is complete with diagrams, equations and other forms of graphical representations along with simple text explaining the concept in detail.
The USP of this application is "ultra-portability". Students can access the content on the go from anywhere they like.
Basically, each topic is like a detailed flash card and will make the lives of students simpler and easier.
Some of topics Covered in this application are:
1. Types of lighting schemes
2. Electrical Symbols
3. Lists of Electrical symbols
4. Salient Features of Electricity Act, 2003
5. Consequences of Electricity Act, 2003
6. Indian Electricity Rules (1956)
7. General safety precautions
8. Role and Scope of National Electric Code
9. Components of National electric code
10. Classification of Supply Systems - TT system
11. Classification of Supply Systems - TN system
12. Classification of Supply Systems - IT system
13. Selection criteria for the TT, TN and IT systems
14. Load break switches
15. Switch Fuse Units & Fuse Switches
16. Circuit Breakers - MCB
17. Circuit Breakers - MCB Selection & Characteristics
18. Circuit Breakers - RCCB
19. Circuit Breakers - MCCB
20. Circuit Breakers - ELCB
21. Circuit Breakers - Voltage Base ELCB
22. Circuit Breakers - Current-operated ELCB
23. Circuit Breakers - ACB
24. Operation of ACB
25. Air Blast Circuit Breaker
26. Different Types of Air Blast Circuit Breaker
27. Circuit Breakers - OCB
28. Bulk Oil Circuit Breaker
29. Single & Double Break Bulk Oil Circuit Breaker
30. Circuit Breakers - Minimum Oil
31. Circuit breakers - VCB
32. Electrical Switchgear
33. SF6 Circuit Breaker
34. Types and Working of SF6 Circuit Breaker
35. Vacuum Arc or Arc in Vacuum
36. Different types of fuses
37. Protection against overload
38. Delay curves
39. Service connections
40. Electrical Diagrams
41. Methods for representation for wiring diagrams
42. Systems of House Wiring
43. Neutral and earth wire
44. Load Factor for Electrical Installations
45. Earth bus- Design of earthing systems
46. Demand Factor for electrical installations
47. Diversity Factor for electrical installations
48. Utilization factor & Maximum Demand for electrical installations
49. Coincidence factor for electrical installations
50. Demand Factor & Load Factor according to Type of Buildings
51. Design of LT panels
52. Current Rating of single core XLPE Un-armoured INSULATED Cables
53. Current Rating of single core XLPE Armoured INSULATED Cables
54. Current Rating of Two core XLPE Un-armoured INSULATED Cables
55. Current Rating of Two core XLPE Armoured INSULATED Cables
56. Current Rating of Three core XLPE Un-Armoured Insulated Cables
57. Current Rating of Three core XLPE Armoured INSULATED Cables
58. Current Rating of Three & Half core XLPE Un-Armoured INSULATED Cables
59. Current Rating of Three & Half core XLPE Armoured INSULATED Cables
60. Current Rating of Four core XLPE Un-Armoured INSULATED Cables
61. Current Rating of Four core XLPE Armoured INSULATED Cables
62. Qualities of good lighting schemes
63. Luminous flux
64. Luminous intensity
65. Illuminance
66. Luminance
67. Reflection and Reflection Factor
68. Laws of illumination
69. Necessity of Illumination
70. Photometry & Luminaire
71. Photometric Bench
72. Incandescent Lamps
73. Characteristics of Incandescent Lamps
74. Discharge Lamps
75. Mercury Vapor Lamp
76. Sodium Vapor Lamp
77. Fluorescent Lamp
78. Luminaries in Illumination Schemes
79. Mounting of Luminaries
80. Glare
81. Evaluation of Glare
82. Color
83. Color Specification Systems - Munsell system
84. Color Specification Systems
85. Interior Lighting
86. Trends and finishing of Interior Lighting
87. Sports Lighting
All topics are not listed because of character limitations set by the Play Store. | {"url":"https://play.google.com/store/apps/details?id=com.faadooengineers.fundamentalselectronicdevice","timestamp":"2014-04-19T01:30:20Z","content_type":null,"content_length":"284915","record_id":"<urn:uuid:c97f5004-2bf4-4f86-b3f6-630d17fa1808>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00068-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Questions about the drag equation and aerodynamics
I have some questions about the drag equation and aerodynamics:
[itex]F = \frac{1}{2}\rho v^2 C A[/itex]
I'm trying to calculate the atmospheric drag on a streamlined body (the drag coefficient will be a very small number) with a velocity of about 8 km/s at about 38,000 meters altitude, where the
atmospheric density is only about [itex]5.4\times10^{-3}\ kg/m^3[/itex]. So my question is: is the drag equation valid even for these extreme values, or is there a better equation that I can use?
Secondly, which is the optimal geometrical shape for [itex]\frac{Volume}{Drag}[/itex]? Is it a streamlined body shape? If it is a streamlined body shape, what is the equation for calculating its
volume, and what is the equation for calculating its reference area? Can't find it!
Really appreciate any help on this! | {"url":"http://www.physicsforums.com/showpost.php?p=4268942&postcount=1","timestamp":"2014-04-18T13:46:27Z","content_type":null,"content_length":"9335","record_id":"<urn:uuid:957fd430-3985-4d8a-b65d-41b68633a74b>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00314-ip-10-147-4-33.ec2.internal.warc.gz"} |
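For a rough feel of the numbers in the question above, here is a sketch that plugs the post's figures into the drag equation and also estimates the Knudsen number, a standard check of whether continuum-flow formulas of this form still apply at that altitude. The drag coefficient, reference area, characteristic length, and the molecular constants for air are assumptions of mine, not values from the post:

```python
import math

# Values taken from the post:
rho = 5.4e-3      # kg/m^3, atmospheric density at ~38,000 m
v = 8000.0        # m/s

# Placeholder assumptions (not given in the post):
Cd = 0.1          # assumed drag coefficient for a streamlined body
A = 1.0           # m^2, assumed reference (frontal) area
L = 1.0           # m, assumed characteristic body length

# Drag equation F = 0.5 * rho * v^2 * Cd * A
F = 0.5 * rho * v**2 * Cd * A   # newtons

# Continuum-flow check via the Knudsen number Kn = mean_free_path / L,
# using standard rough values for the mean mass and effective diameter
# of an "air molecule".
m_air = 4.81e-26  # kg
d = 3.7e-10       # m
mfp = m_air / (math.sqrt(2) * math.pi * d**2 * rho)  # mean free path, m
Kn = mfp / L

print(f"drag force ~ {F:.0f} N (for the assumed Cd and A)")
print(f"mean free path ~ {mfp:.2e} m, Kn ~ {Kn:.2e}")
# Kn well below ~0.01 indicates continuum flow, where the drag equation's
# form still applies (though Cd itself varies strongly at hypersonic speed).
```

With these assumed values the Knudsen number comes out tiny, suggesting the continuum drag equation is still usable at that density for meter-scale bodies; the hard part is that Cd for a hypersonic body is not a single constant.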
Point that is at the shortest total distance from multiple points
June 1st 2010, 11:12 AM #1
Jan 2009
This question is purely for curiosity's sake:
Suppose I have a collection of points on a 2D plane, {P1, P2, ..., Pn}.
How would I find the point X such that the sum of the magnitudes of all the vectors ||PiX|| is the smallest possible?
Last edited by BigC; June 1st 2010 at 11:15 AM. Reason: Didn't specify magnitude
The line of best fit uses least-squares. If your points are in a linear fashion, you can come up with a line, y=mx+b, where the magnitude is minimized. The line you achieve will be the line of
best fit. Of course, we could do this for circles, quadratics, polynomials, etc.
Bah, this is way simpler than I thought it was. Just clicked that all I need is the average of the points.
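One caveat worth adding here with a sketch (the function names below are mine): the average of the points minimizes the sum of squared distances, while the sum of plain distances asked about above is minimized by the geometric median, which is in general a different point and can be approximated with Weiszfeld's iteration:

```python
import math

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def geometric_median(points, iters=1000, tol=1e-12):
    """Weiszfeld's iteration for the point minimizing the sum of distances."""
    x, y = centroid(points)          # start from the centroid
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for px, py in points:
            d = math.hypot(x - px, y - py)
            if d < tol:              # iterate landed on a data point; stop here
                return (px, py)
            w = 1.0 / d              # each point pulls with weight 1/distance
            num_x += w * px
            num_y += w * py
            den += w
        nx, ny = num_x / den, num_y / den
        if math.hypot(nx - x, ny - y) < tol:  # converged
            break
        x, y = nx, ny
    return (x, y)

def total_distance(q, points):
    return sum(math.hypot(q[0] - p[0], q[1] - p[1]) for p in points)

pts = [(0, 0), (1, 0), (2, 0), (0, 10)]   # made-up example points
c = centroid(pts)
g = geometric_median(pts)
print("centroid:", c, "total distance:", total_distance(c, pts))
print("geometric median:", g, "total distance:", total_distance(g, pts))
```

For symmetric configurations the two coincide, but for skewed ones (as in the example set above) the geometric median gives a strictly smaller total distance than the average.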
Mar 2010 | {"url":"http://mathhelpforum.com/advanced-algebra/147323-point-shortest-total-distance-multiple-points.html","timestamp":"2014-04-16T09:00:53Z","content_type":null,"content_length":"46210","record_id":"<urn:uuid:29d9d4e9-67cc-4bd3-bcbf-ee3a6cc15605>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
Standard Deviation from Percentage groups
October 9th 2012, 12:22 PM #1
Oct 2012
I have data that I can use to indicate what percentage of the US population falls with different groups. IE
under 100 lbs .2%
100-110 lbs. 4%
The whole will equal 100%. But I want to calculate the standard deviation of the data, but have no idea where to start. Any help would be greatly appreciated. Here is the data I am working with:
Re: Standard Deviation from Percentage groups
Hey badams.
To calculate the standard deviation (or sample error), you need to have either a population distribution or a sample of observations from that distribution.
Calculating the sample variance is basically 1/(n-1) * sum((x_i - [x])^2), where [x] is the mean of the sample and x_i is the ith observation, summed over all observations (basically the sum of squares divided by n - 1); the sample standard deviation is the square root of this.
This can be used to estimate the standard deviation of a sample, but you still need to decide whether you have an assumed population or whether you don't and these are two very distinct things
using very different kinds of statistics (but the same core principles do apply).
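To make that concrete for grouped percentage data like the poster's: take each bin's midpoint as a stand-in for every value in the bin and weight it by that bin's fraction of the population. The bins below are invented for illustration (the poster's actual table is not shown):

```python
import math

# (bin midpoint, fraction of population); the fractions must sum to 1.
groups = [(95, 0.002), (105, 0.04), (125, 0.30), (150, 0.40), (175, 0.258)]
assert abs(sum(f for _, f in groups) - 1.0) < 1e-9

mean = sum(x * f for x, f in groups)
variance = sum(f * (x - mean) ** 2 for x, f in groups)  # population-style variance
sd = math.sqrt(variance)
print(round(mean, 1), round(sd, 1))  # → 147.0 20.7
```

Because midpoints replace the true values inside each bin, this only approximates the real standard deviation; wider bins mean a coarser approximation.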
Sine and Cosine Rule
Trigonometry - Sine and Cosine Rule
The solution for an oblique triangle can be done with the application of the Law of Sine and Law of Cosine, simply called the Sine and Cosine Rules. An oblique triangle, as we all know, is a triangle
with no right angle. It is a triangle whose angles are all acute or a triangle with one obtuse angle.
The two general forms of an oblique triangle are as shown:
Sine Rule (The Law of Sine)
The Sine Rule is used in the following cases:
CASE 1: Given two angles and one side (AAS or ASA)
CASE 2: Given two sides and a non-included angle (SSA)
The Sine Rule states that the sides of a triangle are proportional to the sines of the opposite angles. In symbols,
a / sin A = b / sin B = c / sin C
Case 2: SSA or The Ambiguous Case
In this case, there may be two triangles, one triangle, or no triangle with the given properties. For this reason, it is sometimes called the ambiguous case. Thus, we need to examine the possibility of no solution, one solution, or two solutions. For given a, b and angle A (opposite side a), compare a with the height h = b sin A: if a < h there is no triangle; if a = h there is exactly one (right) triangle; if h < a < b there are two triangles; and if a ≥ b there is exactly one.
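The count of solutions in this case can also be checked mechanically from the Sine Rule: compute sin B = b sin A / a and keep the angles that leave room for angle C. A small sketch (mine, not part of the page):

```python
import math

def solve_ssa(a, b, A_deg):
    """Given sides a, b and non-included angle A (opposite side a), return the
    possible angles B in degrees: 0, 1, or 2 of them (the ambiguous case)."""
    s = b * math.sin(math.radians(A_deg)) / a  # sin B from the Sine Rule
    if s > 1:
        return []                              # no such triangle
    B1 = math.degrees(math.asin(s))
    B2 = 180 - B1                              # the supplementary candidate
    candidates = [B for B in (B1, B2) if A_deg + B < 180]  # angle C must be > 0
    return sorted(set(round(B, 1) for B in candidates))

print(solve_ssa(6, 8, 35))   # → [49.9, 130.1]  (two triangles)
print(solve_ssa(2, 8, 35))   # → []             (no triangle)
print(solve_ssa(10, 8, 35))  # → [27.3]         (one triangle)
```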
Cosine Rule (The Law of Cosine)
The Cosine Rule is used in the following cases:
1. Given two sides and an included angle (SAS)
2. Given three sides (SSS)
The Cosine Rule states that the square of the length of any side of a triangle equals the sum of the squares of the lengths of the other sides minus twice their product multiplied by the cosine of their included angle. In symbols:
c^2 = a^2 + b^2 − 2ab cos C
Go to the next page to start practicing what you have learnt. | {"url":"http://mathematics.laerd.com/maths/trigonometry-sine-and-cosine-rules-intro.php","timestamp":"2014-04-19T09:29:24Z","content_type":null,"content_length":"18938","record_id":"<urn:uuid:34f8bcac-659c-463a-8553-949a0fd12e06>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00571-ip-10-147-4-33.ec2.internal.warc.gz"} |
Figure 4: The first few dispersion curves. Solid curves: the nonlinear case (solutions of (7.14)); dashed curve: the linear case (solutions of (8.5)). Dashed lines mark the thickness of the layer and the lower and upper bounds in the case of a linear medium in the layer.
Ferris Wheel Physics
Ferris wheel physics is directly related to centripetal acceleration, which results in the riders feeling "heavier" or "lighter" depending on their position on the Ferris wheel.
The Ferris wheel consists of an upright wheel with passenger gondolas (seats) attached to the rim. These gondolas can freely pivot at the support where they are connected to the Ferris wheel. As a
result, the gondolas always hang downwards at all times as the Ferris wheel spins.
To analyze the Ferris wheel physics, we must first simplify the problem. The figure below shows a schematic of the Ferris wheel, illustrating the essentials of the problem.
(1) is the top-most position and (2) is the bottom-most position
O is where the gondolas are attached to the Ferris wheel
P is where the passengers sit (on the gondola)
R is the radius of the Ferris wheel
w is the angular velocity of the Ferris wheel, in radians/s
The forces acting on the passengers are due to the combined effect of gravity and centripetal acceleration, caused by the rotation of the Ferris wheel with angular velocity w.
We wish to analyze the forces acting on the passengers at locations (1) and (2). The figure below shows a free-body diagram for the passengers at these locations.
F_g = mg is the force of gravity pulling down on the passengers, where
m is the mass of the passengers and
g is the acceleration due to gravity, which is 9.8 m/s^2
F_1 is the force exerted on the passengers (by the seats) at point P, at location (1)
F_2 is the force exerted on the passengers (by the seats) at point P, at location (2)
a_P is the centripetal acceleration of point P. This acceleration is always pointing towards the center of the wheel. So at location (1) this acceleration is pointing directly down, and at location (2) this acceleration is pointing directly up.
The centripetal acceleration is given by a_P = w^2 R.
The centripetal acceleration always points towards the center of the circle. So at the bottom of the circle, a_P is pointing up. At the top of the circle a_P is pointing down. At these two positions a_P is a vector which is aligned (parallel) with gravity, so their contributions can be directly added together.
By Newton's second law, ΣF = ma, where ΣF is the sum of the forces.
To solve for F_1 and F_2 we must apply this equation in the vertical direction.
The acceleration of the passengers at point P is equal to the acceleration of the Ferris wheel at point O. This is because point P does not move relative to point O. Therefore, the velocity and acceleration of these two points are the same.
First, solve for F_1: F_1 = m(g − w^2 R)
Next, solve for F_2: F_2 = m(g + w^2 R)
We can see that F_2 > F_1. This means that the passengers feel "heaviest" at the bottom of the Ferris wheel, and the "lightest" at the top.
So basically, Ferris wheel physics affects your body's "apparent" weight, which varies depending on where you are on the ride. The riders only feel their "true weight" when the centripetal acceleration is pointing horizontally and has no vector component parallel with gravity, and as a result it has no contribution in the vertical direction. This occurs when the riders are exactly halfway between the top and bottom (i.e. they are at the same height as the center of the Ferris wheel).
It's informative to look at an example to get an idea of how much force acts on the passengers.
Let's say we have a Ferris wheel with a radius of 50 meters, which makes two full revolutions per minute.
Two full revolutions per minute translates into w = 0.21 radians/s.
Substituting this into the above equations, with g = 9.8 m/s^2, we find that
a_P = w^2 R = 2.2 m/s^2 (centripetal acceleration)
F_1 = m(9.8 − 2.2) = 7.6m N
F_2 = m(9.8 + 2.2) = 12m N
At the top of the Ferris wheel the passengers experience 0.78g (they feel lighter).
At the bottom of the Ferris wheel the passengers experience 1.2g (they feel heavier).
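The worked example is easy to reproduce; the short script below (symbol names are my own) recomputes the numbers quoted above:

```python
import math

def ferris(radius_m, rev_per_min, g=9.8):
    w = rev_per_min * 2 * math.pi / 60  # angular velocity, rad/s
    a_c = w ** 2 * radius_m             # centripetal acceleration, m/s^2
    # Seat force per unit mass (N/kg), divided by g: "apparent weight" in g's.
    return w, a_c, (g - a_c) / g, (g + a_c) / g

w, a_c, g_top, g_bottom = ferris(50, 2)
print(round(w, 2), round(a_c, 1), round(g_top, 2), round(g_bottom, 2))
# → 0.21 2.2 0.78 1.22
```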
Now that we understand Ferris wheel physics, one can imagine how important it is for a large radius Ferris wheel to turn slowly, given how much influence the rotation rate w will have on the centripetal acceleration a_P, and on F_1 and F_2, as a result.
Spring 1999 LACC Math Contest - Problem 10
Problem 10.
A merchant buys goods at 25% off the list price (the usual price for the goods). He wants to mark the goods so that after a 20% discount of this marked price, 25% of the sale is profit.
What percent of the list price must he mark the goods?
Let L be the list price and P be the Merchant's marked price.
Merchant's cost = 0.75L
Merchant's discounted price = 0.80P
(This is just the price at which he finally sells the goods, i.e., the final price to which he discounts the marked price.) Since 25% of the sale is to be profit, we have the equation inter-relating these quantities:
0.80P − 0.75L = 0.25(0.80P), which simplifies to 0.60P = 0.75L, so P = 1.25L.
That is, the merchant's marked price P is 125% of the list price L. | {"url":"http://lacitycollege.edu/academic/departments/mathdept/samplequestions/sp99_12.htm","timestamp":"2014-04-20T00:51:45Z","content_type":null,"content_length":"4364","record_id":"<urn:uuid:71bdf44c-73dd-49b9-8879-53a5b041c127>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00525-ip-10-147-4-33.ec2.internal.warc.gz"} |
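Substituting the answer back in confirms it; L = 100 below is an arbitrary choice:

```python
L = 100.0                  # list price (any value works)
P = 1.25 * L               # marked price, per the solution above
cost = 0.75 * L            # merchant buys at 25% off list
sale = 0.80 * P            # customer pays after the 20% discount
profit = sale - cost
print(round(sale, 2), round(profit, 2), round(profit / sale, 2))  # → 100.0 25.0 0.25
```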
Hillside, NJ Science Tutor
Find a Hillside, NJ Science Tutor
...I have a background as a Teacher's Assistant for a Forensic Anthropology class where I was responsible for setting up labs and tutoring classes. Some of my specialties are in Cultural
Anthropology, Sexuality and Culture, Paleopathology, Introductory and Advance Human Evolution. I have a minor in Archaeology from SUNY Plattsburgh.
2 Subjects: including anthropology, archaeology
...As a college graduate with a degree in biophysics. I have completed several upper level biology courses (immunology, genetics, cell biology, physiology and microbiology). I have experience
tutoring college level introduction to biology, human biology and physiology. In order to receive my degree in biophysics, I completed three college level calculus classes.
18 Subjects: including organic chemistry, genetics, biology, calculus
...I also remember that certain types of lecturing styles made course material interesting, while others were not effective. In fact, I observed that, on occasion, alternatives to lectures, such
as small-group discussions, hands-on laboratory sessions, and question-and-answer periods, worked well. Others, such as take-home exams were not effective.
39 Subjects: including zoology, ACT Science, biostatistics, genetics
...I did my undergraduate in Physics and Astronomy at Vassar, and did an Engineering degree at Dartmouth. I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and
will be done in a year. I have a lot of experience tutoring physics and math at all levels.
11 Subjects: including astronomy, physical science, physics, Spanish
...I create dishes ranging from Chicken Chipotle Pasta to tomato sauce fresh from the garden with homemade pasta. I want to help others find the joy of cooking. During the summer between my
junior and senior year of college, I interned with the Fordham University Career Services office under the associate director.
10 Subjects: including sociology, psychology, ESL/ESOL, GED
Cryptography on a quantum computer
Claude Crepeau
The basic notion of Quantum Key Distribution will first be discussed. Then information theoretical notions of cryptography over quantum states such as encryption and authentication will be covered.
In particular, we show that for quantum data, authentication imply encryption. Computational analogues will also be presented: quantum public-key cryptography, public-key authentication and
impossibility of quantum digital signatures.
No prior knowledge of quantum physics is expected.
Gates 4B (opposite 490), 10/17/02 (THURSDAY), 4:30 PM | {"url":"http://crypto.stanford.edu/seclab/sem-02-03/crepeau_abstract.html","timestamp":"2014-04-20T00:39:05Z","content_type":null,"content_length":"1356","record_id":"<urn:uuid:7bcf6465-1c53-4a63-a863-3eb1ffae8d71>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mechanics problem
November 12th 2008, 11:15 AM
Mechanics problem
Could anybody help with this one please:
Q. A point particle moves in a plane with trajectory (in metres per second)
r (t) = x(t)i + y(t) j,
x(t) = 1/2 t²
y(t) = 2 cos(t)
a) Sketch the trajectory of the particle for t ≥ 0. (not sure if you can do this on a msg board!).
b) Compute the velocity,
v(t), of the particle for t ≥ 0.
c) Compute the acceleration,
a (t), of the particle for t ≥ 0 and determine the maximum value of the magnitude of the vector
a (t).
I know the velocity is the derivative and acceleration is the second derivative but I'm not sure about magnitude, especially the max value.
November 12th 2008, 01:48 PM
$x = \frac{t^2}{2}$
$v_x = t$
$a_x = 1$
$y = 2\cos{t}$
$v_y = -2\sin{t}$
$a_y = -2\cos{t}$
$v(t) = (t)\vec{i} - (2\sin{t}) \vec{j}$
$a(t) = \vec{i} - (2\cos{t}) \vec{j}$
$|a| = \sqrt{1 + 4\cos^2{t}}$
$|a_{max}| = \sqrt{5}$... why?
November 12th 2008, 01:58 PM
graph ... | {"url":"http://mathhelpforum.com/calculus/59191-mechanics-problem-print.html","timestamp":"2014-04-18T07:29:43Z","content_type":null,"content_length":"6455","record_id":"<urn:uuid:3fedb1b0-fc30-4ec0-ba26-2d7eadc57b61>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00608-ip-10-147-4-33.ec2.internal.warc.gz"} |
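To answer the "why?" at the end: cos²t ranges over [0, 1], so |a(t)| = √(1 + 4cos²t) runs from 1 up to √5, with the maximum attained whenever cos t = ±1 (t = 0, π, 2π, …). A quick numeric check (my own, not part of the thread):

```python
import math

# Sample |a(t)| = sqrt(1 + 4 cos^2 t) densely over several periods.
ts = [i * 0.001 for i in range(20000)]  # t in [0, 20)
mags = [math.sqrt(1 + 4 * math.cos(t) ** 2) for t in ts]
print(round(max(mags), 4), round(math.sqrt(5), 4))  # → 2.2361 2.2361
print(round(min(mags), 4))                          # → 1.0 (where cos t ≈ 0)
```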
Posts by
Posts by Dara
Total # Posts: 23
Nevermind, I got it. It's [-6, infinity)
square root x + 6
How to find domain of: y = -3 + √(x+6)
ie math
A right triangle has hypotenuse 8 and area 8. Find the perimeter? I think I have to apply Heron's formula to this question, but I don't know how to solve. Please help. Your help is very much appreciated.
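For the right-triangle question, Heron's formula is not needed: with legs a and b, a² + b² = 8² = 64 and ab/2 = 8 gives ab = 16, so (a + b)² = 64 + 2·16 = 96 and the perimeter is 8 + √96. A quick check (my own):

```python
import math

c, area = 8.0, 8.0                      # hypotenuse and area
leg_sum = math.sqrt(c ** 2 + 4 * area)  # a + b = sqrt(a^2 + b^2 + 2ab) = sqrt(96)
perimeter = c + leg_sum
print(round(perimeter, 3))              # → 17.798
```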
Which sentence is subject verb agreement? Vitamins that are sold in a health-food store are not regulated by the Food and Drug Administration. Vitamins that are sold in a health-food store is not
regulated by the Food and Drug Administration
Which sentence is subject verb-agreement? Peer editing academic papers require critical-thinking skills and diplomacy. Peer editing academic papers requires critical-thinking skills and diplomacy.
Subject-verb agreement
Which sentence is subject verb agreement? Did you know a famous animal-rights activist has criticized horseracing because of the dangers involved? Did you know a famous animal-rights activist have
criticized horseracing because of the dangers involved?
DC = 10 What is the measure of angle ABD?
Find the polar coordinates of (8, 8) for r ≥ 0. I am getting 8 square root of 2 as the first coordinate. I am not sure if I am finding the arctan correctly.
I followed this example A tire has a diameter of 20 inches and is revolving at a rate of 10rpm, at t=0, a certain point is at height 0. What is the height of the point above the ground after 20
seconds? 10rpm means one cycle in 6 seconds. So after 20 seconds it has completed 3...
that 1 2/3 cycles not 12
I tried it a different way: I converted 45 rpm into seconds then to cycles. I got 12/3 cycles. Do you know how to find the phase shift?
It really doesn't matter if the answer was posted or not. It was how it was done. I did not want the answer I just wanted to know what formula I could use so I could do it myself.
I know 45rpm translates to 3pi/2 radians. I am not sure what formula to use to figure out height.
Thank you for trying to help me but that answer does not fit any of the answer choices.
But it leads me back to this same page?
I am not sure what that means?
A car tire has a diameter of 3 feet and is revolving at a rate of 45 rpm. At t = 0, a certain point is at height 0. What is the height of the point above the ground after 45 seconds? Can you lead me
into the right direction?
If you're given a 90 degree angle split down the middle, and one of the angles is 30 degrees, what would the other angle be? sorry im not too good at math
5th grade social studies
What happens when the government raises taxes? what would happen if consumers did not pay taxes on goods and services?
What is an Adverb? and How is it used in a sentance?
social studies
what problems arose because of the increase in industry during the early 1900s
what other ways do sociologists use to calculate prejudice? If you tell us the ways you've already listed about how sociologists calculate prejudice, we may be able to help you find OTHER ways. yuyu
Harvard's IAT exams, test your prejudice by presenting images of light... | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Dara","timestamp":"2014-04-18T16:37:45Z","content_type":null,"content_length":"10298","record_id":"<urn:uuid:427fa95f-3547-4f9c-bb19-00ead9c3d56e>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00617-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] degree matrix construction
Satya Upadhya satyaupadhya at yahoo.co.in
Fri Sep 15 08:09:25 CDT 2006
Dear Friends,
my question is the following:
Suppose i have the following code:
>>> from LinearAlgebra import *
>>> from Numeric import *
>>> A = [1,2,1,3,1,3,4,1,2]
>>> B = reshape(A,(3,3))
>>> C = sum(B,1)
>>> C
array([4, 7, 7])
Now, my problem is to construct a degree matrix D which is a 3 * 3 matrix with diagonal elements 4,7,7 (obtained from the elements of C) and all off-diagonal elements equal to 0.
Could some kind soul kindly tell me how to do this.
I've looked at the help for the diagonal function and I am unable to do what I wish to. Furthermore, I don't understand the meaning of axis1 and axis2:
>>> help (diagonal)
Help on function diagonal in module Numeric:
diagonal(a, offset=0, axis1=0, axis2=1)
diagonal(a, offset=0, axis1=0, axis2=1) returns all offset diagonals
defined by the given dimensions of the array.
Thanking you,
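For what it's worth, the degree matrix asked for here is a one-liner in present-day NumPy (the post itself uses the old Numeric module, where `identity(3) * C` gives the same result). As for `axis1`/`axis2`: they only matter for arrays with more than two dimensions, where they select which pair of axes forms the planes whose diagonals are extracted. A sketch:

```python
import numpy as np

A = [1, 2, 1, 3, 1, 3, 4, 1, 2]
B = np.reshape(A, (3, 3))
C = B.sum(axis=1)   # row sums: array([4, 7, 7])
D = np.diag(C)      # degree matrix: C on the diagonal, zeros elsewhere
print(D)
# [[4 0 0]
#  [0 7 0]
#  [0 0 7]]
```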
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-September/010681.html","timestamp":"2014-04-20T21:15:40Z","content_type":null,"content_length":"3927","record_id":"<urn:uuid:75241422-a109-4306-8080-3eaa307a6889>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: April 2012 [00158]
Re: gives wrong result
• To: mathgroup at smc.vnet.net
• Subject: [mg125890] Re: gives wrong result
• From: Yi Wang <tririverwangyi at gmail.com>
• Date: Fri, 6 Apr 2012 05:53:54 -0400 (EDT)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com
Dear Bob,
Thanks a lot for your reply! Your suggestion indeed works but I am
still curious about it:
Because I used Replace instead of ReplaceAll by purpose. This is to
mean, in the rule I want Mathematica to match the entire expression
instead of part of it. For example, if I am not using Simplify, but
instead use the rule directly:
Replace[aa PDD[z, d], aa_. PDD[bb_, cc_] :> -bb PDD[aa, cc]]
I get the (correct) non-zero result. Thus there is still something I
don't understand in Simplify[].
Thanks again,
PS: I am new to this discussion group. Thus I am not sure if directly
reply to your personal email and cc to the group is appropriate. If
not, sorry, and let me know so that I will stop doing it to you and
On Thu, Apr 5, 2012 at 7:38 AM, Bob Hanlon <hanlonr357 at gmail.com> wrote:
> ClearAll[PDD, try, t, z, aa];
> PDD[a_?NumericQ, idx_] := 0;
> t = aa PDD[z, d];
> try[expr_] := Replace[expr, aa_. PDD[bb_, cc_] :> -bb PDD[aa, cc]];
> Simplify[t, TransformationFunctions :> {try}]
> 0
> The default value for aa in aa_. PDD[bb_, cc_] is one. Consequently,
> PDD[z, d] by itself matches the LHS of the rule and becomes -z PDD[1,
> d] which evaluates to zero. Note the behavior if the default is
> removed:
> try[expr_] := Replace[expr, aa_ PDD[bb_, cc_] :> -bb PDD[aa, cc]];
> Simplify[t, TransformationFunctions :> {try}]
> aa PDD[z, d]
> Bob Hanlon
> On Thu, Apr 5, 2012 at 5:52 AM, Yi Wang <tririverwangyi at gmail.com> wrote:
>> Hi, all,
>> I met a problem when using the TransformationFunctions option in Simplify:
>> ClearAll[PDD,try,t,z,aa];
>> PDD[a_?NumericQ, idx_] := 0;
>> t = aa PDD[z, d];
>> try[expr_] := Replace[expr, aa_. PDD[bb_, cc_] :> -bb PDD[aa, cc]];
>> Simplify[t, TransformationFunctions :> {try}]
>> I expect Simplify to do nothing, because the replace rule in try[] does not make the function simpler in this special case. However, Simplify[...] returns 0.
>> If I delete the line " PDD[a_?NumericQ, idx_] := 0; ", Simplify will give the correct result. However, z (or N[z]) is not a number thus it seems the above line shouldn't matter.
>> I also tried several other tests. I found a_?NumberQ, a_?IntegerQ both have the above problem, while a_ListQ has the desired behaviour. | {"url":"http://forums.wolfram.com/mathgroup/archive/2012/Apr/msg00158.html","timestamp":"2014-04-19T17:07:28Z","content_type":null,"content_length":"27499","record_id":"<urn:uuid:33b5b0a9-eab1-4041-a08f-6cd090098bbb>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00442-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Faster
BERKELEY -- Physicians and surgeons are constantly confronted with questions about the invisible. How big is a brain tumor? Is it shrinking with treatment? How thin is a heart-chamber wall, and
how much blood does the heart pump? What is the exact shape and volume of a liver?
Three frames from a movie of a beating heart showing the walls and interior of the chambers
A fast new way to compute three-dimensional models of internal organs and other anatomical features has been developed by Ravi Malladi and James Sethian of the Department of Energy's Lawrence
Berkeley National Laboratory. Both are in the Mathematics Department of Berkeley Lab's Computing Sciences Directorate; Sethian is also a professor of mathematics at the University of California,
"We want to make the task of visualizing and reconstructing medical shapes easy for doctors," says Malladi, who notes that noninvasive imaging has made great advances in recent decades. Whether
from x rays or ultrasound or magnetic-resonance imaging or computed tomography, however, even the best images have problems. They are flat -- at best a series of map-like slices through the
anatomical region of interest -- and they are usually noisy, like a TV picture plagued with snow.
"One of the first things we set out to do was make cleaner images without destroying essential information," says Malladi. "Of course trained physicians and medical technicians can find the
boundaries even in noisy images, and indeed hospitals hire interns to sit in front of computers and painstakingly click out the edges on series of digital images. The challenge is to create a
program that can make these decisions in automatic fashion."
Malladi and Sethian have used their methods to make images of organs with shapes as intricate as those of the human brain; from sonograms they have modeled the fetus in the womb; they have made
movies of a pumping heart, relating blood flow in and out of the chambers to the thickness of the heart walls.
"Now all a physician has to do is click once or twice inside the region of interest and the computer program will build a model in a few seconds," Malladi says. The new methods make medical
images useful to doctors in real time, aiding fast, well informed decisions for effective treatment.
The way Malladi and Sethian build cleaner images is closely related to the way they build three-dimensional models from a series of flat images. An "implicit representation of curves" is the
underlying mathematical approach, a form of partial-differential equations pioneered by Sethian which tracks boundaries as they evolve in space and time. "Level Sets" and "Fast Marching" are two
methods important in recovering medical shapes. (See background information below.)
Level Sets is a method of modeling curves and solids by incorporating an extra dimension -- viewing the representation from above, as it were. Fast Marching is a method of approximating the
position of curves and surfaces moving under a simple "speed law," which attracts the evolving curve to a boundary and closely relates it to the regularity of the emerging shape. With these
methods a computer algorithm can determine the internal and external boundaries of anatomical solids more quickly and less ambiguously than traditional ways of interpreting visual information.
The process begins when the physician uses the computer to plant a visual seed in the image to be modeled. "Even a single point, represented by a computer mouse click, can be thought of as a very
small circle or sphere," Malladi says. From that point an increasingly complex shape starts to grow. Knowing when to stop, however, depends on the algorithm recognizing edges that are often hard
to read.
The trick lies in mathematically adjusting the speed of the growing curve. As the curve advances, it encounters changes in the gray-scale values of the pixels in the image. Where changes are
small from one pixel to the next, the curve moves quickly -- the algorithm assumes there is no nearby boundary -- but where changes are large, the curve senses a boundary and slows down.
Too-abrupt changes in curvature are also smoothed by the calculation.
Since the mathematical view is always from one dimension "above" the shape being modeled, a complex 3-D model can be quickly constructed, with the propagating boundary curve easily working its
way around holes and voids. What's inside and outside a complex solid can be modeled separately. Models constructed at different moments in time produce movies of 3-D shapes in action.
Malladi's and Sethian's recent results were published in Proceedings of the Sixth International Conference on Computer Vision, Mumbai (Bombay), India, January, 1998. Movies of anatomical model
construction can be found on the web at http://www.lbl.gov/~malladi.
The Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California.
Level Sets and Fast Marching, combined with other mathematical techniques, have yielded quick, efficient, and manageable programs for anatomical images.
Imagine tracking the changing boundary of two rings of fire burning outward in dry grass. One way to follow the changing boundary between the burned area and the unburned area is to track points
on each expanding circle. But as soon as the circles overlap, all the points formerly on boundaries but now in the burned area must be abruptly discarded, which makes the calculation messy.
An online movie illustrating Level Sets and Fast Marching
Now instead of two circles on a plane, picture two upright cones, partly overlapping: the original two rings of fire lie on the boundary where a level plane slices through the surfaces of both
cones near their tips. Move the plane up, and the circles grow together. The boundary follows automatically, with no points to be tracked or arbitrarily discarded -- only a level to be
determined, which describes the evolving curve.
It's easy to imagine adding a third dimension to a two-dimensional curve; it's hard to picture a fourth spatial dimension associated with a volume. Yet mathematically it is no more difficult to
use this Level Sets approach with solids.
What drives a curve's evolution, whether in two dimensions or three, is a "speed law." Typically a speed law might move a point on a boundary faster in regions of tight curvature and slower in
less curved regions. This would have the effect of smoothing out transitions where the curve changes direction abruptly, an effect known as "viscosity" because it resembles the slow, smooth
transitions of a thick liquid like honey.
Fast Marching equations come into play in making choices to compute the evolving curve most efficiently. Curvature can be negative (inward) or positive (outward). Any closed curve, no matter how
complex, whose points move perpendicularly inward in the positive regions and outward in the negative regions, at curvature-dependent speed, will inevitably form a circle -- which in turn will
shrink to a point.
To build smooth maps and 3-D images of internal organs, however, the curve or surface has to move in the opposite direction, starting from small geometric shapes or solids and evolving outward to
complex ones. Therefore a speed law is written containing two terms, one that attracts the curve or surface to the boundary of the target object and another that closely relates the curve or
surface to the regularity of the evolving shape -- for example by adding a little "viscosity" from changes in curvature. In this way a complex shape can be quickly and accurately bounded and
filled in.
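The first idea above -- two spreading fronts living as level curves of a single function, so that merging needs no special bookkeeping -- can be illustrated in a few lines. This is a hypothetical NumPy sketch, not Sethian's code, and it uses unit burn speed so the arrival time is simply the distance to the nearest ignition point (the general speed-law case is what Fast Marching computes):

```python
import numpy as np

# Two fires start at p1 = (-0.6, 0) and p2 = (+0.6, 0).  With unit burn
# speed, the region burned by time t is the sub-level set {phi <= t} of
# phi(x) = min(dist(x, p1), dist(x, p2)): both fronts are level curves
# of ONE function, so when the circles meet they merge automatically --
# there are no boundary points to track or discard.
n = 201
xs = np.linspace(-2.0, 2.0, n)
X, Y = np.meshgrid(xs, xs)
phi = np.minimum(np.hypot(X + 0.6, Y), np.hypot(X - 0.6, Y))

def fronts_on_axis(t):
    """Number of separate burned intervals along the x-axis at time t."""
    row = (phi[n // 2] <= t).astype(int)
    return int(np.sum(np.diff(row) == 1) + row[0])

print(fronts_on_axis(0.4))   # 2 -- two separate rings of fire
print(fronts_on_axis(0.8))   # 1 -- the rings have merged into one front
```

The topology change from two fronts to one happens with no explicit logic at all; only the level being queried changes.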
The principles of Level Sets and the method's many applications are discussed in "Tracking interfaces with level sets," by James Sethian, American Scientist, May-June 1997, page 254. A discussion
with online video can be found on the web at http://math.berkeley.edu/~sethian/level_set.html. | {"url":"http://www.lbl.gov/Science-Articles/Archive/anatomical-imaging.html","timestamp":"2014-04-17T12:38:22Z","content_type":null,"content_length":"12318","record_id":"<urn:uuid:77dfab2a-84b4-4d01-ab01-2b23dbbc788c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00358-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sub-Linear Time Error-Correction and Error-Detection
EECS Joint Colloquium Distinguished Lecture Series
Professor Luca Trevisan
EECS Department, U.C. Berkeley
Wednesday, October 18, 2000
Hewlett Packard Auditorium, 306 Soda Hall
4:00-5:00 p.m.
Error-correcting codes based on multivariate polynomials admit very efficient (polylogarithmic-time, or even constant-time, if the computational model is sufficiently generous) probabilistic decoding
procedures and error-detection procedures.
The efficient decoding procedures can recover (with high probability) a selected bit or block of the original message when given a corrupted encoding; the error-detection procedures can distinguish
(with high probability) a valid codeword from a string containing a large fraction of errors.
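A toy example in this spirit is the Hadamard code, the degree-one special case of such polynomial codes: any single message bit can be recovered from just two randomly chosen positions of a moderately corrupted codeword. The following sketch (an illustration, not code from the talk) decodes every bit by majority vote over two-query probes:

```python
import random

random.seed(1)

# Hadamard code: a message m in {0,1}^k is encoded as the list of
# parities <m, x> mod 2 for EVERY x in {0,1}^k (codeword length 2^k).
k = 8
m = [random.randint(0, 1) for _ in range(k)]
m_int = sum(b << j for j, b in enumerate(m))
codeword = [bin(x & m_int).count("1") % 2 for x in range(2 ** k)]

# Corrupt a small fraction of positions (well within the 1/4 radius
# that two-query local decoding tolerates).
word = codeword[:]
for pos in random.sample(range(2 ** k), 16):
    word[pos] ^= 1

# Local decoding of bit i with TWO queries per trial:
#   C(r) xor C(r xor e_i) = <m, e_i> = m_i   when neither query is hit.
def decode_bit(i, trials=151):
    hits = 0
    for _ in range(trials):
        r = random.randrange(2 ** k)
        hits += word[r] ^ word[r ^ (1 << i)]
    return int(2 * hits > trials)           # majority vote

assert [decode_bit(i) for i in range(k)] == m
```

Each vote touches only two of the 256 codeword positions, which is the "sub-linear time" phenomenon in miniature; the price is the Hadamard code's exponentially poor information rate, the trade-off the abstract asks about.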
The existence of such procedures has been known and exploited in complexity theory for a while, and it is relevant to program testing, average-case complexity, probabilistically checkable proofs and
private information retrieval.
We are interested in two questions: Is it possible to construct similar procedures for more general classes of codes? Is there a trade-off between the efficiency of the procedures and the information
rate of the code?
Partial answers follow from work by several people, including the speaker. More general answers would follow from the resolution of certain conjectures that seem to be of independent interest. In the
long run, this line of research could lead, among other things, to a better assessment of the practicality of information-theoretic private information retrieval (a certain class of cryptographic
constructions) and to a simpler proof of the hardest part of the PCP Theorem. Hopefully, such codes could be useful to the practice of fault-tolerant information transmission and storage, but this
remains to be seen.
Luca Trevisan received his PhD from the University of Rome "La Sapienza" in 1997, and he has later been a post-doc at MIT, a post-doc at DIMACS, and an assistant professor at Columbia University. He
is currently an assistant professor at UC Berkeley. Luca received the STOC'97 best student paper award, and is a recipient of the Sloan Research Fellowship and of the NSF Career Award. | {"url":"https://www.eecs.berkeley.edu/Colloquium/Archives/00-01/00fall/trevisan.html","timestamp":"2014-04-17T06:44:18Z","content_type":null,"content_length":"2948","record_id":"<urn:uuid:8e5efa92-6488-4014-80f0-3e6970f129a5>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Re: ZFC and God
Date: Jan 24, 2013 7:50 PM
Author: Jesse F. Hughes
Subject: Re: ZFC and God
WM <mueckenh@rz.fh-augsburg.de> writes:
> On 24 Jan., 14:46, "Jesse F. Hughes" <je...@phiwumbda.org> wrote:
>> It would be swell if you could write it in more or less set-theoretic
>> terms, since, after all, you are allegedly providing a proof in ZF.
>> Thanks much.
> A last approach to support your understanding:
> Define the set of all terminating decimals 0 =< x =< 1 in ZF.
> Do all that you want to do (with respect to diagonalization).
> Stop as soon as you encounter a non-terminating decimal
I've no idea what you're talking about.
Let t:N -> [0, 1) be the usual list of non-terminating decimals.
I define
d(j) = 7 if t_j(j) != 7
= 6 else
There. Done. Just as soon as I specified what d is, it is obviously
non-terminating. There's no process here to stop. The variable d was
undefined and then it was defined and once defined, it is clearly a
non-terminating decimal. (I even hate to use temporal talk like "once
defined", but hopefully my meaning is clear enough.)
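For concreteness, the same definition applied to a finite prefix of any such list (a toy sketch, with each listed real represented by its digit function):

```python
# Toy stand-in for the construction: t[j](i) returns the i-th decimal
# digit of the j-th listed real, here for a 10-element prefix.
t = [lambda i, j=j: (3 * i + j) % 10 for j in range(10)]

def d(j):
    # The j-th digit of the anti-diagonal number d, exactly as defined.
    # Nothing "runs" here and nothing terminates early: d(j) is fixed
    # the moment t is fixed.
    return 7 if t[j](j) != 7 else 6

digits = [d(j) for j in range(10)]
assert all(digits[j] != t[j](j) for j in range(10))   # d differs from every t_j
```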
I patiently await a proof in ZF that d is not non-terminating. Fun
fact: the axioms of ZF don't talk about time or stopping or anything
like that. So, you'll have to rework this "argument" so that it is a
valid argument in ZF.
Let's start with the theorem you want to prove, shall we? What
theorem is it precisely? Is it this:
Let d be defined as above. Then d is a terminating decimal.
Is that your claim? I.e.,
Let d be defined as above. Then there is an i in N such that for
all j > i, d(j) = 0.
Or do you plan on proving something else?
Thanks much. Eagerly awaiting further enlightenment, etc.
"Being in the ring of algebraic integers is just kind of being in a
weird place, but it's no different than if you are in an Elk's Lodge
with weird made up rules versus just being out in regular society."
-- James S. Harris, teacher | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8141305","timestamp":"2014-04-20T19:37:40Z","content_type":null,"content_length":"3147","record_id":"<urn:uuid:f98e998c-4299-401a-89d0-3b1407077e5c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00032-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to solve the following:
4a/15 = 3/5
We need to solve the equation "4a/15 = 3/5"
This equation can be rewritten as:
a*(4/15) = 3/5
To make the left-hand side of the equation equal to a, we multiply both sides of the equation by 15/4, giving
a*(4/15)*(15/4) = (3/5)*(15/4)
We simplify this equation in the following steps:
a*(4*15)/(15*4) = (3*15)/(5*4)
a*(60/60) = 45/20
a = 9/4 = 2.25
To solve 4a/15 = 3/5.
4a/15 = 3/5. Multiply both sides by 15:
15*4a/15 = 3*15/5, or
4a = 9. Divide both sides by 4:
a = 9/4 = 2.25
Given that,
we may write it as
To solve this question, the most important part is to know that 'a' is what you have to find.
In this question, it is easy to see what 'a' is because 15 is a multiple of 5. If you multiply 5 by 3, you will get 15! This is what the two denominators are. So, you have to do the same thing to the numerator; you have to multiply by 3. 3x3 is 9, and that is equal to 4a.
To put it neatly, it is:
4a/15=9/15 (At this stage, you don't have to be concerned about the denominators because they are equal)
This can be said because with the denominators equal, the numerators have to be equal as well.
So, in doing these sorts of questions, you have to make the denominators of both sides equal, and then solve the variable.
This is how I would do it
a=2.25 or a= 2 1/4
4a/15 = 3/5
4a = (3/5) x 15 = 9
a = 9/4
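All of the answers above can be checked mechanically with exact rational arithmetic; a quick sketch using Python's fractions module:

```python
from fractions import Fraction

# 4a/15 = 3/5  =>  a = (3/5) * (15/4)
a = Fraction(3, 5) * Fraction(15, 4)
print(a)         # 9/4
print(float(a))  # 2.25

# Substituting back confirms the solution:
assert Fraction(4, 15) * a == Fraction(3, 5)
```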
| {"url":"http://www.enotes.com/homework-help/how-solve-following-4a-15-3-5-142575","timestamp":"2014-04-17T02:46:34Z","content_type":null,"content_length":"35612","record_id":"<urn:uuid:644fa09b-8fe5-422b-94c0-10559c79050d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Dowtown Carrier Annex, CA Algebra Tutor
Find a Dowtown Carrier Annex, CA Algebra Tutor
Greetings,My name is Chris and the following is a summary of my educational background, and my tutoring and teaching experience. I earned my MBA in June 2009 from the UCLA Anderson School of
Management (#1 Fully-Employed MBA Program 2007, Business Week). I also have a Master’s of Science in Biomed...
30 Subjects: including algebra 2, calculus, algebra 1, physics
...The techniques of both Calculus and Linear Algebra are brought to bear on problems of dynamics -- that is how things move and change with respect to one another. I matriculated into the
Doctoral Program in Mathematics at University of Illinois. My coursework includes the following: Calculus, Mu...
20 Subjects: including algebra 2, algebra 1, chemistry, reading
...The more a student sees and works through a problem or concept, the more comfortable they become. Repetition is key! Once a concept becomes second nature, other layers can be added on to deepen
the knowledge in a subject.
14 Subjects: including algebra 1, algebra 2, calculus, physics
...Some of the concepts covered are: Writing algebraic expressions, the properties of rational numbers, Solving equations with one and several variables,the laws of exponents, writing linear
equations, solving systems of equations, graphing systems of equations, graphing linear equations,factoring p...
10 Subjects: including algebra 2, algebra 1, Spanish, trigonometry
...I've been professionally tutoring since I was 17, when the Princeton Review hired me to teach SAT classes in Michigan. I later graduated magna cum laude from Yale with joint bachelor’s and
master’s degrees in history and then completed an Master's of Philosophy in literature at King's College, ...
42 Subjects: including algebra 2, algebra 1, reading, English | {"url":"http://www.purplemath.com/Dowtown_Carrier_Annex_CA_Algebra_tutors.php","timestamp":"2014-04-18T05:42:36Z","content_type":null,"content_length":"24622","record_id":"<urn:uuid:3c22beb4-3884-4f64-afc0-f5049008ffad>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00268-ip-10-147-4-33.ec2.internal.warc.gz"} |
form of symmetric matrix of rank one
April 16th 2013, 04:22 PM #1
Mar 2013
Mount Pleasant, MI, USA
form of symmetric matrix of rank one
The question is:
Let $C$ be a symmetric matrix of rank one. Prove that $C$ must have the form $C=aww^T$, where $a$ is a scalar and $w$ is a vector of norm one.
(I think we can easily prove that if $C$ has the form $C=aww^T$, then $C$ is symmetric and of rank one. But what about the opposite direction...that is what we need to prove. How to prove this?)
Re: form of symmetric matrix of rank one
Hey ianchenmu.
Have you tried proof by contradiction? (Assume it has different properties and find a contradiction).
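A numerical illustration of the intended result, via the spectral theorem (a symmetric matrix has an orthonormal eigenbasis, and rank one forces exactly one nonzero eigenvalue) -- a sketch, not a proof:

```python
import numpy as np

v = np.array([1.0, -2.0, 2.0])
C = np.outer(v, v)                      # symmetric and rank one

# eigh returns an orthonormal eigenbasis for a symmetric matrix;
# rank one means a single nonzero eigenvalue a with unit eigenvector w,
# so C = a * w w^T.
vals, vecs = np.linalg.eigh(C)
i = int(np.argmax(np.abs(vals)))
a, w = vals[i], vecs[:, i]

assert np.isclose(np.linalg.norm(w), 1.0)      # w has norm one
assert np.allclose(C, a * np.outer(w, w))      # C = a * w w^T
print(round(float(a), 6))                      # 9.0, i.e. ||v||^2
```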
April 16th 2013, 06:49 PM #2
MHF Contributor
Sep 2012 | {"url":"http://mathhelpforum.com/advanced-algebra/217625-form-symmetric-matrix-rank-one.html","timestamp":"2014-04-20T09:50:08Z","content_type":null,"content_length":"33292","record_id":"<urn:uuid:0b119a96-438b-48bc-a72d-6e9b466457b1>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00528-ip-10-147-4-33.ec2.internal.warc.gz"} |
Object counting
I am looking for a quick way to count objects in a binary image (object = 1, background = 0).
I have tried Connected Component Labeling and it works for square shapes, but with rounded shapes it has problems. Could someone have a look at the code and let me know where I am wrong.
Attached is what I have done for the connected component labeling.
Are you just counting bits, or do you need to do image analysis of a binary image to find the number of geometric shapes?
It sounds like you're just trying to find connected components. Think of it this way:
You have a graph, each cell of your image with a 1 is a node, two nodes are connected if they are adjacent to each other. Now you can just use any old connected component algorithm.
If you are trying to find components based on pixel intensity (i.e., not just 0's and 1's), things get a little trickier.
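A minimal sketch of that graph view as a BFS flood fill. One likely fix for trouble with rounded shapes is 8-connectivity, so that diagonally touching pixels count as adjacent:

```python
from collections import deque

def count_objects(img, connectivity=8):
    """Count connected components of 1-pixels in a binary image."""
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connectivity == 8:
        nbrs += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for r in range(h):
        for c in range(w):
            if img[r][c] == 1 and not seen[r][c]:
                count += 1                      # new component found
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                        # BFS flood fill
                    y, x = q.popleft()
                    for dy, dx in nbrs:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

ring = [[0, 1, 1, 0],          # a small rounded blob: one object
        [1, 1, 1, 1],
        [0, 1, 1, 0]]
diag = [[1, 0],
        [0, 1]]
print(count_objects(ring))     # 1
print(count_objects(diag, 4))  # 2 -- diagonal pixels are not 4-adjacent
print(count_objects(diag, 8))  # 1
```

The diagonal case shows exactly how a curved boundary can split into spurious components under 4-connectivity.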
Wouldn't you want to perform some type of Laplace or Sobel edge filter on it first before doing a connected component algo? | {"url":"http://cboard.cprogramming.com/general-ai-programming/119712-object-counting-printable-thread.html","timestamp":"2014-04-24T17:04:49Z","content_type":null,"content_length":"7553","record_id":"<urn:uuid:e22b0342-5ed2-4719-ba9f-d4bf8a2cca9f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00601-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Underground Telephone Cable, Consisting Of A ... | Chegg.com
An underground telephone cable, consisting of a pair of wires, has suffered a short somewhere along its length (at point P in the Figure). The telephone cable is 6.00 km long, and in order to determine where the short is, a technician first measures the resistance between terminals AB; then he measures the resistance across the terminals CD. The first measurement yields 15.00 Ohm; the second 110.00 Ohm. Where is the short? Give your answer as a distance from point C. | {"url":"http://www.chegg.com/homework-help/questions-and-answers/underground-telephone-cable-consisting-pair-wires-suffered-short-somewhere-along-length-po-q186901","timestamp":"2014-04-16T09:00:24Z","content_type":null,"content_length":"21458","record_id":"<urn:uuid:a454338c-6b49-4357-b7be-653ad29d99bc>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
An orchestra conductor divides 48 violinists, 24 violists, and 36 cellists into ensembles. Each ensemble has the same number of each instrument.
a. What is the greatest number of ensembles that can be formed?
b. How many violinists, violists, and cellists will be in each?
1. Your three groups are 48, 24 & 36. The largest number that divides each of these numbers is 12. So you will have 12 ensembles. In each ensemble you will have 48/12 = 4 violins, 24/12 = 2 violas & 36/12 = 3 cellos.
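The "largest number dividing all three" is the greatest common divisor; the computation above as a quick sketch:

```python
from math import gcd
from functools import reduce

counts = {"violinists": 48, "violists": 24, "cellists": 36}
ensembles = reduce(gcd, counts.values())       # gcd(48, 24, 36) = 12
per_group = {k: v // ensembles for k, v in counts.items()}
print(ensembles)   # 12
print(per_group)   # {'violinists': 4, 'violists': 2, 'cellists': 3}
```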
Expert answered|xontos|Points 21|
Asked 5/26/2011 4:29:26 PM
| {"url":"http://www.weegy.com/home.aspx?ConversationId=F9293E6A","timestamp":"2014-04-17T18:57:07Z","content_type":null,"content_length":"39691","record_id":"<urn:uuid:eb380dc7-2441-466b-9f56-3d020d0b79ec>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
cos(x) = cosh(x)?
1. The problem statement, all variables and given/known data
2. Relevant equations
from the identities found on the internet:
3. The attempt at a solution
Assuming for the definition of cosh(x), if we take x as being equal to (ix), then surely this shows that cosh(x)=cos(x)? Can someone explain why this is wrong please? because i can't see it | {"url":"http://www.physicsforums.com/showthread.php?t=480306","timestamp":"2014-04-16T10:24:53Z","content_type":null,"content_length":"22305","record_id":"<urn:uuid:47c5b16a-a23d-404a-b7ec-700f70034a3f>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00593-ip-10-147-4-33.ec2.internal.warc.gz"} |
Manassas, VA Science Tutor
Find a Manassas, VA Science Tutor
...I have also worked with high school students on Math and Science. While working on my Molecular Biology BS from Johns Hopkins University, I tutored college students on Math (including
Calculus) and Science (including Chemistry). I have worked with individual students and small groups. I like to...
40 Subjects: including genetics, chemistry, physical science, anatomy
...I had a 4.0 GPA in math as an undergraduate (graduating with more than twice the number of required credit hours in math). I also obtained a perfect score in Computer Science (20 out of 20)
while a student at the University Pierre and Marie Curie, Paris, France. I enjoy explaining math to others...
37 Subjects: including astronomy, physical science, physics, ACT Science
...I believe learning should be a rewarding and enjoyable experience, not a complicated or unhappy experience. I thoroughly enjoy working with students of all ages.I am currently certified in
numerous elementary school level subjects and available to work with students at the elementary school leve...
40 Subjects: including chemistry, phonics, Latin, trigonometry
...I have been an ESL teacher for 4 years. My kids are also in middle school so I know the books inside out and I have tutored for a long time. I have an ESL certification from London.
20 Subjects: including anatomy, writing, physiology, chemistry
...I acquired two national certifications early in 2013 to become a certified Intensive Care Unit RN (CCRN) and Post Anesthesia Care Unit RN (CPAN) and I just completed my Master's Degree in
Nursing Informatics in Dec of 2013. I plan to become board certified in Informatics in 3 months. My clinica...
2 Subjects: including nursing, NCLEX
| {"url":"http://www.purplemath.com/Manassas_VA_Science_tutors.php","timestamp":"2014-04-21T00:18:45Z","content_type":null,"content_length":"23829","record_id":"<urn:uuid:5e2b7867-23af-45ca-b5a1-f1ea20e91f97>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Normal Probability distribution to find true length of an object
July 17th 2010, 11:39 AM
The question is as follows:
The normal distribution is commonly used to model the variability expected when making measurements. In this context, a measured quantity x is assumed to have a normal distribution whose mean is
assumed to be the true value of the object being measured. The precision of the measuring instrument determines the standard deviation of the distribution.
a) If the measurements of the length of an object have a normal probability distribution with a standard deviation 1mm what is the probability that a single measurement will lie within 2mm of the
true length of the object?
b) Suppose the measuring instrument in part a is replaced witha a more precise measuring instrument having a standard deviation of .5mm What is the probability that a measurement from the new
instrument lies within 2mm of the true length of the object?
- What I think part a is asking is for me to find the probability of P(x-mu/sigma<x<x-mu/sigma) but I am confused because it also makes me think that it is asking P(-1<z<3) and I am just not sure
how to even approach solving this problem. I think that when it asks for the probability of 2mm of the true length that it is asking for the area between two sets of z values. Any help ,
examples, or explanations would be appreciated.
July 18th 2010, 12:31 AM
It is asking what is the probability of the absolute value of a RV with zero mean (the error in the measured length) and SD 1mm being less than 2mm. Which is the same thing in this case as asking
the probability that a standard normal RV lies between +/-2 SD.
In the second case, this translates to the same as asking the probability that a standard normal RV lies between +/-4 SD. | {"url":"http://mathhelpforum.com/advanced-statistics/151201-normal-probability-distribution-find-true-length-object-print.html","timestamp":"2014-04-19T04:34:56Z","content_type":null,"content_length":"6973","record_id":"<urn:uuid:9c6c44c1-cb10-4f74-a07e-956220e1a611>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
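The two probabilities above can be evaluated directly: for X ~ Normal(mu, sigma), P(|X - mu| < d) = erf(d / (sigma * sqrt(2))). A quick sketch:

```python
from math import erf, sqrt

def prob_within(d_mm, sigma_mm):
    """P(|X - mu| < d) for X ~ Normal(mu, sigma)."""
    return erf(d_mm / (sigma_mm * sqrt(2)))

print(round(prob_within(2, 1.0), 4))   # 0.9545  (part a: within 2 SD)
print(round(prob_within(2, 0.5), 5))   # 0.99994 (part b: within 4 SD)
```

These are the familiar +/-2 SD and +/-4 SD figures for the standard normal.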
La Marque Trigonometry Tutor
Find a La Marque Trigonometry Tutor
...They range from Differential Geometry to Ordinary differential Equations. I am well versed in this topic and can pull from a wide variety of real world examples that can help ease the
complicated problems. Through my years of experience teaching and tutoring all levels of math, from 6th grade ...
16 Subjects: including trigonometry, calculus, geometry, algebra 1
...More advanced topics such as vectors, polar coordinates, parametric equations, matrix algebra, conic sections, sequences and series, and mathematical induction can be covered. I can reteach
lessons, help with homework, or guide you through a more rigorous treatment of these topics. As needed, we can reinforce prerequisite topics from algebra and pre-algebra.
30 Subjects: including trigonometry, calculus, physics, geometry
...I like to focus on building a student's confidence and understanding over memorization. I taught algebra and geometry as a high school teacher. I also helped out students that were struggling
at the higher levels.
34 Subjects: including trigonometry, English, reading, chemistry
...I am very good at finding alternative ways to explain things. My ability to reach students at all levels is outstanding. In addition to my classroom experience, I have engaged in other jobs and
activities that involved teaching.
12 Subjects: including trigonometry, physics, geometry, statistics
...I am a professional engineer with tutoring experience in the Los Angeles School System. I can teach physics and math at any level. Physics and mathematics are difficult subjects.
10 Subjects: including trigonometry, calculus, algebra 2, physics | {"url":"http://www.purplemath.com/la_marque_tx_trigonometry_tutors.php","timestamp":"2014-04-21T07:08:18Z","content_type":null,"content_length":"24022","record_id":"<urn:uuid:1c9a31fa-a534-47e7-b16a-79fbce5cb04a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
Alpine, CA Prealgebra Tutor
Find an Alpine, CA Prealgebra Tutor
I am currently enrolled at Cuyamaca Community College and majoring in Statistics. I am proficient in elementary math, algebra I & II, geometry, precalculus, and calculus I/II/III. As well as
general physics, chemistry, and biology.
18 Subjects: including prealgebra, chemistry, calculus, physics
...I am a certified teacher in Kansas. I am certified in Elementary Education and Middle School Language Arts. I enjoy working with children of all ages.
18 Subjects: including prealgebra, reading, geometry, GED
...I used to hold events where I would cook healthy meals for up to fifty people every other week. I have trained cooking experience in Santa Cruz where I worked full time in a kitchen preparing
a wide variety of foods. I have spent much of my time at UCSD helping other UCSD students with their st...
10 Subjects: including prealgebra, calculus, geometry, algebra 1
...My tutoring has often produced success in those subjects. Until now, I have never been compensated for my tutoring; I have done it simply because I enjoy it. I love the feeling that I can help
someone succeed, and I am good at it.
26 Subjects: including prealgebra, chemistry, Spanish, physics
...Why you should consider me as a tutor? I have a solid foundation through my education and work experience in Math and Science. I have several years of experience tutoring grade school and high
school level kids, e.g. kids of friends and their friends.
2 Subjects: including prealgebra, algebra 1
| {"url":"http://www.purplemath.com/Alpine_CA_prealgebra_tutors.php","timestamp":"2014-04-20T16:30:52Z","content_type":null,"content_length":"23714","record_id":"<urn:uuid:25746843-dc7f-43a0-b5e8-1c57684515cb>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
Congrats, Jack! Reminiscences and Appreciation
July 10, 2001
John Todd "devoted at least eighty of his ninety years to the cause of mathematics and to its applications," said Philip Davis, an invited speaker at the Caltech conference held in May to honor Todd
on his 90th birthday. (Photo by Sarah Emery Bunn, Caltech)
Philip J. Davis
A two-day conference, Numerical Analysis, Linear Algebra and Computations, was held at the California Institute of Technology, May 16-17, to celebrate the 90th birthday of John Todd. Philip Davis,
whose association with Todd dates back to the time just after World War II, looked back on those early days of computing and numerical analysis in the first invited talk at the conference. What
follows is an adapted version of that talk.
We gathered in Pasadena to honor a nonagenarian of international reputation who has devoted at least eighty of his ninety years to the cause of mathematics and to its applications. There is an old
proverb that asks: "Who is honored?" The answer given is: "He who honors other people." So I was greatly honored by the opportunity to pay tribute to John Todd. It is by no means an easy matter to
select out the high points of a long professional career, but for this occasion it seemed important that I make the attempt.
Jack Todd was born in 1911. He received his BSc at Queens College Belfast, Northern Ireland, in 1931. During World War II, he was with the Mine Design Department in Portsmouth, England, and later
with the Admiralty Computing Service in London. In 1939, just before the war began, Jack and Olga Taussky were married. In 1946, he gave his first course in numerical mathematics at King's College
London. In 1949, John Curtiss hired him to head the Computation Laboratory at the National Bureau of Standards (now NIST) in Washington, and from 1954 to 1957 he was chief of the Numerical Analysis
Section. He moved to Caltech as a professor of mathematics in 1957 and has been there since that time.
If you check the Web site for this conference,* you will find a complete list of Jack's scientific work, which spans algebra, analysis, computation, special functions, history, biography, and many
other topics. You will also find an interesting account of how, late in World War II, he and another British officer were the first to occupy the buildings of the Mathematisches Forschungsinstitut in
Oberwolfach, and how their occupation saved the institute from being destroyed. Anyone who has profited from a stay at Oberwolfach owes a belated debt of gratitude to Jack.
One can speak generically about a person, and by that I mean provide the standard contents of a curriculum vitae---material that is now often webified. But I had in mind to say something personal: how
Jack Todd impacted my life. And for this reason, this piece contains a bit of self-revelation. Olga Taussky-Todd subtitled her autobiography "the truth, nothing but the truth, but not all the truth"
[1]. And here I must do likewise.
I received my bachelor's degree in mathematics in the middle of World War II. Shortly thereafter, I was inducted into the Air Force, placed on reserve status, and given a position at NACA, the
precursor of NASA, at Langley Field in Virginia. My job was in NACA's Aircraft Loads Division, which studied dynamic loads on the components of fighter aircraft during a variety of maneuvers. I was
partially a computer---working with the raw data provided by flight instrumentation, accelerometers, and pressure gauges---and partially an interpreter of what I had computed. My colleagues and I
worked with slide rules, planimeters, nomograms, and various electromechanical calculators (Marchants, Friedens, and so forth). And we had one other mathematical aid. Even as Jack Todd reported in
his History of Computation [2] how, in his war work in the British Mine Design Department, he had used the (American) WPA tables of special functions, so also did we make substantial use of these and
other tables computed some years before.
Reciprocally, we got numerous reports from U.K. laboratories (all stamped "confidential" or classified at even higher levels). These reports were circulated widely, and I recall reading one on
eigenvalue computation authored by Olga Taussky. It must have dealt with the Gershgorin circle theorem.
How difficult, how tedious, how time-consuming it was in those days to solve a second-order linear differential equation with constant coefficients but with a graphically given right-hand side that
represented the pilot's action. My first published paper reduced, computationally speaking, to this---a task that I would guess is now performed routinely in nanoseconds.
Although I had had no college courses in numerical methods, I didn't come into my job at NACA totally green. In high school I had studied (with no deep understanding, I can assure you) a thin book by
David Gibb entitled Interpolation and Numerical Integration (1915), which derived from E.T. Whittaker's Mathematical Laboratory at the University of Edinburgh. (Incidentally, a full-page picture of
Whittaker can be found at the beginning of Nash's collection of articles in which Jack's History appeared.)
The war over, I returned to graduate school and did a thesis in pure mathematics under the supervision of Ralph Boas. The subject: uniqueness theorems for infinite interpolatory systems as applied to
entire analytic functions of exponential type. This was, by the way, a subject in which Edmund Whittaker's son, J.M. Whittaker, had specialized, producing a Cambridge monograph.
After several years of working on contracts and grants for Stefan Bergman, I accepted a position in the Numerical Analysis Section of NBS in Washington. I recall the snooty disdain, the lifted
eyebrows of my contemporaries when they heard I'd accepted a job at a government laboratory. The established wisdom in those days---and it lingers on---was that the only allowable career for a
mathematician was to prove theorems in a university environment. What served as significant credentiation for the place, as I began to consider the possibility, was the fact that John Curtiss was
then head of mathematics at NBS, that he had written his thesis under J.L. Walsh, and that I had co-authored several papers with Walsh.
In the decade beginning in about 1948, NBS was surely one of the principal places in the world studying and investigating numerical methods, taking into account the potentialities of the new
computing machines. The National Physical Laboratory in England was another such place. When I arrived at NBS, I found several senior mathematicians: Jack and Olga Todd, Milton Abramowitz, Irene
Stegun, Churchill Eisenhart, Ida Rhodes, Ted Motzkin, Ky Fan. Among my contemporaries were Alan Hoffman, Morris Newman, Karl Goldberg, Henry Antosiewicz, and a bit later, Philip Rabinowitz, Walter
Gautschi, Peter Henrici, John Rice, Marvin Marcus, Emilie Haynsworth, Joan Rosenblatt, Marvin Zelen.
There was a steady stream of visitors, often from abroad, all mathematicians of the first class with a deep interest in computation. I can cite Eduard Stiefel, Helmut Wielandt, Alexander Ostrowski,
and J.L. Synge. The division had an advisory committee among whose members Mina Rees and Marc Kac played an active role in suggesting names of visitors to invite.
The factors and forces, the people that influence a career, are often difficult to perceive at the time. Many of us, I'm sure, have had this experience: Someone knocks on your door, and a person you
do not recognize enters. You look perplexed. He or she then says, "You don't remember me, Professor, but I took your course twenty-five years ago, and it really turned my life around." Embarrassed,
you say, "Why yes, of course. Come in; sit down and tell me what you've been doing all this while."
For four years---from 1952 to 1957---Jack Todd was my boss. And I use the word "boss" in the "weak topology," for what he did was great. Over and above certain contractual obligations, which paid for
my bread, he left me alone. He left me alone to interact with the great group that was assembled around him and then to do my own thing.
And I did interact with many of the people just mentioned. A number became very good friends: Phil Rabinowitz, Emilie Haynsworth, Alexander Ostrowski. An interest in matrix theory developed (sparked
by Jack and Olga, for matrix theory was perhaps the strong suit of the group). I left NBS because I wanted to write, and found by hard experience that a government agency was by no means the ideal
place to write.
In the mid-fifties, computers and computation laboratories were beginning to appear at universities where previously only a single, heavy, rusty, dusty, adding machine might have been found in the
office of the professor of astronomy. Jack conceived the idea of running a training program for future directors of academic computing labs. He received NSF support for the program, which was held in
1957, with great success. As an inheritance from Jack, I repeated his formula in 1959 and was equally gratified by the results.
Mathematics is probably as old as civilization itself; it has always played a role in the ways in which people deal with one another, with discovery, with the arrangement and incorporation of
experience into a coherent, interpretable, occasionally predictable system. Numerical analysis, which is probably where mathematics began three or four millennia ago, has itself changed mightily in
Jack Todd's and all of our lifetimes. How many of the tools of yesteryear have become quaint and obsolete! In 1911, numerical analysis was an accumulation of recipes and advice: how to deal with
squared paper and what checks to make. It is amusing to read David Gibb's words from 1915:
"Each desk [in the Mathematical Laboratory at the University of Edinburgh] is equipped with a copy of Barlow's Tables, a copy of Crelle's Tables which gives at sight the product of any two numbers
less than 1000. For the neat and methodical arrangement of the work computing paper is essential. . . . It will be found conducive to speed and accuracy if, instead of taking down a number one digit
at a time, the computer takes it down two digits at a time."
Numerical analysis is now a full-blown theoretical subject that also incorporates the accumulated wisdom of hundreds of thousands of experimental runs, all of whose ideas have diffused and filtered
into packages of scientific computation.
Paraphrasing John Milton, we can assert: "They also serve who only make experimental runs." I'm sure that Cleve Moler, himself a student of Jack Todd, would agree with me that in the practical sense
the very best numerical analysis is now to be found in Matlab and other such packages, and not on the pages of textbooks.
Our lives are now increasingly mathematized, and there is every indication that this tendency will continue unabated for the foreseeable future. To the average person, these mathematizations are not
visible: They are hidden in age-old practices that we take for granted, or they lie buried deep in computer programs and chips.
Mathematics is the backbone of much that is new, surprising, utilitarian, aesthetic, and occasionally regrettable about our contemporary world. Take, for example, modern communications and the Web.
The Web site for this conference contains much information about the venue and agenda of the conference. It contains much detail about, even many photos of, the man whose career we are celebrating. I
can recommend it to all here.
The organizing committee could have set up a celebratory chat-site conference, attended by hundreds. No airplane or hotel bookings would be necessary, no jet lag. No cancellations or other inconveniences.
But this would not have done at all. For mathematics, even through the latest communication schemes, is at a loss to summarize an individual, or the career of an individual, or the impact of the
individual on other individuals or on the larger social and scientific world, and to do it in a specified number of bytes. Yes, using mathematics, we can make a complete genetic analysis; but
mathematics is powerless to describe fully what the eye sees or the ear hears.
And that is why we gathered at Caltech---to learn, and judge, and praise, and come away refreshed and built up. And we came together to pay tribute to a man who has devoted his life to the
furtherance of our profession.
[1] O. Taussky-Todd, An autobiographical essay: The truth, nothing but the truth, but not all the truth, in Mathematical People, D.J. Albers and G.L. Alexanderson, eds., Birkhauser, Boston, 1985.
[2] J. Todd, The prehistory and early history of computation at the NBS, in A History of Scientific Computation, S.G. Nash, ed., Addison Wesley, 1990, 251-268.
See http://www.cacr.caltech.edu/todd and http://www.cacr.caltech.edu/todd/pictures/conference/.
Philip J. Davis, professor emeritus of applied mathematics at Brown University, is an independent writer, scholar, and lecturer. He lives in Providence, Rhode Island and can be reached at | {"url":"http://www.siam.org/news/news.php?id=561","timestamp":"2014-04-21T10:50:00Z","content_type":null,"content_length":"20461","record_id":"<urn:uuid:da0a5be0-4e1f-4faa-ac83-884829fcaaac>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00467-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Invalid Lags message - gmm, a system of two simultaneous equations
From "Brian P. Poi" <bpoi@stata.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Invalid Lags message - gmm, a system of two simultaneous equations
Date Mon, 15 Feb 2010 08:22:16 -0600 (CST)
On Mon, 15 Feb 2010, Ari Dothan wrote:
Dear Statalist participants,
The two equations are:
(1) C_t = B1*P_{t-1} + B2*C_{t-1} + B3*A_{t-1} + B4*CM_{t-1} + alpha + u_{it}
(2) P_t = B5*C_{t-1} + B6*P_{t-1} + B7*PM_{t-1} + alpha1 + v_{it}
The first stage equation alone in Stata code:
gmm (D.C_t - {rho}*LD.C_t - {xb: LD.P_t LD.A_t LD.CM_t}), ///
    xtinstruments(C_t, lags(2/4)) instruments(LD.P_t LD.A_t LD.CM_t, noconstant) ///
    deriv(/rho = -1*LD.C_t) deriv(/xb = -1) winitial(xt D) onestep
Both stages are combined in Stata code:
gmm (eq1: D.C_t - {rho}*LD.C_t - {xb: LD.P_t LD.A_t LD.CM_t}) ///
    (eq2: D.P_t - {rho1}*LD.P_t - {xc: LD.C_t LD.PM_t}), ///
    xtinstruments(eq1: C_t, lags(2/4)) xtinstruments(eq2: P_t, lags(2/4)) ///
    instruments(eq1: LD.P_t LD.A_t LD.CM_t) instruments(eq2: LD.C_t LD.CM_t) ///
    deriv(eq1: /rho = -1*LD.C_t) deriv(eq2: /rho1 = -1*LD.P_t) ///
    deriv(eq1: /xb = -1) deriv(eq2: /xc = -1) ///
    winitial(xt D) wmatrix(robust) onestep
I am aware of the fact that I am trying to run 2 level equations and two
difference equations, and hope that I understand the procedure as explained
in the Stata manual on gmm.
The problem is that I get an "invalid lags" message.
With dynamic panel models, -gmm- allows you to specify at most two equations. If you specify two equations, then one equation should be in differenced form and the other in levels form. In that case
the winitial() option should either be winitial(xt DL) or winitial(xt LD) depending on whether you specify the differenced or level equation first.
-- Brian Poi
-- bpoi@stata.com
Applications of Automata Theory
Automata theory is the basis for the theory of formal languages. A proper treatment of formal language theory begins with some basic definitions:
• A symbol is simply a character, an abstraction that is meaningless by itself.
• An alphabet is a finite set of symbols.
• A word is a finite string of symbols from a given alphabet.
• Finally, a language is a set of words formed from a given alphabet.
The set of words that form a language is usually infinite, although it may be finite or empty as well. Formal languages are treated like mathematical sets, so they can undergo standard set theory
operations such as union and intersection. Additionally, operating on languages always produces a language. As sets, they are defined and classified using techniques of automata theory.
Formal languages are normally defined in one of three ways, all of which can be described by automata theory:
• regular expressions
• standard automata
• a formal grammar system
Regular Expressions Example
alphabet A1 = {a, b}
alphabet A2 = {1, 2}
language L1 = the set of all words over A1 = {a, aab, ...}
language L2 = the set of all words over A2 = {2, 11221, ...}
language L3 = L1 ∪ L2
language L4 = {a^n | n is even} = {aa, aaaa, ...}
language L5 = {a^n b^n | n is natural} = {ab, aabb, ...}
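These definitions are easy to make executable. A small Python sketch (the helper names are ours) checks membership in L4 and L5; note that L5 is the classic example of a language that no regular expression can describe:

```python
import re

def in_L4(word):
    """L4 = {a^n | n is even}: an even-length (possibly empty) run of a's."""
    return re.fullmatch(r"(aa)*", word) is not None

def in_L5(word):
    """L5 = {a^n b^n | n >= 1}: equal counts, all a's before all b's.

    This language is not regular (the matching counts require at least
    a context-free device), so we check it directly rather than with re.
    """
    n = len(word) // 2
    return n >= 1 and word == "a" * n + "b" * n

print(in_L4("aaaa"), in_L5("aabb"), in_L5("aab"))  # True True False
```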
Languages can also be defined by any kind of automaton, like a Turing Machine. In general, any automaton or machine M operating on an alphabet A can produce a perfectly valid language L. The system could be represented by a bounded Turing Machine tape, for example, with each cell representing a word. After the instructions halt, any word with value 1 (or ON) is accepted and becomes part of the generated language. From this idea, one can define the complexity of a language, which can be classified as P or NP, exponential, or probabilistic, for example.
Noam Chomsky extended the automata theory idea of complexity hierarchy to a formal language hierarchy, which led to the concept of formal grammar. A formal grammar system is a kind of automata
specifically defined for linguistic purposes. The parameters of formal grammar are generally defined as:
• a set of non-terminal symbols N
• a set of terminal symbols Σ
• a set of production rules P
• a start symbol S
Grammar Example
start symbol = S
non-terminals = {S}
terminals = {a, b}
production rules: S → aSb, S → ba
S → aSb → abab
S → aSb → aaSbb → aababb
L = {abab, aababb, ...}
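The two derivations shown can be replayed mechanically. A minimal Python sketch (the helper name is ours) rewrites the leftmost S with each production of S → aSb, S → ba in turn:

```python
def derive(max_steps):
    """Generate words of L(G) for S -> aSb | ba by repeated leftmost rewriting."""
    words, forms = [], ["S"]
    for _ in range(max_steps):
        next_forms = []
        for form in forms:
            i = form.find("S")
            if i < 0:
                words.append(form)  # no nonterminals left: a word of L
                continue
            # apply each production rule to the leftmost S
            next_forms.append(form[:i] + "aSb" + form[i + 1:])
            next_forms.append(form[:i] + "ba" + form[i + 1:])
        forms = next_forms
    return words

print(derive(4))  # ['ba', 'abab', 'aababb'] -- matches the example derivations
```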
As in purely mathematical automata, grammar automata can produce a wide variety of complex languages from only a few symbols and a few production rules. Chomsky's hierarchy defines four nested
classes of languages, where the more precise classes have stricter limitations on their grammatical production rules.
The formality of automata theory can be applied to the analysis and manipulation of actual human language as well as the development of human-computer interaction (HCI) and artificial intelligence
Biology

To the casual observer, biology is an impossibly complex science. Traditionally, the intricacy and variation found in life science has been attributed to the notion of natural selection. Species
become "intentionally" complex because it increases their chance for survival. For example, a camoflauge-patterned toad will have a far lower risk of being eaten by a python than a frog colored
entirely in orange. This idea makes sense, but automata theory offers a simpler and more logical explanation, one that relies not on random, optimizing mutations but on a simple set of rules.
Basic automata theory shows that simplicity can naturally generate complexity. Apparent randomness in a system results only from inherent complexities in the behavior of automata, and seemingly
endless variations in outcome are only the products of different initial states. A simple mathematical example of this notion is found in irrational numbers. The square root of nine is just 3, but
the square root of ten has no definable characteristics. One could compute the decimal digits for the lifetime of the universe and never find any kind of recurring pattern or orderly progression;
instead, the sequence of numbers seems utterly random. Similar results are found in simple two-dimensional cellular automata. These structures form gaskets and fractals that sometimes appear
orderly and geometric, but can resemble random noise without adding any states or instructions to the set of production rules.
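That gasket behavior is easy to reproduce. As an illustrative sketch (the choice of example is ours), the one-dimensional elementary cellular automaton known as Rule 90, in which each cell becomes the XOR of its two neighbors, grows a Sierpinski-gasket pattern from a single ON cell:

```python
def rule90(row):
    """One step of Rule 90: each cell becomes the XOR of its two neighbors.

    The row wraps around at the edges; pick a width large enough that the
    pattern never reaches the boundary during the run.
    """
    return [row[i - 1] ^ row[(i + 1) % len(row)] for i in range(len(row))]

width, steps = 33, 16
row = [0] * width
row[width // 2] = 1  # single ON cell as the initial condition
for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    row = rule90(row)
```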
The most classic merging of automata theory and biology is John Conway's Game of Life. "Life" is probably the most frequently written program in elementary computer science. The basic structure of
Life is a two-dimensional cellular automaton that is given a start state of any number of filled cells. Each time step, or generation, switches cells on or off depending on the state of the cells
that surround it. The rules are defined as follows:
• All eight of the cells surrounding the current one are checked to see if they are on or not.
• Any cells that are on are counted, and this count is then used to determine what will happen to the current cell:
1. Death: if the count is less than 2 or greater than 3, the current cell is switched off.
2. Survival: if (a) the count is exactly 2, or (b) the count is exactly 3 and the current cell is on, the current cell is left unchanged.
3. Birth: if the current cell is off and the count is exactly 3, the current cell is switched on.
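The three rules translate almost line-for-line into code. A minimal sketch (the representation choices are ours), storing the ON cells as a set of coordinates so cells outside the pattern count as off:

```python
from collections import Counter

def life_step(grid):
    """One generation of Conway's Life; grid is a set of (x, y) ON cells."""
    # Count, for every location, how many ON cells surround it.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in grid
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth: off cell with count 3. Survival: count 3, or count 2 while on.
    # Everything else (count < 2 or > 3, or off with count 2) stays/goes off.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in grid)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(life_step(blinker)))                 # [(1, 0), (1, 1), (1, 2)]
print(life_step(life_step(blinker)) == blinker)   # True
```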
Like any manifestation of automata theory, the Game of Life can be defined using extremely simple and concise rules, but can produce incredibly complex and intricate patterns.
In addition to the species-level complexity illustrated by the Game of Life, complexity within an individual organism can also be explained using automata theory. An organism might be complex in its
full form, but examining constituent parts reveals consistency, symmetry, and patterns. Simple organisms, like maple leaves and star fish, even suggest mathematical structure in their full form.
Using ideas of automata theory as a basis for generating the wide variety of life forms we see today, it becomes easier to think that sets of mathematical rules might be responsible for the
complexity we notice every day.
Inter-species observations also support the notion of automata theory instead of the specific and random optimization in natural selection. For example, there are striking similarities in patterns
between very different organisms:
• Mollusks and pine cones grow by the Fibonacci sequence, reproducible by math.
• Leopards and snakes can have nearly identical pigmentation patterns, reproducible by two-dimensional automata.
With these ideas in mind, it is difficult not to imagine that any biological attribute can be simulated with abstract machines and reduced to a more manageable level of simplicity.
Other Applications
Many other branches of science also involve unbelievable levels of complexity, impossibly large degrees of variation, and apparently random processes, so it makes sense that automata theory can
contribute to a better scientific understanding of these areas as well. The modern-day pioneer of cellular automata applications is Stephen Wolfram, who argues that the entire universe might
eventually be describable as a machine with finite sets of states and rules and a single initial condition. He relates automata theory to a wide variety of scientific pursuits, including:
• Fluid Flow
• Snowflake and crystal formation
• Chaos theory
• Cosmology
• Financial analysis | {"url":"http://cs.stanford.edu/people/eroberts/courses/soco/projects/2004-05/automata-theory/apps.html","timestamp":"2014-04-17T06:55:10Z","content_type":null,"content_length":"11753","record_id":"<urn:uuid:59d23668-cd27-4d11-8f56-739345fe7314>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00250-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fundamental group of the moduli stack of elliptic curves
up vote 17 down vote favorite
I've heard that the étale fundamental group of the moduli stack of elliptic curves (over $\mathbb{Z}$) is trivial. Is there an easy proof of that? (Note that there are plenty of étale covers once one
inverts a prime $p$, given by taking elliptic curves with some form of a level $p^n$ structure.)
More generally, I'd be interested in how it works out for the stack of cubic curves which are allowed either to be smooth or to have a nodal singularity.
I believe the paper `Horizontal divisors on arithmetic surfaces associated with Belyĭ uniformizations' by Ihara proves that the fundamental group of $\mathbb{P}^1\setminus \{0,1,\infty\}_{\mathbb{Z}}$ is trivial. I'm not sure because I rarely think about fundamental groups of stacks, but the result you want might follow from this. – Minhyong Kim Aug 19 '12 at 17:36
Upon further thought, maybe not. – Minhyong Kim Aug 19 '12 at 17:53
I dunno, that implication sounds reasonable to me! – JSE Aug 19 '12 at 20:38
Yes, there is a proof which is long but one might consider to be "easy" after digesting it.
Let's first show that the question (of triviality of connected finite etale covers) for the moduli stack $M_1$ is equivalent to its counterpart for the "Deligne-Rapoport" compactification $
\overline{M}_1$ (a regular proper DM stack), since the method of the harder direction will be used in our argument for the case of $M_1$. The easier direction is that if the case of ${M}_1$
is known then we can settle the case of $\overline{M}_1$. It suffices to show that if a normal noetherian DM (or Artin) stack $X$ has a dense open substack $U$ with no nontrivial connected
finite etale cover then the same holds for $X$. It suffices to show more generally that if $U$ is a dense open substack of $X$ and $X' \rightarrow X$ is a finite etale cover and $s:U \
rightarrow X'|_U$ is a section over $U$ then $s$ uniquely extends over $X$. The uniqueness allows us to work over a smooth scheme chart, so we're reduced to the well-known case when $X$ is
a scheme (though one can also adapt to the case of stacks the proof in the scheme case via chasing connected components).
The more interesting direction is the converse: the case of $\overline{M}_1$ implies the case of $M_1$. For this we just have to show that for any connected finite etale cover $T$ of $M_1$
the connected finite normalization $\overline{T} \rightarrow \overline{M}_1$ (which is flat since $\overline{M}_1$ is a regular DM stack of dimension 2) is etale around the closed
complement $\infty$ of $M_1$ in $\overline{M}_1$. Since $\overline{M}_1$ is regular and $\infty$ is a relative Cartier divisor that is regular with generic characteristic 0, it follows from
the relative Abhyankar Lemma (Exp. XIII, SGA1 for schemes, easily adapts to DM stacks by usual etale-localization stuff) that $\overline{T}$ is "relatively tamely ramified" along $\infty$.
Thus, the etale-local structure near $\infty$ is given by an $e$th-root extraction of a local generator of the ideal of the connected substack $\infty$ with $e$ a unit along $\infty$. But
(as in Saito's argument mentioned by Minhyong) ${\rm{Spec}} \mathbf{Z}$ supports points of every possible prime residue characteristic, so $e$ has no prime factors and hence $e = 1$, so $\
overline{T}$ is etale over $\overline{M}_1$ as desired.
Now we directly attack the case of $M_1$ (though one can also solve the case of $\overline{M}_1$ directly in a more illuminating "topological" manner over $\mathbf{C}$ after some
preliminaries with the triviality of $\pi_1({\rm{Spec}}(\mathbf{Z}))$ to handle geometric connectivity of connected components, but that gets caught up in "foundational" issues related to
analytification of stacks). We shall initially work with the open substack $M$ of elliptic curves with $j \ne 0, 1728$ on fibers; i.e., the stack of elliptic curves whose automorphism
scheme is the constant group $\langle \pm 1 \rangle$ (exercise: equivalent to impose this condition on automorphism groups of geometric fibers, since that constant group has no nontrivial
automorphisms); one might say this is the moduli stack of elliptic curves with "no extra automorphisms". I claim that $M$ has exactly one nontrivial connected finite etale cover, a scheme
cover of degree 2, and we'll use this to bootstrap to get a handle on the entire moduli stack.
Let $Y$ be the open subscheme of $\mathbf{A}^1_{\mathbf{Z}} = {\rm{Spec}}(\mathbf{Z}[j])$ defined by $j(j-1728)$ being a unit. Early in Silverman's first book on elliptic curves you'll find
an elliptic curve $E$ over $Y$ with $j$-invariant $j$, and I claim that the corresponding morphism $f:Y \rightarrow M$ is a finite etale $\mathbf{Z}/(2)$-torsor. The $j$-invariant of the
universal elliptic curve over $M$ defines a morphism $j:M \rightarrow Y$, so it makes sense to form the elliptic curve $j^{\ast}(E)$ over $M$. One sees that $f$ is precisely the Isom-stack
between $j^{\ast}(E)$ and the universal elliptic curve over $M$, and this Isom-stack is a torsor for the automorphism functor of the universal elliptic curve over $M$, which is to say for
the constant group $\mathbf{Z}/(2)$ over $M$, because any two elliptic curves with no extra automorphisms are isomorphic etale-locally on the base if they have the same $j$-invariant
(exercise in deformation theory, etc.).
Let's grant that $\pi_1(Y) = 1$, and see how to conclude. Then at the end we will prove $\pi_1(Y) = 1$. Consider a connected finite etale cover $q:M' \rightarrow M$ of degree $> 1$. I claim
it is isomorphic to $f$. Consider the pullback $Y' \rightarrow Y$ of $q$ along $f:Y \rightarrow M$. Since $Y$ has trivial fundamental group, this pullback splits as a disjoint union of
copies of $Y$. Choosing such a component of the pullback defines a morphism $s:Y \rightarrow M'$ over $M$. But $f$ and $q$ are finite etale maps, so $s$ is also a finite etale map. But $M'$
is connected, so the open and closed image of $s$ is full; i.e., $s$ identifies $Y$ as a finite etale cover of $M'$, so we conclude that $M'$ is sandwiched inside the degree-2 finite etale
cover $f$. But $q$ has degree $> 1$, so it follows that $q = f$ as desired.
OK, now we can solve the original problem for $M_1$ (conditional on the triviality of $\pi_1(Y)$). The moduli stack $M_1$ is regular and connected with $M$ a dense open substack that we
have just seen has exactly one nontrivial connected finite etale cover. Let $M'_1 \rightarrow M_1$ be a connected finite etale cover with degree $> 1$. We seek a contradiction. The
restriction over $M$ is a finite etale cover $M' \rightarrow M$ of degree $> 1$, and since $M'$ is open in the connected regular stack $M'_1$ it must also be connected. Thus, $M'$ is
$M$-isomorphic to $Y$ (over $M$ via $f$). Hence, the elliptic curve $E$ over $Y$ extends to an elliptic curve $E'_1$ over $M'_1$ (namely, the pullback of the universal elliptic curve over
The integral structure has done its job, and now to get the contradiction we consider a connected etale scheme neighborhood $(S,s)$ of a point $\xi$ with $j=0$ (or $j=1728$) on the DM stack
$M'_1$ considered over $\mathbf{Q}$. We extend $S$ to a smooth connected complete curve $\overline{S}$ (with constant field that might be larger than ${\mathbf{Q}}$, but that won't matter
for the ramification considerations we are about to undertake). Clearly $\overline{S}$ is a finite flat cover of the projective $j$-line over $\mathbf{Q}$ and at the point $s$ over $j=0$
(or $j=1728$) it has ramification degree 4 or 6 (I can't remember which is which) because of etaleness over $M_1$ and the fact that $M_1$ over the $j$-line has ramification over $j=0$ (or
$j=1728$) equal to 4 or 6 (due to deformation theory considerations). By design, there is an elliptic curve over the open curve $S$ (namely, the pullback of the elliptic curve $E'_1$ over
$M'_1$ that extends the elliptic curve $E$ over $Y$) whose discriminant in the function field of $\overline{S}$ is $(j(j-1728))^{-1}$ (well-defined up to 12th powers of nonzero elements, of
course). But the "good reduction" at $s \in S$ forces the discriminant of any model over the function field of $S$ to have valuation at $s$ that is a multiple of 12, whereas for $(j
(j-1728))^{-1}$ this valuation is $-4$ or $-6$ (since $S$ at $s$ has ramification over the $j$-line equal to 4 or 6). This is a contradiction, so $M_1$ has no nontrivial connected finite
etale cover, assuming $\pi_1(Y) = 1$.
Finally, we prove $\pi_1(Y) = 1$. Note that $Y$ is the open complement in $\mathbf{P}^1_{\mathbf{Z}}$ of the union of the sections $\infty$, $j=0$, and $j=n$ with $n = 1728$. We will now
work with any nonzero integer $n$. Since $\infty$ is disjoint from the others, by Saito's argument with the relative Abhyankar's Lemma as explained above, we see that any finite etale cover
of $Y$ has normalization over that projective $j$-line over $\mathbf{Z}$ that is etale over $\infty$. Hence, it suffices to show that the open complement of $j(j-n)=0$ in $\mathbf{P}^1_{\
mathbf{Z}}$ has trivial $\pi_1$. Making the change of coordinates $t = 1/j$ (which moves $j = 0$ out to $\infty$), this open complement is identified with the open complement $U_n$ in the
affine $t$-line $\mathbf{A}^1_{\mathbf{Z}}$ of the locus $nt=1$. So it is enough to prove that $\pi_1(U_n) = 1$. Equivalently, we claim that $U_n$ has no nontrivial Galois connected finite
etale covers. This will rest on three special facts about $\mathbf{Z}$: the triviality of $\pi_1({\rm{Spec}}(\mathbf{Z}))$, the triviality of ${\rm{Pic}}(\mathbf{Z})$, and the smallness of
the group of roots of unity in $\mathbf{Z}$. Beware that $U_n(\mathbf{Z})$ is empty when $n \not\in \mathbf{Z}^{\times}$.
Let $h:V \rightarrow U_n$ be a Galois connected finite etale cover with degree $> 1$, so over $\mathbf{Q}$ we get a nontrivial connected finite etale cover $V'$ of the $\mathbf{Q}$-fiber
$U'_n$ of $U_n$. We first claim that $V'$ must be geometrically connected over $\mathbf{Q}$. Since we're in characteristic 0, this amounts to the condition that $V'$ has constant field $\
mathbf{Q}$. If we let the number field $K$ be its constant field then by normality of $V$ it follows that $h$ factors through $(U_n)_{O_K}$, with $V \rightarrow (U_n)_{O_K}$ necessarily
surjective. This forces $(U_n)_{O_K}$ to be etale over $U_n$ (since $h$ is a finite etale cover), so since $U_n$ is fpqc over ${\rm{Spec}}(\mathbf{Z})$ it follows that ${\rm{Spec}}(O_K)$ is
etale over ${\rm{Spec}}(\mathbf{Z})$. This forces $K = \mathbf{Q}$, as desired.
By the coordinate change $x = t/n$ we identify $U'_n$ with ${\rm{GL}}_1$, so $V'$ is a geometrically connected cover of ${\rm{GL}}_1$ over $\mathbf{Q}$. Thus, for $d = {\rm{deg}}(h) > 1$ we
see that over an algebraically closed extension $k$ of $\mathbf{Q}$ the map $h_k$ is identified with the endomorphism $x^d$ of ${\rm{GL}}_1$. That is, the map $h'$ induced by $h$ between $\
mathbf{Q}$-fibers is a "$\mathbf{Q}$-form" of the $\mu_d$-torsor ${\rm{GL}}_1$ over ${\rm{GL}}_1 = U'_n$. The set of isomorphism classes of such forms is given by $${\rm{H}}^1({\rm{GL}}_1,\
mu_d) = (\mathbf{Q}^{\times}/({\mathbf{Q}}^{\times})^d) \times x^{\mathbf{Z}/d\mathbf{Z}}$$ (since ${\rm{GL}}_1$ has trivial Pic and has unit group $\mathbf{Q}^{\times} x^{\mathbf{Z}}$).
Explicitly, for $q \in \mathbf{Q}^{\times}$ and $j \in \mathbf{Z}$ the finite etale cover of ${\rm{GL}}_1 = U'_n$ associated to the class of $(q,j \bmod d)$ is given by the covering
equation $y^d = q x^j$. As a covering of ${\rm{GL}}_1$ with coordinate $x$, this has geometric covering group $\mu_d(\overline{\mathbf{Q}})$ via scaling on $y$, so by inspection this
geometric covering group action is not defined over $\mathbf{Q}$ (i.e., the automorphism group scheme for the covering is not a constant group over $\mathbf{Q}$, or in other words not all
of these geometric automorphisms are defined over $\mathbf{Q}$) except when $d = 2$. Ah, but recall that we arranged for $h$ to be a Galois covering, so in our setting with the covering
$h'$ the geometric covering group must be defined entirely over $\mathbf{Q}$ (as a constant group). In particular, $h$ must have degree $d = 2$. Also, the geometric connectedness of the
covering forces ${\rm{gcd}}(j,d) = 1$.
To summarize, we have proved that $V \rightarrow U_n$ viewed over $\mathbf{Q}$ is given by $y^2 = q(t/n)$ for some $q \in \mathbf{Q}^{\times}$. By changing $y$ by a $\mathbf{Q}^{\times}
$-scaling (as we may certainly do), we can change $q$ by any square multiple we wish, so we can arrange that $q/n$ is equal to a squarefree integer $r$. Then $V$ is identified with the
normalization of $\mathbf{Z}[t][1/(nt-1)]$ in the $(nt-1)$-localization of $\mathbf{Z}[y,t]/(y^2 - rt)$. Using that $r$ is a squarefree integer, we claim that $\mathbf{Z}[y,t]/(y^2 - rt)$
is normal (in contrast with the situation for $\mathbf{Z}[y]/(y^2 - r)$ when $r$ is odd!). This is clear after inverting 2, so by Serre's homological criterion the only issue is to check
the normality at the generic points in characteristic 2, which is to say that the maximal ideal of the local ring at these points is principal. If $r$ is even (so it is twice an odd
integer) then $y$ lies in such primes, so $(2,y)$ is the only such prime and $t$ isn't in this prime. Hence, in such cases $y$ is a local generator (as the equation $rt = y^2$ with $t$ a
local unit and $r$ twice an odd integer makes $2$ a local unit multiple of $y^2$). If $r$ is odd then $(2)$ is itself prime because the reduction of $y^2 - rt$ modulo 2 is the element $y^2
- t \in \mathbf{F}_2[t,y]$ that is irreducible.
We conclude that
$$V = {\rm{Spec}}(\mathbf{Z}[y,t]/(y^2 - rt))_{nt-1}$$ over $U_n = {\rm{Spec}}(\mathbf{Z}[t])_{nt-1}$. Ah, but this is not etale over $U_n$, since passing to characteristic 2 turns this
into a dense open piece of a purely inseparable quadratic cover in characteristic 2. Contradiction, so $\pi_1(U_n) = 1$. QED
I hadn't gone through things carefully as you did, but this is the kind of issue I was worried about. I seem to recall that the disjointness of the divisors was rather important in
Takeshi Saito's argument cited in Ihara's paper. But I can't remember how it goes anyways. – Minhyong Kim Aug 20 '12 at 2:46
Thanks! This is an awesome answer. Even modulo the $j/1728$ thing it was very helpful, and I'm accepting it. – Akhil Mathew Aug 20 '12 at 14:35
I have fixed the earlier incomplete analysis of $\pi_1(Y)$, so now its triviality is completely proved. (The comments above refer to the earlier version with a gap that is now gone.) –
user22479 Aug 21 '12 at 13:45
4 This is an awesome answer. – DamienC Aug 21 '12 at 16:15
OK -- wow! Thanks again. There's a lot here, and I haven't digested most of it yet. I hope I'll be able to make an intelligent comment about this after I've grokked Abhyankar's lemma. –
Akhil Mathew Aug 22 '12 at 0:47
Inertia and bicycle wheels related to wattage and time
There are tons of factors here. But I am mainly interested in seeing if there is any real life measurable advantage to decreasing inertia. Keeping everything constant (stiffness, thickness, price,
etc), is there an advantage if I build up a wheel using rim A at 545 grams, rim B at 425 grams, and rim C at 385 grams.
Decreasing the overall weight by x will increase your power to weight ratio. Keeping power output the same, your time would decrease. But that is for static weight, is there a way to quantify weight
savings in inertia over static weight. If the wheels are the same price, quality, stiffness, etc, how would going from a 545g rim to a 385g rim benefit me beyond a total static weight savings of 320g?
I'm 61kg, bike is 9.9kg. At 250 watts my p/w ratio is 3.526 w/kg. If I get new rims my p/w ratio goes up to 3.542 w/kg. Is there a measurable increase by having the weight savings in inertia? And how
do I go about calculating this?
Seconds matter, it could mean the difference between a paycheck or a pat on the back.
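A hedged back-of-envelope of the numbers in the post (assumption: rim mass sits essentially at the tyre radius, i.e. a thin hoop with $I = m r^2$, so during acceleration it counts roughly twice, once for translation and once for rotation):

```python
# Hedged back-of-envelope using the numbers in the post. Assumption: rim
# mass sits essentially at the tyre radius (thin hoop, I = m r^2), so for
# acceleration it counts roughly twice (translational + rotational KE).
rider_kg, bike_kg, power_w = 61.0, 9.9, 250.0
rim_old_kg, rim_new_kg = 0.545, 0.385

saved_kg = 2 * (rim_old_kg - rim_new_kg)            # two wheels: 0.32 kg static
pw_old = power_w / (rider_kg + bike_kg)             # 3.526 W/kg
pw_new = power_w / (rider_kg + bike_kg - saved_kg)  # 3.542 W/kg

# Extra kinetic energy saved per acceleration to speed v, with the saved
# rim mass double-counted for rotation (assumed sprint speed, ~40 km/h):
v = 11.0  # m/s
ke_saved = 0.5 * (2 * saved_kg) * v**2              # ~39 J per hard acceleration

print(f"static saving {saved_kg*1000:.0f} g, "
      f"p/w {pw_old:.3f} -> {pw_new:.3f} W/kg, "
      f"KE saved per sprint {ke_saved:.0f} J")
```

A few tens of joules per hard acceleration is small next to the 250 J put out every second, which is why the static power-to-weight change dominates the steady-state picture; the extra rotational term only matters during repeated accelerations.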
What 'increase' are you expecting? You are using some very simple calculations, I think, but what do they show and how can you apply the result to make any conclusion? If you are just considering the
situation when sprinting then mass and MI are considerations but you appear to want to consider the whole race, with lots of speed variation. You suggest a "power saving" of a small fraction of a
Watt. Is the Energy not more important? How would you calculate that? The extra KE you put in whilst you are accelerating is not wasted as you need to be putting in less of your muscle energy when
slowing down.
But how can it be worth while to do any more than this very approximate calculation, which shows that the energy involved is small, when the energy loss due to flexing hasn't been calculated? I seem
to remember that one of the reasons for making frames as stiff as possible is to reduce the effort needed and this has been shown to be relevant. The same thing must apply to the wheel stiffness.
I think you may as well go along with what you have found in connection with the wheel mass and now move on to get a ball park figure for flexing losses. When you have compared the two then the
result could make the choice more clear for you as to which is more relevant. There will be other factors like roadholding - which may improve as your 'unsprung weight' is being reduced. (Not quite
the same as with motor car suspension but I guess it must apply in some way.)
Good Engineering involves identifying the most relevant parameters to work at when you want to improve performance. It also involves defining what is the most relevant aspect of 'performance'. I
suppose I'm basically saying that you need to devote an appropriate amount of effort on this particular problem. Your idea of 'keeping all other things equal' is ok as far as it goes and it's a fair
principle to apply at the start but other things are not proved to be equal, yet. Other effects may far outweigh what you are looking at. | {"url":"http://www.physicsforums.com/showthread.php?p=4260928","timestamp":"2014-04-19T15:03:54Z","content_type":null,"content_length":"51690","record_id":"<urn:uuid:ea08fec5-5f36-438d-80e3-652474cc83c9>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pleasanton, CA Prealgebra Tutor
Find a Pleasanton, CA Prealgebra Tutor
...I have tutored in all junior high and high school math subject areas. I am comfortable with and have ample experience tutoring students of all ages. I help students to thoroughly understand
math so they can do A+ work.
5 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...I also am talented at breaking down difficult material and explaining it in a way easy to understand, tailored to the level the student is at. As I've always said, "If you can't explain it to
an intelligent 12 year old, then you don't really understand it" I explain to my students because I u...
24 Subjects: including prealgebra, chemistry, calculus, physics
...From higher level math right down to elementary school math - I love it all! As a high school student, I was a private math tutor for a 5th grader, and I also regularly helped students in our
school's after school tutoring program. I have a lot of experience helping students of all levels!
27 Subjects: including prealgebra, chemistry, calculus, physics
...My students have all been very happy with the significant shift in test scores and grades that often follow my lessons. I strongly believe in an approach that is tailored to an individual
student's needs, and have seen that such an approach can boost a student's confidence and level of comfort w...
26 Subjects: including prealgebra, chemistry, physics, French
...I have also taught and tutored many students in this area of Discrete Mathematics. I am able to understand their difficulties and find a way to help them grasp the concepts. I have MS degree
in Computer Engineering from Case Western Reserve University.
23 Subjects: including prealgebra, calculus, statistics, physics
A Stream is like a list except that its elements are computed lazily. Because of this, a stream can be infinitely long. Only those elements requested are computed. Otherwise, streams have the same
performance characteristics as lists.
Whereas lists are constructed with the :: operator, streams are constructed with the similar-looking #::. Here is a simple example of a stream containing the integers 1, 2, and 3:
scala> val str = 1 #:: 2 #:: 3 #:: Stream.empty
str: scala.collection.immutable.Stream[Int] = Stream(1, ?)
The head of this stream is 1, and the tail of it has 2 and 3. The tail is not printed here, though, because it hasn't been computed yet! Streams are specified to compute lazily, and the toString
method of a stream is careful not to force any extra evaluation.
Below is a more complex example. It computes a stream that contains a Fibonacci sequence starting with the given two numbers. A Fibonacci sequence is one where each element is the sum of the previous
two elements in the series.
scala> def fibFrom(a: Int, b: Int): Stream[Int] = a #:: fibFrom(b, a + b)
fibFrom: (a: Int,b: Int)Stream[Int]
This function is deceptively simple. The first element of the sequence is clearly a, and the rest of the sequence is the Fibonacci sequence starting with b followed by a + b. The tricky part is
computing this sequence without causing an infinite recursion. If the function used :: instead of #::, then every call to the function would result in another call, thus causing an infinite
recursion. Since it uses #::, though, the right-hand side is not evaluated until it is requested.
Here are the first few elements of the Fibonacci sequence starting with two ones:
scala> val fibs = fibFrom(1, 1).take(7)
fibs: scala.collection.immutable.Stream[Int] = Stream(1, ?)
scala> fibs.toList
res9: List[Int] = List(1, 1, 2, 3, 5, 8, 13)
Pre- and Postdictions of the NCG Standard Model
Posted by Urs Schreiber
At HIM this week there is a Noncommutative Geometry Conference.
Just heard Thomas Schücker talk about The noncommutative standard model and its post- and predictions, which, as it turns out, closely followed his entry for the Encyclopedia of Mathematical Physics:
Noncommutative geometry and the standard model
The setup
Recall from our discussion here that the “noncommutative standard model”, due to Alain Connes and collaborators, is a Kaluza-Klein model – a model of particle physics where all observed forces on a
pseudo-Riemannian spacetime $X$ are derived from pure gravity on a spacetime $X \times Y$ for $Y$ a compact Riemannian space with “essentially vanishing volume” – where now the crucial ingredient is
that noncommutative geometry is used to give the idea of “essentially vanishing volume” a precise meaning:
$Y$ is taken to be a noncommutative space which is of dimension 0 as seen by heat diffusing on it. Only its dimension as seen by gauge theory, its KO-dimension, is higher – namely 6 mod 8. So $Y$ is
like a manifold shrunk to a point that still remembers some of its inner structure. In particular its Riemannian structure.
Using spectral triples, the Riemannian geometry of the 4+[6]-dimensional spacetime $X \times Y$ is entirely encoded in how it is probed by the dynamics of a spinning quantum particle roaming around
in it. Algebraically this is given, essentially, by a Hilbert space $H$ of states of that particle, by an algebra $A$ of position observables of that particle, acting on $H$ and, crucially, by the
Dirac operator $D$ of that particle, whose eigenvalues are essentially the possible energies that the particle can acquire while zipping through $X \times Y$. The Riemannian structure of
$X \times Y$ is encoded in these energy eigenvalues.
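As a toy illustration of "the spectrum encodes the geometry" (not from the post; the assumption here is the ordinary Dirac operator on a circle of radius $R$, with eigenvalues $n/R$ for the periodic spin structure), the small-$t$ heat trace recovers the radius from the eigenvalues alone:

```python
import numpy as np

# Toy illustration (assumption: the ordinary Dirac operator on a circle of
# radius R, eigenvalues n/R for the periodic spin structure). The small-t
# heat trace  sum_n exp(-t n^2 / R^2)  ~  R sqrt(pi / t)
# recovers the radius -- the spectrum alone determines the geometry.
R = 2.5
n = np.arange(-20000, 20001)
eigenvalues = n / R

t = 1e-3
heat_trace = np.exp(-t * eigenvalues**2).sum()
R_recovered = heat_trace * np.sqrt(t / np.pi)

print(R_recovered)  # ~ 2.5
```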
Given such a quantum particle, one would want to see what its second quantization is, which would be a quantum field theory describing many such particles propagating on $X \times Y$ and interacting
with each other. Such a quantum field theory would traditionally be given by a functional – the action functional – depending on the Riemannian metric on $X\times Y$ as well as on the "condensate"
fields of these particles. All these quantities are supposed to be encoded in a pair consisting of the spectral triple and a vector $\psi$ in the Hilbert space.
Connes gave an argument that there is an essentially unique functional $S_{\mathrm{spec}} : PointedSpectralTriples \to \mathbb{R}$ on such pairs which satisfies the obvious requirement that it be
additive under disjoint unions of Riemannian spaces. This he called the spectral action functional.
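The additivity requirement is easy to check numerically for finite-dimensional stand-ins: for any functional of the form ${\rm{Tr}}\, f(D/\Lambda)$, a direct sum of operators (the spectral analogue of a disjoint union of spaces) splits the trace. A hedged toy check:

```python
import numpy as np

# Finite-dimensional toy check: any functional Tr f(D/Lambda) is additive
# under direct sums D1 (+) D2, the spectral analogue of a disjoint union.
rng = np.random.default_rng(0)

def random_selfadjoint(n):
    a = rng.standard_normal((n, n))
    return (a + a.T) / 2

def spectral_action(D, Lam=1.0, f=lambda x: np.exp(-x**2)):
    return f(np.linalg.eigvalsh(D) / Lam).sum()

D1, D2 = random_selfadjoint(4), random_selfadjoint(6)
D = np.block([[D1, np.zeros((4, 6))],
              [np.zeros((6, 4)), D2]])

lhs = spectral_action(D)
rhs = spectral_action(D1) + spectral_action(D2)
print(abs(lhs - rhs))  # ~ 0 up to floating point
```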
Evaluate such a functional on spectral triples describing Kaluza-Klein models $X \times Y$ as above. One finds that, as in ordinary commutative Kaluza-Klein theory, the Riemannian structure on such
products can be interpreted as a Riemannian structure on $X$ together with a connection on a principal bundle over $X$ – the gauge bundle. Restrict attention to the subset
$Con \subset PointedSpectralTriples$
of all spectral triples which describe $\mathbb{R}^4 \times Y$ with the standard flat metric on $\mathbb{R}^4$ and such that the gauge group of the induced gauge bundle is that observed in the
standard model and such that the metric on $Y$ has certain fixed values, which later one identifies with Yukawa coupling terms. On this subset the spectral action
$S_{\mathrm{spec}} : Con \to \mathbb{R}$
restricts to a functional of the connection on that gauge bundle and of a section of a spinor bundle over $\mathbb{R}^4 \times Y$ (the element in the Hilbert space).
The standard model action functional is precisely a functional of such a kind. See table 2 on page 10. So then the task is to adjust the remaining details of the spectral triple (in particular the
metric on the [6]-dimensional compact $Y$) such that $S_{\mathrm{spec}}|_{Con}$ coincides entirely with the standard model action (as far as that is fixed).
When that is achieved, one has found a noncommutative Kaluza-Klein realization of the standard model.
How to get predictions
There is a list of axioms about the precise interdependence of the three ingredients in a spectral triple. The statement is that there is a choice $Con$ such that $S_{\mathrm{spec}}|_{Con}$ does
yield the standard model. There is a bit of wiggle room then, but not much, due to the various axioms on a spectral triple. Correspondingly, not all parameters of the standard model are entirely
known at the moment. Most notably, the mass of the Higgs particle is yet to be measured, hopefully by LHC.
As a result, after identifying in the landscape of all spectral triples those regions which are compatible with the known parameters of the standard model under the above procedure – see figure 3 on
p. 9 for a cartoon of these landscape regions – one can check what the remaining, unknown, parameters of the standard model derived from spectral triples in these regions would be. Doing so yields
the desired predictions deriving from the noncommutative approach.
The concrete predictions
According to Thomas Schücker’s review, the main post- and predictions are the following (see his review article for more details):
Higgs sector: there is a single Higgs and its mass is $m = 171.6\pm 5 GeV \,.$ The presence of the single Higgs is derived from some representation theoretic arguments for the spectral triple. I
don’t know how that works. The mass of the Higgs is obtained as follows:
the spectral model demands strong relations between the gauge couplings. Namely $g_2 = g_3 = 3\lambda$ for the $su(2)$ and $su(3)$ gauge couplings $g_2$ and $g_3$ and the Higgs self-coupling $\lambda$
, respectively. Then use the ordinary renormalization flow to run the couplings by increasing the energy scale until this identification is achieved. See the figure on p. 11
This assumes the usual “big desert” hypothesis is true, that no new physics appears up to this point. Take the resulting energy scale $\Lambda$ to be the fundamental scale of the NCG model.
Fundamental NCG scale: This $\Lambda$ is predicted to be $\Lambda = 10^{17} GeV$
(At this energy scale, so the idea goes, one should expect also the $X$-factor in $X \times Y$ to begin to look non-commutative.) The spectral action then expresses the Higgs mass somehow as a function of
the gauge couplings (I am not sure I recall how). So this fixes the Higgs mass at scale $\Lambda$. Then run the couplings back to the observed energy scale to obtain the above prediction.
(Notice a couple of crucial assumptions here: the “big desert” and that ordinary renormalization flow makes sense up to the scale $\Lambda$ where some more fundamental theory is expected to take
over. Also the number of generations enters this computation, which is not predicted by the model but set to $N_c = 3$ by hand.).
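A hedged one-loop sketch of this running (assumptions: textbook SM beta coefficients $b = (41/10, -19/6, -7)$ with GUT-normalized $g_1$, rough measured inverse couplings at $M_Z$, and the big-desert hypothesis): solving $\alpha_2^{-1}(\mu) = \alpha_3^{-1}(\mu)$ indeed lands near the quoted $10^{17}$ GeV:

```python
import numpy as np

# One-loop SM running under the big-desert assumption (hedged sketch):
#   alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - b_i / (2 pi) * ln(mu / M_Z),
# with textbook beta coefficients b = (41/10, -19/6, -7), g1 in the
# SU(5)/GUT normalization, and rough measured inputs at M_Z.
MZ = 91.19                                   # GeV
b = np.array([41 / 10, -19 / 6, -7.0])
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])   # approx. 1/alpha_i at M_Z

def alpha_inv(mu):
    return alpha_inv_MZ - b / (2 * np.pi) * np.log(mu / MZ)

# Scale where alpha_2 meets alpha_3 (the linear equations solve exactly):
L = (alpha_inv_MZ[1] - alpha_inv_MZ[2]) / ((b[1] - b[2]) / (2 * np.pi))
mu_23 = MZ * np.exp(L)
print(f"g2 = g3 at mu ~ {mu_23:.1e} GeV")    # ~ 1e17 GeV
```

With these inputs $g_1$ misses that meeting point, which is the non-unification issue taken up in the comments.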
Top quark mass. From a similar computation apparently the top quark mass is “postdicted” to be $m_t \lt 186 GeV \,.$ The observed value is apparently $m_t = 174.3 \pm 5.1 GeV$.
$\rho_0$ I forget the details of this. But there is that parameter $\rho_0$ (which is one over the $cos^2$ of some angle which the inclined reader will surely remind me of) and which is measured to
be $\rho_0 = 1.0002 \pm something.$ The NCG model predicts exactly $\rho_0 = 1 \,.$
Posted at July 30, 2008 11:29 AM UTC
Re: Pre- and Postdictions of the NCG Standard Model
If that Higgs prediction were correct, would LHC be expected to see it?
Posted by: David Corfield on July 30, 2008 2:05 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
If that Higgs prediction were correct, would LHC be expected to see it?
Yes. I am hoping somebody more expert than me will chime in, but this is the expectation that one usually hears.
For instance Sabine Hossenfelder mentions it in her blog entry The Higgs mass (last sentence) and gives more literature.
The generally accepted current experimental bounds are apparently
$114 GeV \lt m_{Higgs} \lt 182 GeV \,.$
Posted by: Urs Schreiber on July 30, 2008 2:28 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Now checked with an expert:
If that Higgs prediction were correct, would LHC be expected to see it?
Answer: yes, within two years after the beam is set up.
Information about higher mass regions will be available before some of the lower mass regions, due to differences in cross sections.
Posted by: Urs Schreiber on July 30, 2008 2:48 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Urs wrote:
Answer: yes, within two years after the beam is set up.
I was going to say that any $m_H\gt 160\, \text{GeV}$ will be easy to see. The “difficult range” is something like $125\, \text{GeV}\lesssim m_H\lesssim 160\, \text{GeV}$.
But Urs beat me to it.
So let me, instead, ask three questions about the other predictions.
1. You say, that at the high scale, one has $g_2 = g_3 = 3\lambda$. What about $g_1$? Why do only two of the three SM gauge couplings unify? (In their model, somehow or other, $\lambda$ is also
interpreted as a gauge coupling — which sort of motivates the idea that it might unify with the SM gauge couplings.)
2. Where does their prediction of the top mass come from? The matrix of Yukawa couplings is the greatest mystery of the SM. The top Yukawa coupling is very nearly precisely equal to 1. There are
other eigenvalues which are as small as $O(10^{-6})$, and there is a complicated structure of mixing angles. To “predict” the top mass, one has to say something about this matrix of Yukawa
couplings. What is it that they are able to say?
3. I assume that the statement that the $\rho$ parameter is 1 (and, presumably that the other Peskin-Takeuchi parameters also vanish) is just the statement that the scale of new physics is $10^{17}
\, \text{GeV}$ (the “desert” to which you referred). What about neutrino masses? One usually says that this requires new physics at a lower ($\sim 10^{14}\, \text{GeV}$) scale. That still means
vanishing Peskin-Takeuchi parameters, but it does indicate that the “desert” is not completely barren.
Posted by: Jacques Distler on July 30, 2008 3:26 PM | Permalink | PGP Sig | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Part of the motivation for posting this was that it would force me to learn more details about the NCG model in order to be able to answer comments here. :-)
I suppose I should open Chamseddine, Connes, Marcolli at this point, spend some time reading and then get back to you.
But let me see how far I get:
First of all
You say, that at the high scale, one has $g_2 = g_3 = 3 \lambda$.
Ah, as you noticed, that was a typo. It should be $g_2^2 = g_3^2 = 3\lambda \,.$
What about $g_1$?
Thomas Schücker in his talk emphasized that the $U(1)$ factor enters on a different footing than the $SU(2)$ and $SU(3)$ factors. The gauge group has to arise in this model as the automorphism group
of an algebra and we have $Aut(\mathbb{H}) = SU(2)/\mathbb{Z}_2$ and $Aut(M_3(\mathbb{C})) = SU(3)/\mathbb{Z}_3 \,.$ On the other hand, the $U(1)$-part appears later as a central extension somewhere.
I think because the rep on the Hilbert space is gonna be projective.
Anyway, for reasons of that sort in the talk $g_1$ was suppressed. But on page 52 of “$m c^2$” (Chamseddine, Connes, Marcolli) it has $g_2^2 = g_3^2 = \frac{5}{3}g_1^2 \,.$
Where does their prediction of the top mass come from?
Apparently there is an estimate for the sum of squares of all the masses. If I read my hastily written notes correctly, the above equation actually extends to
$g_3^2 = g_2^2 = 3\lambda = \frac{1}{4}\sum g_Y^2 \,,$
where, I suppose, $g_Y$ are the Yukawa couplings. That sum will be highly dominated by the large top mass, which is where the estimate comes from.
But let me try to check that again.
What about neutrino masses?
I am being told that they can be accommodated. But I don't know any further details at the moment.
Posted by: Urs Schreiber on July 30, 2008 4:34 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Anyway, for reasons of that sort in the talk $g_1$ was suppressed. But on page 52 of “$m c^2$” (Chamseddine, Connes, Marcolli) it has $g_2^2=g_3^2=\frac{5}{3}g_1^2\, .$
But we already know that relation fails! The couplings don’t actually unify, with just the SM degrees of freedom.
That’s one of the pieces of evidence for supersymmetry: adding the contributions of the superpartners to the RG running does cause the couplings to unify (at around $10^{16}$-$10^{17}$ GeV).
If the prediction is $g_2^2=g_3^2={\color{purple}\frac{5}{3}g_1^2} = {\color{red} 3\lambda}$ then I would say that prediction has already been falsified (even before one gets to the red part of the equation).
If I read my hastily written notes correctly, the above equation actually extends to
(1)$g_3^2=g_2^2=3\lambda=\tfrac{1}{4}\sum g_Y^2$
That’s an even more mysterious relation.
I can imagine — if the Higgs is something like a gauge field — that the Yukawa couplings are something like gauge couplings. But why do we get only a constraint on the sum of the squares of the
Yukawa couplings, and not on the individual Yukawa couplings themselves?
What about neutrino masses?
I am being told that they can be accommodated.
It’s not that hard to generate the requisite dimension-5 operator. The question is: why does it have such a large coefficient, if it is generated at the scale $\Lambda= 10^{17}\, \text{GeV}$, as
opposed to at some lower scale?
Posted by: Jacques Distler on July 30, 2008 5:32 PM | Permalink | PGP Sig | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
I had posted this yesterday but there was a server error and parts of it were lost. Then I had to run to catch a train. Here is what survived. A part c) on the interpretation of the internal metric as
the Yukawa couplings is missing.
Concerning gauge coupling unification:
I am not sure what precisely the attitude towards this is among the NCG model builders. From what I have read I get the impression that there is the vague hope along the lines: that experimentally
there is almost gauge coupling unification shows that the NCG model is on the right track, that it fails to be exactly a unifications shows that the big desert hypothesis is oversimplifying. On p. 5
of CCM it says:
Naturally, one does not really expect the “big desert” hypothesis to be satisfied. The fact that the experimental values show that the coupling constants do not exactly meet at unification scale
is an indication of the presence of new physics. A good test for the validity of the [NCG] approach will be whether a fine tuning of the finite geometry can incorporate additional experimental
data at higher energies.
You write:
The couplings don’t actually unify, with just the SM degrees of freedom.
That’s one of the pieces of evidence for supersymmetry:
It is clear from the talks that I have heard that there is a certain inclination to dislike supersymmetric extensions of the SM among the NCG model builders. But that seems to be more a matter of
taste than of principle. I don’t see an a priori reason why susy versions of this model should not exist.
Concerning Higgs as a gauge field:
In the general setup of NCG, given a generalized Dirac operator $D$ a gauge field is given by an expression of the form
$\sum_i a_i [D,b_i]$
where $\{a_i,b_i\}$ are elements of the algebra. Clearly, for the special case of $D = \gamma^\mu \partial_\mu$ the standard Dirac operator on flat space, this gives the usual slashed gauge
potentials $\gamma^\mu A_\mu \,.$
So Connes builds his Dirac operator which encodes the gravitational and gauge field background by letting $D = D^{(1,0)} \otimes Id + \gamma^5 \otimes D^{(0,1)}$ be the external Dirac operator $D^
{(1,0)}$ on flat $\mathbb{R}^4$ and some internal Dirac operator $D^{(0,1)}$ on that $Y$ factor, which eventually encodes the Yukawa couplings.
Then graviton and gauge boson fields are “turned on” by addind “fluctuations” of the above sort, i.e. roughly
$D \mapsto D_{A,\phi} := D + \sum_i a_i [D,b_i] \,.$
That sum decomposes into two sums. The external one which yields the usual gauge bosons $A := \sum_i a_i [D^{(1,0)},b_i]$ and the internal one $H := \sum_i a_i [D^{(0,1)},b_i] \,.$ This internal
gauge boson gets identified with the Higgs field.
That this $H$ really can be interpreted as the Higgs is proposition 3.5 on p. 23 combined with the way this term shows up in the spectral action, p. 31 and following.
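A toy version of this mechanism is the standard two-point space (used here purely as an illustration, an assumption rather than the full standard-model triple). The algebra $\mathbb{C} \oplus \mathbb{C}$ acts diagonally on $\mathbb{C}^2$, the internal Dirac operator is an off-diagonal mass matrix, and inner fluctuations $a[D,b]$ come out purely off-diagonal, shifting the mass entry $m \mapsto m(1+\phi)$: that is the Higgs.

```python
import numpy as np

# Two-point-space toy (an illustration, not the full SM spectral triple):
# A = C (+) C acts diagonally on C^2; the internal Dirac operator carries
# an off-diagonal "mass" m. Inner fluctuations a [D, b] are then purely
# off-diagonal, so D + a[D,b] just shifts m -> m (1 + phi): the Higgs field.
m = 1.5
D = np.array([[0.0, m], [m, 0.0]], dtype=complex)

def elem(a1, a2):
    """A generic element diag(a1, a2) of the algebra C (+) C."""
    return np.diag([a1, a2]).astype(complex)

a, b = elem(2.0, 0.5), elem(0.3, 1.1)
fluctuation = a @ (D @ b - b @ D)

print(fluctuation)       # purely off-diagonal: [[0, 2.4], [-0.6, 0]]
D_fluct = D + fluctuation
```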
Posted by: Urs Schreiber on July 31, 2008 9:03 AM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Just heard a talk by Ali Chamseddine on the NCG standard model (or standard model model, rather).
He said that they are working on trying to see if the following can help to deal with the gauge coupling unification issue:
the gauge and gravity part $S_{gauge-gravity}$ of the spectral action is “unique” only up to a choice of function $f$ which enters $S_{gauge,gravity} : SpectralTriple \to \mathbb{R}$$(A,D,H) \mapsto
Tr_H \exp (f(D/\Lambda))$ for $\Lambda$ the constant later to be identified with the energy scale where the gauge coupling unification is required/predicted/assumed. (p. 3)
There are some standard formulas for such traces of exponentials which say that this depends only on the even moments of $f$ and that the $2 n$th moment controls the coefficient of ${\Lambda}^n$ or
the like.
The standard model + Einstein gravity action functional is obtained by truncating this expansion after the first three contributions, i.e. at including $\Lambda^4$, as on the top of page 31.
But really one should take all further terms into account, too. That introduces one free choice of parameter, the moment of $f$ at the given order, per order. These higher order corrections would
modify the RG flow, I suppose. So maybe there is a choice of the higher moments of $f$ such that with the modified RG flow one gets precise gauge coupling unification.
This is what Ali Chamseddine said they have started to look at.
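That the leading $\Lambda$-coefficient of such a trace only sees a moment of $f$ can be checked in a toy spectrum (assumption: circle-like eigenvalues $n/R$, as in the illustration further up): $\sum_n f(n/(R\Lambda)) \to (\int f)\, R \Lambda$ for large $\Lambda$, whatever the detailed shape of $f$.

```python
import numpy as np

# Toy check that the leading Lambda-coefficient of Tr f(D/Lambda) only
# sees a moment of f (assumption: circle-like spectrum, eigenvalues n/R):
#   sum_n f(n / (R Lambda))  ->  (integral of f) * R * Lambda
# as Lambda grows, independently of the detailed shape of f.
R = 1.7
n = np.arange(-200000, 200001)
Lam = 1e3

ratios = []
for f, integral in [(lambda x: np.exp(-x**2), np.sqrt(np.pi)),
                    (lambda x: 1.0 / (1.0 + x**4), np.pi / np.sqrt(2))]:
    trace = f(n / (R * Lam)).sum()
    ratios.append(trace / (integral * R * Lam))

print(ratios)  # both ~ 1.0
```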
Posted by: Urs Schreiber on July 31, 2008 12:20 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
I can imagine — if the Higgs is something like a gauge field — that the Yukawa couplings are something like gauge couplings. But why do we get only a constraint on the sum of the squares of the
Yukawa couplings, and not on the individual Yukawa couplings themselves?
I have now asked Ali Chamseddine about how this works. Something like this:
when computing the spectral action $Tr \exp(f(D/\Lambda))$ one finds terms with scalar coefficients given by traces of various parts of $D$. In particular there is the trace $Tr( (D^{0,1})^2 )$ of
the square of the Dirac operator on the internal space $Y$, equation 3.14, p. 24. This in turn involves a trace over products of the Yukawa coupling matrices, the quantities denoted $a$ and $c$ in
equation 3.16, p. 25.
This factor $a$ then turns out to appear as the coefficient of the Higgs kinetic energy term (in 3.41, p. 31), multiplied by the 0th moment $f_0$ of the function used in the definition of the
spectral action.
To normalize, one divides the entire spectral action by a corresponding factor to remove these factors in front of the Higgs kinetic action. But since the fermionic part
"$\langle \psi , D \psi \rangle$" is part of the spectral action, this will be changed by a global prefactor. But the internal part of the $D$ here encodes the Yukawa couplings. With that prefactor now, it turns out that the
sum of the squares of the fermion masses is fixed to some value.
Posted by: Urs Schreiber on July 31, 2008 1:24 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
To normalize, one divides the entire spectral action by a corresponding factor to remove these factors in front of the Higgs kinetic action.
Why the heck do you do that? Why not just rescale the Higgs field to absorb this factor?
In fact, if you look at their equation (3.41), none of the bosonic kinetic terms are canonically-normalized. So you have to do a bunch of field redefinitions to get the canonically normalized field.
As you are well-aware, rescaling the whole action does precisely nothing at the classical level. (Such a rescaling can be absorbed in a redefinition of $\hbar$.)
Looking at the fifth line of their (3.41), we should rescale the Higgs field by a factor of $\sqrt{a f_0/2\pi^2}$ to give it a canonical kinetic term. If the original matrix of Yukawa couplings is $\
lambda'$ (I refuse to use their deliberately godawful notation), $a = tr ((\lambda')^\dagger\lambda')$ and the Yukawa coupling for the rescaled field is $\lambda = \frac{\lambda'}{\sqrt{a f_0/2\pi^
Similarly, if the original gauge coupling (for any factor in the gauge group) was $g'$, we need to rescale the corresponding gauge field by $g'\sqrt{2f_0}/\pi$ to give it a canonically-normalized
kinetic term. The gauge coupling for the canonically-normalized gauge field is $g = \frac{\pi}{\sqrt{2f_0}}$
So, we have a relation like $g^2 = \tfrac{1}{4} tr(\lambda^\dagger \lambda)$
Carrying through the same rescaling on the quartic Higgs self-coupling, one finds that its coefficient is $4 g^2 tr(\lambda^\dagger\lambda)^2$ which is not quite the result you quoted, but I’m not
going to worry about piddly grade-school algebra at this point.
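The relation at the end of this rescaling argument can be checked symbolically from the quantities stated above ($a = tr(\lambda'^\dagger\lambda')$, $H$ rescaled by $\sqrt{a f_0/2\pi^2}$, $g = \pi/\sqrt{2 f_0}$); the combination $g^2 - \tfrac{1}{4} tr(\lambda^\dagger\lambda)$ then vanishes identically:

```python
import sympy as sp

# Symbolic check of the rescaling argument above. With
#   a = tr(lambda'^dag lambda'),  H -> H / sqrt(a f0 / (2 pi^2)),
# the rescaled Yukawas satisfy tr(lambda^dag lambda) = a / (a f0 / 2 pi^2),
# while the canonical gauge coupling is g = pi / sqrt(2 f0).
a, f0 = sp.symbols("a f0", positive=True)

tr_lam_sq = a / (a * f0 / (2 * sp.pi**2))   # = 2 pi^2 / f0
g = sp.pi / sp.sqrt(2 * f0)

print(sp.simplify(g**2 - tr_lam_sq / 4))    # 0: g^2 = tr(lambda^dag lambda)/4
```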
the gauge and gravity part $S_{\text{gauge−gravity}}$ of the spectral action is “unique” only up to a choice of function $f$ which enters $\begin{gathered} S_{\text{gauge−gravity}}: \
text{SpectralTriple}\to\mathbb{R}\\ (A,D,H)\mapsto Tr_H \exp(f(D/\Lambda)) \end{gathered}$ for $\Lambda$ the constant later to be identified with the energy scale where the gauge coupling
unification is required/predicted/assumed.
But really one should take all furher terms into account, too.
Egads! Then you have an infinite number of coupling constants to specify (and no a priori prescription for specifying them, as $f$ is an arbitrary function).
Are these really supposed to affect the running at scales well below $\Lambda$ (I don’t really see how)? Or are they simply supposed to introduce large threshold corrections, which explain why the
coupling constants don’t actually meet at the unification scale? If the latter, then I don’t see which part of (1) (and the “predictions” that would follow from it) is supposed to survive these
threshold corrections.
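For orientation, the running being discussed here is just the ordinary one-loop SM running. A minimal sketch, with rough textbook starting values at $M_Z$ that are my assumptions rather than anything from the post, shows the three inverse couplings approaching but not exactly meeting near $10^{15}$ GeV:

```python
import math

# One-loop running of the SM gauge couplings in GUT normalization:
# 1/alpha_i(mu) = 1/alpha_i(M_Z) - (b_i / 2 pi) ln(mu / M_Z),
# with the standard one-loop SM coefficients b = (41/10, -19/6, -7).
# Starting values at M_Z are approximate textbook numbers (assumptions).
b = (41/10, -19/6, -7)
alpha_inv_MZ = (59.0, 29.6, 8.5)  # approximate 1/alpha_i at M_Z
MZ = 91.2  # GeV

def alpha_inv(i, mu):
    return alpha_inv_MZ[i] - b[i] / (2 * math.pi) * math.log(mu / MZ)

mu = 1e15  # GeV, near the putative unification scale
vals = [alpha_inv(i, mu) for i in range(3)]
spread = max(vals) - min(vals)
print([round(v, 1) for v in vals])  # close together, but not equal
```

The residual spread of a few units in $1/\alpha$ is the failure of exact unification that threshold corrections would have to explain.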
Of course, staring at their action, (3.14), for more than 30 seconds makes you wonder why the electroweak symmetry-breaking scale isn’t the unification scale.
In a sense, this is just the usual fine-tuning problem, except that, in their prescription, one is not free to fine-tune the coefficient of the quadratic term in the Higgs potential. The only
parameters one is allowed to play with are the moments, $f_k$. Having fixed the gauge coupling at the unification scale, and the value of Newton’s constant, there is no further freedom to tune the
electroweak symmetry-breaking scale.
This seems like a rather bigger problem than the (lack of) coupling constant unification.
Posted by: Jacques Distler on July 31, 2008 7:34 PM | Permalink | PGP Sig | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
we should rescale the Higgs field
I suppose that’s what I should have said.
Egads! Then you have an infinite number of coupling constants to specify (and no a priori prescription for specifying them, as $f$ is an arbitrary function).
That’s true. This is swept under the rug a bit in most presentations. I am not sure what the general idea about this is among the practitioners. For instance how $f$ is supposed to be related to
Are these really supposed to affect the running at scales well below $\Lambda$
Well, as I said, this is what I think Ali Chamseddine said they are working on. But since no details have been made public, maybe it is vain to speculate about what exactly he has in mind.
This seems like a rather bigger problem than the (lack of) coupling constant unification.
Hm, it would be good to get someone more expert than me to reply to this. I’ll see if I can find someone…
Posted by: Urs Schreiber on August 1, 2008 12:25 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
by the way, I would enjoy hearing your opinion on the following view of mine.
This is how I think about the Connes-NCG standard model model:
Maybe it fails to pass the detailed test in the end. But it comes pretty close with relatively little conceptual input, repackaging an impressive amount of structure into a simpler structure.
Whether or not this particular model will turn out to be phenomenologically viable, I think it shows one thing: the potential importance of “non-geometric phases”.
There is the work by Roggenkamp and Wendland Limits and Degenerations of Unitary Conformal Field Theories and the unpublished work by Soibelman which shows that for any 2d CFT there is a systematic
precise way to take a point particle limit and extract a spectral triple describing the effective target space geometry of the CFT regarded as a string background.
So I am thinking of Connes’ NCG as a way to talk about effective target space geometries of point particle limits of general 2dCFTs (“general” meaning that they need not come from $\sigma$-models,
i.e. can be non-geometric). Notice the constraint on the dimension to be 4+[6 mod 8] for the NCG model.
From that point of view the main message I am getting here is: non-geometric string backgrounds may have good chances to be phenomenologically viable. And that it might be worthwhile to concentrate
more effort into understanding these than into understanding geometric flux compactifications. We know that every string background is approximated by a spectral triple and Connes’ work shows what
happens when searching for the standard model in that space of spectral triples.
I am not sure what the general attitude is towards non-geometric points in the space of string backgrounds. My impression has always been that they are unduly ignored. I can imagine good reasons
(namely practical reasons) for ignoring them, but I would like to see a discussion of whether or not this is a good thing. The closest to such a discussion that I ever found (but that need not mean
much, I’d be happy to be educated) is
Fernando Marchesano, Progress in D-brane model building, where in section 5.3 on p. 22 it says
As stated before, the mirror of a type IIB flux background may not have a geometric description. While intuitively less clear, one can still make sense of these backgrounds as string theory
compactifications […]. Finally, duality arguments suggest that the different kinds of ‘non-geometric fluxes’ that one may introduce in type II theories form a much larger class than the geometric
ones [140]. While our knowledge of these non-geometric constructions is still quite poor, and mainly based on mirrors of toroidal compactifications, the above results have led many to believe
that non-geometric backgrounds correspond to the largest fraction of the ‘type II landscape’. A fraction which has so far been unexplored.
(my emphasis).
I suppose this must be clear to anyone who ever thought about it, but it is rarely discussed: concentrating on geometric Calabi-Yau flux compactifications (either individually or as an ensemble)
means making a huge restriction on the a priori possibilities without any guarantee that the solution is to be found there. No?
Posted by: Urs Schreiber on July 31, 2008 3:47 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Maybe it fails to pass the detailed test in the end. But it comes pretty close with relatively little conceptual input, repackaging an impressive amount of structure into a simpler structure.
I completely fail to appreciate the “economy” of their way of rewriting the Standard Model.
What is interesting (perhaps) is that they find relations among the coupling constants over and above those guaranteed by gauge invariance and renormalizability (hence their “predictions”).
But there are serious problems (as noted above).
I am not sure what the general attitude is towards non-geometric points in the space of string backgrounds. My impression has always been that they are unduly ignored. I can imagine good reasons
(namely practical reasons) for ignoring them, but I would like to see a discussion of whether or not this is a good thing.
It has been known forever that “most” string vacua are non-geometrical. What’s not understood in that context is moduli stabilization.
That’s one reason for concentrating on the classes of vacua where we do understand (and can compute in some controllable approximation) the stabilization of the moduli.
Even among those vacua, there’s a yet-smaller subset which is of particular interest. These are the vacua which have “decoupling limits” in which 4D gravity (and most of the moduli) decouple from the
“Standard Model” particle physics^1.
These are the vacua which are (at least in principle) predictive about particle physics. That makes them much more attractive objects of study than vacua where the problem of fine-tuning the
cosmological constant and extracting particle physics are inextricably intertwined.
• Are there nongeometrical vacua where the moduli are stabilized?
• Among those, are there vacua with decoupling limits?
The answer to both these questions is surely “yes”. (We know, at least, an existence proof, because there are dualities relating some geometrical and nongeometrical vacua.) But the tools for studying
these questions are much less developed.
And, frankly, they are insufficiently developed in the geometrical case.
^1 The quotes around “Standard Model” are particularly important here, because none of the vacua found to date have fully realistic particle physics.
Posted by: Jacques Distler on July 31, 2008 8:05 PM | Permalink | PGP Sig | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Thanks for your reply!
It has been known forever that “most” string vacua are non-geometrical.
Ah, good to hear that. I had some trouble a while ago convincing somebody that the space of string vacua should be of considerably larger cardinality than the infamous finite number $\sim 10^{\sim
(10^2)}$ of Calabi-Yau flux compactifications.
(By the way, is it even clear that these CY flux vacua all really exist as full CFTs?)
But what exactly does “known” mean here? I certainly find it very plausible that if I pick a random (S)CFT (of given central charge) there are minimal chances that it comes from a $\sigma$-model. But
is there a way to make that intuition more precise? Such as, is there maybe some characteristic quantity associated to a CFT which obstructs its realization as a $\sigma$-model or the like?
What’s not understood in that context is moduli stabilization.
Okay. But I thought there are lots of other models which are considered while ignoring the moduli stabilization problem for the time being. Such as pretty much all the intersecting brane models and the
handful of semi-realistic heterotic models. No?
Are there nongeometrical vacua where the moduli are stabilized?
Is it even clear how precisely moduli for nongeometric vacua behave? Can’t it happen that they become discrete parameters, for instance?
And, frankly, they are insufficiently developed in the geometrical case.
Good that you say that, thanks.
Posted by: Urs Schreiber on August 1, 2008 11:12 AM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
I completely fail to appreciate the “economy” of their way of rewriting the Standard Model.
In private discussion I wouldn’t try to convince you, but here in a public forum I want to reply to this in case anyone is eavesdropping on our conversation.
To recap, the claim is that all the structure is encoded in the geometry of a noncommutative space $Y$ fixed by three choices:
a) the algebra of functions on it is $C^\infty(Y) = \mathbb{C} \oplus \mathbb{H} \oplus M_3(\mathbb{C})$;
b) it looks $(dim(Y) = [6])$-dimensional to fermionic particles zipping through it;
c) the Hilbert space of states of a fermionic particle in this space is the direct sum of all distinct irreducible $C^\infty(Y)$-bimodules.
Take this $Y$ as the internal space of a Kaluza-Klein model, plug the data into the spectral action^1 and out comes something pretty close to the standard model action (with the numerical values of the Yukawa couplings then still to be identified with the metric on $Y$). I’d think that’s economical, even if the result falls short of being a perfect match.
^1 To get more than just data-repackaging one will eventually want to understand what passing to the spectral action means. Not much exists on this at the moment, but Chamseddine once checked to
first order that forming the spectral action of the string’s Dirac-Ramond operator indeed coincides with forming its effective target space action. That makes me think that in as far as spectral
triples are the point particle limit of 2dCFT, the spectral action is the 2dCFT’s effective background action (obtained from the $\beta$-functional, for instance) in that limit.
Posted by: Urs Schreiber on August 1, 2008 11:42 AM | Permalink | Reply to this
I don’t see why this description is any more economical than saying that the 4D gauge group is $SU(3)\times SU(2)\times U(1)$, and that the fermions form a certain chiral, but anomaly-free
representation of the gauge group.
In fact, even on the level of their story, surely the automorphism group of the algebra they cook up includes $U(1)_{B-L}$. Why isn’t that gauged^2?
Moreover, once one gets to the point of writing down an action, the conventional prescription is:
Write down the most general renormalizable, gauge-invariant action with the specified field content.
In their case, the spectral action is neither the most general action compatible with the symmetries, nor is there (so far) a prescription for specifying it, as it involves a choice of an arbitrary
function $f(D/\Lambda)$.
Nor, of course, is it clear how one is supposed to quantize the theory in their approach.
Even though it is nonlocal, is the theory, in some sense, renormalizable? That is, can divergences be absorbed in a redefinition of the function, $f$?
Presumably, the answer is “no.” When one does a heat-kernel expansion of the spectral action, and treats the result as a standard effective field theory (at scales well below $\Lambda$), it’s clear
that no miracles occur, and the original form of the action is not respected. (Indeed, that’s the whole point of their RG analysis, and is clearly required if one wants to make contact with observed
low-energy physics.)
^2 $U(1)_{B-L}$ is not gauged in the Standard Model, because it would be anomalous but
1. They have a right-handed neutrino.
2. They’re only writing down a classical Lagrangian, so the anomaly should not be an impediment to them.
Posted by: Jacques Distler on August 1, 2008 3:37 PM | Permalink | PGP Sig | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Oh, and one more question about the “prediction” of
(1)$g_2 = g_3 = 3\lambda$
This, supposedly, comes from thinking of the Higgs as another “gauge boson” on the internal space. I don’t follow the logic, but the ensuing relation seems a little surprising, nonetheless.
$\lambda$ is the coefficient of a quartic self-coupling. For the gauge bosons, the coefficient of the quartic self-coupling is $g^2$, not $g$. So I might have expected a relation like (1), but
involving the squares of the SM gauge couplings, rather than the gauge couplings themselves.
Posted by: Jacques Distler on July 30, 2008 3:39 PM | Permalink | PGP Sig | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
And if the result came in that the mass was close to 171 GeV, how much credence does that give to Connes’ model being onto something? Are there rival theories predicting in that range?
Posted by: David Corfield on July 30, 2008 10:30 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
And if the result came in that the mass was close to 171 GeV, how much credence does that give to Connes’ model being onto something?
It would mean that the model would still be alive rather than dead in the water. More cannot seriously be said, I’d think.
Are there rival theories predicting in that range?
Plenty. Thomas Schücker spent the last ten minutes or so of his talk having fun with slides that summarized a literature search he made collecting all existing predictions of Higgs masses. The bulk of them come from various susy theories, since in them you have many knobs that can be turned to modify the value. But there are plenty of other predictions.
He entertained the audience by telling us in which energy intervals no predictions have so far been made, in case anyone felt like making a new one.
Posted by: Urs Schreiber on July 31, 2008 12:28 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
He entertained the audience by telling us in which energy intervals no predictions have so far been made, in case anyone felt like making a new one.
This is really tempting. Can you divulge some of these ranges, and are there any gambling sites available?
Posted by: Bruce Bartlett on August 2, 2008 1:29 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
He entertained the audience by telling us in which energy intervals no predictions have so far been made, in case anyone felt like making a new one.
This is really tempting.
Ah, discovering the phenomenologist inside? :-)
Can you divulge some of these ranges
Sorry, I forget the precise numbers.
and are there any gambling sites available?
Certainly. The main one is called hep-ph. The bet is to be submitted in LaTeX format embedded in a more or less inspired story about how you came up with it. Among the right bets, the prize of
eternal fame will be distributed according to how good the surrounding story you told is.
Posted by: Urs Schreiber on August 2, 2008 1:47 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
I wrote:
The presence of the single Higgs is derived from some representation theoretic arguments for the spectral triple. I don’t know how that works.
I should make that more precise:
as far as I understand, there is a single Higgs particle because the Higgs arises as the internal gauge boson with respect to the compactified factor $Y$ (that’s at least according to my summary of
the model here).
But the statement in the talk was stronger: the NCG model requires a single Higgs and it is fixed to live in the correct representation $H_S = (2,-\frac{1}{2},1)$ (equation 6, page 2).
Posted by: Urs Schreiber on July 30, 2008 2:54 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Amusingly, the predicted Higgs mass is right in the window where the Tevatron would plausibly rule it out in the near future.
Posted by: Matt Reece on July 30, 2008 3:41 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Dorigo described the situation as of March 10 here. My guess is that 4.5 months later, a 160 GeV Higgs is already excluded (at 95% CL), and a 170 GeV Higgs has serious trouble.
Posted by: Thomas Larsson on July 31, 2008 7:59 AM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Yes, the winter ‘08 combination was already getting close to excluding 160 GeV. (They’ve been lucky to get an observed limit better than the expected limit.)
I suggest keeping an eye on the ICHEP talks. There were talks on Higgs searches today but apparently they’re saving the CDF/D0 combined plot to be announced at a plenary talk on Sunday by Matthew
Herndon. I wouldn’t be surprised if that talk announces the first exclusion of some region of SM Higgs near 160 GeV. (If not, then surely by the winter ‘09 conferences.)
Posted by: Matt Reece on August 1, 2008 3:48 AM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
170 GeV Higgs is excluded at 95% confidence, according to Matthew Herndon’s ICHEP talk (PPT file, unfortunately). At 165 GeV, the limit is 1.2 x SM; at 175 GeV, it’s 1.3 x SM. Note that these are
direct searches, unlike the electroweak fits that have been discussed around the blogosphere, so they translate much more directly to scenarios where there is new physics beyond the Standard Model.
In any case, since the “NCG Standard Model” is just the SM with some high-scale RG boundary condition, it’s doubly in trouble, both from the electroweak fits and from this direct search.
Posted by: Matt Reece on August 4, 2008 5:52 AM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Good point, but saying that the NCG SM is in trouble sort of misses the point that it’s no more in trouble than the SM, strings, and just about everything else. In fact, since NCG offers beautiful new non-stringy techniques, it is probably in better shape than most.
Posted by: Kea on August 4, 2008 6:05 AM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Despite all the fun it is to test specific predictions of a model, one should not forget that most every model ever dreamed up has a bunch of knobs which can be turned to modify the predictions made. No theory in the world makes a prediction of one parameter without first fitting lots of other parameters to known data.
Posted by: Urs Schreiber on August 4, 2008 11:58 AM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
No theory in the world makes a prediction of one parameter without first fitting lots of other parameters to known data.
You really should get out more.
Posted by: Kea on August 4, 2008 9:49 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Dear Urs,
I do not really understand your comment. In the ncg description of the standard model, this is precisely the striking fact: many parameters (as far as I know: hypercharges, Weinberg angle) fit well with the model. Others are free and are fitted to experimental data (masses of the fermions, CKM matrix, neutrino mixing angles).
Could you explain what you mean ?
Posted by: pierre on August 5, 2008 7:01 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Could you explain what you mean ?
Sure: To make any prediction from this NCG model or pretty much every other model, you first need to fit parts of its structure to known data.
For instance the requirement of KO-dimension 4+[6] arises by fitting to data: only in that dimension do we have the ncg analog of Majorana-Weyl fermions, which guarantees the observed chiral fermionic particle content.
Or the particle content: Connes and Chamseddine showed that given some assumptions the algebraic structure quickly begins to constrain the remaining assumptions, but in general the choice of internal
algebra and the choice of representation of it, hence the choice of particle content of the model, are parameters fitted to the data before one starts predicting anything.
And this particle content is apparently, as we see now, a knob that one might have to dial a bit more, should the current hints against the “big desert” hypothesis be further reinforced by coming data.
Then there is this “cutoff” function $f$ entering the spectral action: in principle this involves infinitely many parameters to choose.
Alain Connes wrote in his comment # that this function is thought to have to be chosen constant in a neighbourhood of 0. On the other hand Ali Chamseddine told me a few days ago (before the 170 GeV
exclusion measurement result was publicly known (to us at least)) that they are trying to see if choosing the higher moments of this function can be used to better fit the model to the fact that the
otherwise “predicted” gauge coupling unification is not quite observed.
It could well be that the current NCG model is wrong, but that a supersymmetric extension of it is right. That will introduce many more parameters that will have to be fitted.
Still, the model can make predictions. The point of good models is not that they predict everything in the world uniquely, but that they put constraints on the possible combinations of parameters
that are allowed. Such constraints come in the NCG model mainly from the spectral action, which forces a bunch of otherwise independently variable coefficients to be equal.
Posted by: Urs Schreiber on August 5, 2008 7:16 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Such constraints come in the NCG model mainly from the spectral action, which forces a bunch of otherwise independently variable coefficients to be equal.
This is a virtue, but the actual predictions that emerge are deeply problematic. I emphasized (but perhaps not sufficiently) in my blog post that the most problematic relation that emerges is the one
that governs
• the Planck scale
• the GUT scale
• the mass of the right-handed neutrino
• and the Electroweak scale.
These four parameters are governed by $\Lambda$, $f_2$, and $M$^1. Even if you allow for fine-tuning, it is impossible to adjust the latter in such a way as to give sensible values for all of them.
As I explained, I don’t think one should take the “prediction” of the Higgs mass terribly seriously, so I don’t think it’s a big deal that that prediction has been falsified.
This issue seems to me to be the much more serious one, and changing the matter content a bit (say, by supersymmetrizing the construction) doesn’t seem likely to fix it.
^1 They also depend on various traces of powers of the Yukawa couplings. In all analyses, including my discussion, of the NCG SM, these are assumed to be O(1). If, instead, they are large, then the
theory is not weakly coupled at the GUT scale, which introduces a host of other problems.
Posted by: Jacques Distler on August 5, 2008 7:59 PM | Permalink | PGP Sig | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
A report by Tommaso Dorigo.
Posted by: David Corfield on July 31, 2008 5:05 PM | Permalink | Reply to this
dimension 10.
I still believe that the most interesting postdiction of the new model is that the dimension of space time is, mod 8, equal to 10.
More interestingly, I wonder if it could be possible to move the coupling of the Higgs field to reach the extreme limits of the completely broken ($M_W, M_Z = \infty$) and completely unbroken ($M_W, M_Z = 0$) standard model. Does the dimension jump to 11 in the latter case? And if so, what about chirality?
And, in the opposite limit, does the dimension jump to 9? Remember that 9 is the minimum dimension for a SU(3)xU(1) group to live in, while 11 is the minimum for SU(3)xSU(2)xU(1).
Posted by: Alejandro Rivero on August 1, 2008 1:08 AM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
David didn’t actually state it here, so I will: the latest report says that m=171 is RULED OUT.
Posted by: Kea on August 2, 2008 1:38 AM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Hey, good point! This is much clearer from Dorigo’s new post, entitled New bounds for the Higgs: 115-135 GeV!
That’s fairly far from Schucker’s noncommutative geometry prediction, $171.6 \pm 5$ GeV.
Does this mean we don’t need to learn noncommutative geometry?
(Of course we’ve already learned it; indeed I sleep with Connes’ book under my pillow every night.)
Posted by: John Baez on August 2, 2008 7:03 AM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
I believe the correct statement of what his post tells us is that if the only corrections to the $\rho$-parameter are SM radiative corrections, then a Higgs heavier than 135 GeV is excluded at the $1\sigma$ level (and, I’d guess, a 171 GeV Higgs is excluded at the $3\sigma$ level or better).
Now, of course, if there are other beyond-the-SM corrections to the $\rho$-parameter, then the Higgs could be heavier. But then you’re not in the model of Connes et al.
The bottom line is: the Higgs is very light and/or there is significant beyond-the-SM physics lurking just around the corner.
Far more exciting than spectral triples, if you ask me.
Posted by: Jacques Distler on August 2, 2008 8:54 AM | Permalink | PGP Sig | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Could you explain this $\rho$ parameter? And more generally, the concept of ‘Peskin–Takeuchi parameter’? Sounds like part of some ‘parametrized post-Standard-Model physics’ scheme.
Posted by: John Baez on August 2, 2008 10:20 AM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
At tree-level, in the SM, the W and Z masses are related by
(1)$M_W^2 = M_Z^2 \cos^2(\theta_w)$
where $\tan(\theta_w) = \frac{g_1}{g_2}$ is the Weinberg angle.
The $\rho$-parameter is defined as the ratio of the LHS and the RHS of (1). It’s 1 at tree-level, but, of course, even in the SM, it receives radiative corrections.
Among the corrections are Higgs loops $\begin{svg} <svg xmlns="http://www.w3.org/2000/svg" width="12em" height="4em" viewBox="0 0 150 50"> <g fill="none" stroke="black"> <path stroke-width="1" d="M 5
45 c 10 10 10 -10 20 0 s 10 -10 20 0 s 10 -10 20 0 s 10 -10 20 0 s 10 -10 20 0 s 10 -10 20 0 s 10 -10 20 0"/> <path stroke-width="2" stroke-dasharray="2" d="M 45 45 a 29 29 0 0 1 58 0"/> </g> </svg>
\end{svg}$ which correct $\rho = 1 - \frac{11 g_2^2}{96\pi^2} \tan^2(\theta_w) \log\left(\frac{m_h}{M_W}\right)$ Top/bottom loops produce a similar correction.
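Plugging rough numbers into the Higgs-loop correction quoted above gives a feel for its size. The inputs below ($g_2$, $\sin^2\theta_w$, $M_W$) are approximate textbook values, and $m_h = 170$ GeV is the NCG figure under discussion; treat them all as illustrative assumptions:

```python
import math

# Evaluate rho = 1 - (11 g_2^2 / 96 pi^2) tan^2(theta_w) log(m_h / M_W)
# with rough electroweak inputs (assumptions, not fitted values).
g2 = 0.65
sin2_thw = 0.231
tan2_thw = sin2_thw / (1 - sin2_thw)
MW, mh = 80.4, 170.0  # GeV

rho = 1 - (11 * g2**2 / (96 * math.pi**2)) * tan2_thw * math.log(mh / MW)
print(round(rho, 4))  # a per-mille-level shift below the tree-level value 1
```

Small as it is, a shift of this size is what the electroweak precision fits are sensitive to.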
I’ll recommend the perfectly fine Wikipedia article for an explanation of the Peskin-Takeuchi parameters.
More fundamentally, I think it’s best to think about them in an effective Lagrangian approach to the SM. Instead of a Higgs, one has a gauged nonlinear $\sigma$-model.
When writing an effective Lagrangian, one should not stop at the 2-derivative terms. One should also write down all possible 4-derivative terms (with arbitrary coefficients). There are 11 or so such
terms, and you can find them described in section 9.2 of Sally Dawson’s review.
The Peskin-Takeuchi parameters are particular linear combinations of these 11 coupling constants.
Posted by: Jacques Distler on August 2, 2008 4:36 PM | Permalink | PGP Sig | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
[Alain Connes sent the following reply to the above discussion by email. With his kind explicit permission I forward it here. -Urs ]
Dear Urs,
The only relevant talk on that subject is the talk by Chamseddine #. I had the misfortune to look at the blog discussion and am too busy and sensitive to indulge into that, but I just want to give
you the answer to several questions:
1) The choice of the function $f$ is that of a “cutoff” function (flat constant $= 1$ near 0 and then gently decreasing to 0), and thus the coefficients of the higher terms ($a_6$, $a_8$, etc.) all vanish, since they are given by the Taylor expansion of $f$ at $x=0$.
2) The algebra $\mathbb{C} + \mathbb{H} + M_3(\mathbb{C})$ is no longer an “input” but we have shown with Ali in the paper Why the standard model how to derive both the algebra and the representation
from basic classification results.
3) The idea about “predictions” is simple: the spectral action is an effective model at a specific scale (of the order of the unification scale) and one runs it down using the Wilsonian approach. Writing precise figures for the Higgs mass as Schucker does is ridiculous; what one has, assuming the “big desert”, is an order of magnitude for the Higgs mass, and it is at the upper side of the allowed interval.
All the comments of Distler are to the point and in particular the relation between the Higgs quartic coupling and the gauge couplings is the equation (5.6) page 53 of the paper with Chamseddine and
Marcolli which indeed involves the square of the gauge coupling.
4) There is no “a priori” reason of incompatibility between ncg and susy, all the more since in both cases one makes small corrections to the algebra of functions on space-time. However one has to wait until there is some real experimental evidence for susy before embarking on what promises to be quite complicated stuff.

Recent precision data fits seem to prefer the lower part of the allowed interval for the Higgs mass; in case this were experimentally confirmed, it would be a clear motivation to go ahead and try to merge susy with the ncg picture.
Hope this helps a bit to clarify the mess.
Posted by: Alain Connes on August 2, 2008 2:02 PM | Permalink | Reply to this
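A cutoff function of the kind described in point 1 of the email above can be sketched numerically. The particular interval endpoints (flat on $[0, 1/2]$, zero beyond 1) are arbitrary illustrative choices:

```python
import math

# A smooth cutoff: identically 1 on [0, 1/2], identically 0 beyond 1,
# glued with the standard C-infinity bump g(t) = exp(-1/t).  Because f is
# constant near 0, all Taylor coefficients of f at 0 beyond f(0) = 1
# vanish, which is the property invoked for the higher heat-kernel terms.
def g(t):
    return math.exp(-1.0 / t) if t > 0 else 0.0

def f(x):
    # denominator never vanishes: g(1-x) > 0 for x < 1, g(x-0.5) > 0 for x > 1/2
    return g(1.0 - x) / (g(1.0 - x) + g(x - 0.5))

print(f(0.25), f(0.75), f(1.5))  # flat at 1, smooth descent, then 0
```

This is the textbook smooth-partition-of-unity construction; nothing beyond its flatness at 0 is specific to the spectral action.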
Re: Pre- and Postdictions of the NCG Standard Model
Writing precise figures for the Higgs mass […] is ridiculous
Maybe for completeness one could add here what kind of precision is meant. I see that on page 55 of CCM it gives
a Higgs mass of order 170 GeV
assuming some numerical input justified by the “big desert” assumption. Which is a risky assumption, possibly, in light of the recent data which is apparently not favoring a mass even roughly of
order 170 GeV.
In his talk here at HIM Ali Chamseddine said several times (in reply to questions, mostly) that supersymmetric extensions of the NCG model are thinkable, but that he is hesitant to invest work in them as long as experimental indication is missing. Maybe an indication of a light Higgs, as we have now (a few days after his talk), changes the situation?
Posted by: Urs Schreiber on August 4, 2008 3:24 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Urs wrote:
Y is taken to be a noncommutative space which is of dimension 0 as seen by heat diffusing on it
meaning ?? heat doesn’t diffuse??
Posted by: jim stasheff on August 5, 2008 2:18 AM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Urs wrote:
$Y$ is taken to be a noncommutative space which is of dimension 0 as seen by heat diffusing on it
meaning ??
The starting point of spectral geometry (often called noncommutative geometry) is that one observes that all the metric geometry of a Riemannian spin manifold, in particular its dimension, can be
reconstructed from the algebraic properties of the Dirac operator $D$ on that manifold. (Meaning: one can hear the shape of a drum – if one uses spinors as sensors.)
The idea is then that every operator which satisfies a handful of properties characteristic of Dirac operators on Riemannian spin manifolds can be interpreted as encoding, in this algebraic way, some
kind of generalized geometry.
In this context the dimension of a space is encoded in the spectral growth of the Dirac operator: the growth of the number of eigenvalues of the square of the Dirac operator below a given bound,
$N_{D^2} : \lambda \mapsto \text{number of eigenvalues of } D^2 \text{ smaller than } \lambda \,.$
For $D$ an ordinary Dirac operator on a Riemannian spin manifold this number asymptotically grows as a power of $\lambda$
$N_{D^2}(\lambda) \to const \;\lambda^{n/2}$ as $\lambda \to \infty$. The exponent $n$ here is the dimension of the underlying manifold.
So given any generalized Dirac operator, we say that it defines a generalized Riemannian geometry whose metric dimension is twice the exponent of the asymptotic spectral growth of the operator.
For more details see for instance
Joseph Várilly, Dirac operators and spectral geometry, in particular around page 39,40.
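This growth law can be checked numerically in the simplest examples. For the flat $n$-torus the eigenvalues of $D^2 = -\Delta$ are the numbers $|k|^2$ with $k \in \mathbb{Z}^n$, so $N_{D^2}$ is just a lattice-point count and the exponent can be estimated from two sample points (a rough sketch; the estimated slope only approaches $n/2$ asymptotically):

```python
import itertools, math

def counting(n, lam, kmax):
    """N(lam) for D^2 = -Laplacian on the flat n-torus: the eigenvalues
    are |k|^2 for k in Z^n, so this is a lattice-point count."""
    return sum(1 for k in itertools.product(range(-kmax, kmax + 1), repeat=n)
               if sum(c * c for c in k) <= lam)

for n in (1, 2, 3):
    lam1, lam2 = 200, 800
    kmax = int(math.sqrt(lam2)) + 1
    slope = (math.log(counting(n, lam2, kmax) / counting(n, lam1, kmax))
             / math.log(lam2 / lam1))
    print(n, 2 * slope)   # 2*slope approaches the metric dimension n
```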
We have to distinguish here “metric dimension” form other kinds of dimensions. For ordinary riemannian manifolds the dimension of the manifold affects not just the spectral growth of its Dirac
operator, but also other things, such as the representation theory of the spinors on that manifold. It turns out that this representation theory of spinors is entirely encoded in three choices of
signs which control the way the Dirac operator and other operators in the game commute with an operator called a “real structure”. These three choices of signs yield 8 different cases, numbered 0 to 7
and called the KO-dimension encoded by the Dirac operator. For ordinary Riemannian manifolds KO dimension is the ordinary dimension modulo 8. But for more general generalized spaces encoded by Dirac
operators, KO-dimension and metric dimension need no longer be related this way. In particular, the metric dimension may disappear while the KO-dimension remains different from 0.
A KO-dimension of [6] is important in the discussion we had, because this is the KO-dimension which allows one to have Majorana-Weyl spinors.
Finally: there is a well known physical interpretation of the eigenvalues of the square of a Dirac operator, i.e. of Laplace operators $\Delta$: a heat distribution $T : \Sigma \to \mathbb{R}$ on a
Riemannian manifold $\Sigma$ changes in time $t$, due to diffusion, according to the heat equation
$\frac{\partial}{\partial t} T = \Delta T \,.$
therefore the eigenvalues of $\Delta$ control the way heat diffuses on $\Sigma$.
Of course the heat equation is just the Schrödinger equation in imaginary time. So the eigenvalues of $\Delta$ can also be regarded as controlling the energy and time propagation of quantum particles
propagating on $\Sigma$.
And that’s precisely the point of spectral triples. A spectral triple is an axiomatization of quantum mechanics of spinorial particles (aka superparticles) coupled to gauge forces (= connections) and
gravity (= Riemannian curvature). The statement of spectral geometry is that from the dynamics of these particles alone can one read off the properties of the geometry in which they propagate.
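The smoothing effect of the semigroup $e^{t\Delta}$ is also easy to see in a toy discretization. Here is a sketch on a periodic grid (a circle); the grid size and evolution time are arbitrary choices:

```python
import numpy as np

m = 128
h = 2 * np.pi / m       # grid spacing on a circle of circumference 2*pi
# symmetric second-difference Laplacian with periodic boundary conditions
Lap = (np.roll(np.eye(m), 1, axis=0) - 2 * np.eye(m)
       + np.roll(np.eye(m), -1, axis=0)) / h**2
evals, V = np.linalg.eigh(Lap)      # evals are <= 0, approximately -k^2

T0 = np.random.default_rng(0).standard_normal(m)   # rough initial heat profile

def heat(t):
    """Apply the heat semigroup e^{t*Laplacian} mode by mode."""
    return V @ (np.exp(evals * t) * (V.T @ T0))

T1 = heat(0.5)
print(T0.std(), T1.std())   # diffusion kills the high modes fastest
```

The mean (the constant, eigenvalue-0 mode) is preserved, while the oscillatory modes decay like $e^{-k^2 t}$.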
Posted by: Urs Schreiber on August 5, 2008 10:27 AM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Another piece of the puzzle is that the heat equation is also the “time” component of an exact 1-form
$dT = dx^\mu (\partial_\mu T) + dt (\partial_t T - g^{\mu\nu} \partial_\mu \partial_\nu T)$
that arises naturally when one introduces noncommutativity between space and time components via
$[dx^\mu,t] = [dt,t] = 0$
$[dx^\mu, x^\nu] = g^{\mu\nu} dt$
as outlined in Equation 55 here.
I called this a “right martingale” as opposed to a “left martingale”. The concept of right and left martingales were introduced because of the relation
\begin{aligned} dT &= dx^\mu (\partial_\mu T) + dt (\partial_t T - g^{\mu\nu} \partial_\mu \partial_\nu T) \\ &= (\partial_\mu T) dx^\mu + (\partial_t T - g^{\mu\nu} \partial_\mu \partial_\nu T) dt, \end{aligned}
where, due to the noncommutativity, it matters on which side of the basis elements you write the components.
Note: I am aware that there is a term involving the connection coefficients missing from the Laplace-Beltrami operator, but I haven’t yet figured out if there is a way to get it in there naturally
via the defining commutative relations.
Since this naturally describes stochastic processes on manifolds, I fully expect there to be some beautiful little trick to arrive at something like
$dT = dx^\mu (\partial_\mu T) + dt (\partial_t T - \Delta T).$
If anyone got bored, it might be a fun little exercise for a back of a napkin or something to work this out.
Posted by: Eric on August 5, 2008 5:04 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Oops! Some bad cut and paste there. Swapping from right to left components changes the sign in the heat equation. It should have been
\begin{aligned} dT &= dx^\mu (\partial_\mu T) + dt (\partial_t T - g^{\mu\nu} \partial_\mu\partial_\nu T) \\ &= (\partial_\mu T) dx^\mu + (\partial_t T + g^{\mu\nu} \partial_\mu\partial_\nu T) dt. \end{aligned}
It is kind of neat. Switching coefficients from before the time component to after the time component has the effect of reversing time in the heat equation.
Posted by: Eric on August 5, 2008 5:10 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
More than six years after writing that paper, I finally got it.
I “guessed” Equation 48
$\stackrel{\leftarrow}{\partial_t} = \partial_t + \frac{1}{2} g^{\mu\nu} \partial_\mu\partial_\nu$
simply because it was fairly obvious that it satisfied the required expression
$\stackrel{\leftarrow}{\partial_t}(fg) = (\stackrel{\leftarrow}{\partial_t} f) g + f (\stackrel{\leftarrow}{\partial_t} g) + g^{\mu\nu} (\partial_\mu f)(\partial_\nu g).$
Checking wikipedia, I see that the Laplace-Beltrami operator satisfies
$\Delta(fg) = (\Delta f)g + f(\Delta g) + 2 g^{\mu\nu} (\partial_\mu f)(\partial_\nu g).$
Therefore, I could have and should have defined
$\stackrel{\leftarrow}{\partial_t} = \partial_t + \frac{1}{2} \Delta.$
Now it is obvious that the required properties are satisfied and we get the much nicer version (as I guessed we should have)
$dT = dx^\mu (\partial_\mu T) + dt (\partial_t T - \Delta T).$
The connection (no pun intended) between stochastic processes on manifolds and noncommutative geometry is established (if it wasn’t already). Woohoo! :)
That was fun for napkin math.
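The flat-metric case of this product rule is easy to confirm with a computer algebra system. Here is a sketch in two space dimensions with $g^{\mu\nu} = \delta^{\mu\nu}$ (two variables only for brevity; the general constant-metric case works the same way):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = sp.Function('f')(x, y, t)
g = sp.Function('g')(x, y, t)

def left_dt(h):
    # left time derivative: d_t + (1/2) * sum_mu d_mu d_mu, i.e. d_t + Delta/2
    return sp.diff(h, t) + sp.Rational(1, 2) * (sp.diff(h, x, 2) + sp.diff(h, y, 2))

lhs = left_dt(f * g)
rhs = (left_dt(f) * g + f * left_dt(g)
       + sp.diff(f, x) * sp.diff(g, x) + sp.diff(f, y) * sp.diff(g, y))
print(sp.simplify(lhs - rhs))   # 0
```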
Posted by: Eric on August 5, 2008 7:09 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
Therefore, I could have and should have defined […]
In local coordinates such that $g^{\mu\nu}$ are constants the Laplace operator is of course $g^{\mu\nu}\partial_\mu \partial_\nu$. That equation you cite tells us that even in other cases, this is still
the symbol of the Laplace operator, i.e. its component in second order differential operators.
Posted by: Urs Schreiber on August 5, 2008 7:28 PM | Permalink | Reply to this
Re: Pre- and Postdictions of the NCG Standard Model
To (temporarily) conclude the debate on the Higgs mass from ncg, just have a look on ncg blog http://noncommutativegeometry.blogspot.com/
Nature is cruel…
Posted by: Pierre on August 5, 2008 6:31 PM | Permalink | Reply to this
Proposition Logic problem
October 29th 2008, 07:39 AM
Proposition Logic problem
Anyone good with proposition logics? that can help out?
im stuck on one question and can't figure out the next step.
The question states : Use the rules of inference to prove the following.
(p -> q) ^ (r -> s) ^ [t -> ~(q V s)] ^ t => (~p ^ ~r)
note : the " ~ " sign means NOT,
the " ^ " sign means AND,
the " -> " sign means implies/conditional
the " V " sign means OR
the " => " sign means resultant or conclusion is.
I've set up my working up to look like this.
P -> q H1(Hypothesis 1)
r -> s H2(Hypothesis 2)
t -> ~(q V s) H3(Hypothesis 3)
t H4(Hypothesis 4)
~p ^ ~r C (Conclusion)
Step one. I got H4 with H3 and AND both of them >> H4 ^ H3 which gives:
t ^ [t -> ~(q V s)] and since t ^ t cancels out with the modus ponens rules it leaves us with ~(q V s).
So now that H4 and H3 is gone we have H1 and H2 left. This is where i got stuck.
I decided to use de morgan's law to simplify which gave me >> ~(q V s) => ~q ^ ~s and we can call this theory one (1).
So far i've only got up to this part and now i'm stuck could you help me figured out what i need to do next?
Working : t ^ [t -> ~(q V s)] <=> ~(q V s) H4 ^ H3
<=> ~q ^ ~s (1).
Next step = ? please help
October 29th 2008, 08:23 AM
Your next step would be to use the Law of Contrapositives to change p -> q and r -> s into ~q -> ~p and ~s -> ~r.
Since you know ~q ^ ~s already, you're a short step away from your goal.
Nicely done so far!
October 29th 2008, 08:42 AM
oh ok thnx alot let me try out the next step.
October 29th 2008, 08:53 AM
Damn... i suck bad...
is (~p -> ~q) ^ (~q ^ ~s) possible? using the Hypothecial Syllogism concept. i know it works for if both of them are conditional but just wondering if this method works?
if it does would it look something like ~p->~s or ~p^~s ._.
if ~p->~s works out then... it would be alot easier because then i would just use Hypothetical syllgism law again for ~p->~s ^ ~s->~r which gives me ~p->~r but... then again it doesnt work out...
sorry im really bad with this topic. a little more help please?
October 29th 2008, 08:58 AM
opps sorry its suppose to be ~q -> ~p not ~p -> ~q my bad.
so that would mean :
(~q -> ~p) ^ (~q ^ ~s) using exportation... is it possible?
gives (~p ^ ~s) -> ~q ? ._. i don't know sorry =/
October 29th 2008, 10:08 AM
Ok i think i got it but could someone check the answer for me? if i'm wrong let me know where i got wrong.
t ^ [t -> ~(q V s)] <=> ~(q V s) ... H4 ^ H3 Modus ponens
<=> ~q ^ ~s (1) ... de morgan's law
p -> q => ~q -> ~p (2) ... Contrapositive
r -> s => ~s -> ~r (3) ... Contrapositive
(~q -> ~p) ^ (~q ^ ~s) .... (2) ^ (1)
=> ~p ^ ~s ... Disjunctive Syllogism
=> ~s ^ ~p (4) ... Commutativity
(~s -> ~r) ^ (~s ^ ~p) (3) ^ (4)
=> ~r ^ ~p ... Disjunctive Syllogism
=> ~p ^ ~r ... Commutativity <--- Conclusion
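Independently of which inference rules are named at each step, the entailment itself can be confirmed by brute force over all $2^5$ truth assignments; a small Python sketch:

```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b
    return (not a) or b

# premises: (p -> q), (r -> s), (t -> ~(q V s)), t; conclusion: ~p ^ ~r
valid = all(
    implies(implies(p, q) and implies(r, s) and implies(t, not (q or s)) and t,
            (not p) and (not r))
    for p, q, r, s, t in product([False, True], repeat=5)
)
print(valid)  # True
```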
October 29th 2008, 10:16 AM
opps i think i have the laws wrong... but could someone please check my answer? and let me know if i got it right or wrong... if wrong please show me where i went wrong.
PHYS 1004 INTRODUCTORY ELECTROMAGNETISM AND WAVE MOTION, 2013 Winter Term
Assigned Problems for Tutorial #6: Faraday's Law & Inductance, AC Circuits

1. The loop in the figure below has a width of w = 5.0 cm and is being pushed into the B = 0.20 T magnetic field at a speed of v = 50 m/s. The resistance of the loop is R = 0.10 Ω.
a. What is the direction of the induced current in the loop?
b. Find an expression for the magnitude of the induced current in the loop in terms of the given variables.
c. What is the value of the magnitude of the induced current in the loop?

2. The maximum allowable potential difference across a L = 200 mH inductor is V_max = 400 V. You need to raise the current through the inductor from I_1 = 1.0 A to I_2 = 3.0 A.
a. Find an expression for the minimum time you should allow for changing the current.
b. What is the value of the minimum time you should allow for changing the current?

3. The figure below shows a square of side length a = 10 cm bent at a 90° angle. A uniform B = 0.050 T magnetic field points downward at a 45° angle.
a. Find an expression for the magnetic flux through the loop.
b. What is the value of the magnetic flux through the loop?

4. A square loop of side length a = 10 cm lies in the xy-plane. The magnetic field in this region of space is B = (0.30t î + 0.50t² k̂) T, where t is in seconds.
a. Find an expression for the emf induced in the loop as a function of time. ...
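For reference, the numerical parts of problems 1 and 2 reduce to one-line formulas; a quick sketch (the expressions I = Bvw/R and Δt = L·ΔI/V_max follow from the motional emf ε = Bvw and from V = L dI/dt):

```python
# Problem 1c: motional emf gives an induced current I = B*v*w / R
B, v, w, R = 0.20, 50.0, 0.050, 0.10
I = B * v * w / R

# Problem 2b: minimum time from V_max = L * dI/dt, so dt = L*(I2 - I1)/V_max
L, V_max, I1, I2 = 0.200, 400.0, 1.0, 3.0
dt = L * (I2 - I1) / V_max

print(round(I, 6), round(dt, 9))   # 5.0 0.001
```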
recursive sequences!
January 24th 2008, 11:05 AM #1
hey guys...I'm having a big trouble with Recursive Sequences(Sequences and Series in Majority)! I just failed on my first exam because of not having a clear idea about them(their convergence,
thnx for your time
what about your text? did you try google or wikipedia?
remember, you can also ask specific questions here and we will help you
Yeah I tried google but I'm not finding the 'core'.
The problem is that my professor didn't do many exercises on this particular topic! On the other hand such problems do appear on tests!
for e.x. like this one:
or another example:
too messy
show that the sequence is monotonically decreasing and that it is bounded below by 0. this will show it converges. you can do this using induction
for the second part, call the limit $L$ and note that $\lim x_n = \lim x_{n + 1}$
so, $\lim x_{n + 1} = \lim \frac {\sqrt{x_n^2 + 1} - 1}{x_n} \implies L = \frac {\sqrt{L^2 + 1} - 1}{L}$
now solve for $L$
indeed. as i said javax, induction is the way i usually see this done. and you are to show the sequence is monotonic and bounded, because then, a theorem tells us it converges (by the way, it is
in proving the limit exists that you realize which of the answers in the second part to disregard)
yes. so when we take the limit, all the $y_n$'s go to $L$, and we get the expression i have there
now we must solve for $L$. begin by expanding the right hand side, and then try to get all the $L$'s on one side to solve for $L$
Thanks, I think I got your point...I'll see a bit more about induction
Thanks again!
ahhh, I meant 'something'
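Assuming the recursion behind the thread is $x_{n+1} = \frac{\sqrt{x_n^2+1}-1}{x_n}$ (the attached images did not survive, but the limit equation quoted above points to this form), a few iterations make the monotone decrease toward $L = 0$ visible:

```python
import math

def step(x):
    # assumed recursion: x_{n+1} = (sqrt(x_n^2 + 1) - 1) / x_n
    return (math.sqrt(x * x + 1) - 1) / x

xs = [1.0]
for _ in range(10):
    xs.append(step(xs[-1]))

print(xs[:4])   # 1.0, 0.414..., 0.198..., 0.098...
# monotonically decreasing and bounded below by 0
assert all(a > b > 0 for a, b in zip(xs, xs[1:]))
```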
The unrestricted blocking number in convex geometry
Sezgin, S.; (2010) The unrestricted blocking number in convex geometry. Doctoral thesis, UCL (University College London).
Let K be a convex body in \mathbb{R}^n. We say that a set of translates \left \{ K + \underline{u}_i \right \}_{i=1}^{p} block K if any other translate of K which touches K, overlaps one of K + \
underline{u}_i, i = 1, . . . , p. The smallest number of non-overlapping translates (i.e. whose interiors are disjoint) of K, all of which touch K at its boundary and which block any other translate
of K from touching K is called the Blocking Number of K and denote it by B(K). This thesis explores the properties of the blocking number in general but the main purpose is to study the unrestricted
blocking number B_\alpha(K), i.e., when K is blocked by translates of \alpha K, where \alpha is a fixed positive number and when the restrictions that the translates are non-overlapping or touch K
are removed. We call this number the Unrestricted Blocking Number and denote it by B_\alpha(K). The original motivation for blocking number is the following famous problem: Can a rigid material
sphere be brought into contact with 13 other such spheres of the same size? This problem was posed by Kepler in 1611. Although this problem was raised by Kepler, it is named after Newton since Newton
and Gregory had a dispute over the solution which was eventually settled in Newton’s favour. It is called the Newton Number, N(K) of K and is defined to be the maximum number of non-overlapping
translates of K which can touch K at its boundary. The well-known dispute between Sir Isaac Newton and David Gregory concerning this problem, which Newton conjectured to be 12, and Gregory thought to
be 13, was ended 180 years later. In 1874, the problem was solved by Hoppe in favour of Newton, i.e., N(\beta^3) = 12. In his proof, the arrangement of 12 unit balls is not unique. This is thought to
explain why the problem took 180 years to solve although it is a very natural and a very simple sounding problem. As a generalization of the Newton Number to other convex bodies the blocking number
was introduced by C. Zong in 1993. “Another characteristic of mathematical thought is that it can have no success where it cannot generalize.” C. S. Pierce As quoted above, in mathematics
generalizations play a very important part. In this thesis we generalize the blocking number to the Unrestricted Blocking Number. Furthermore, we also define the Blocking Number with negative copies
and denote it by B_-(K). The blocking number not only gives rise to a wide variety of generalizations but it also has interesting manifestations in nature. For instance, there is a direct relation to
the distribution of holes on the surface of pollen grains with the unrestricted blocking number.
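As a toy illustration of the definition (not taken from the thesis): for the unit disk $K$ in the plane, tangent translates have centers at distance 2 from the center of $K$, and two tangent translates overlap exactly when their angular gap is less than $\pi/3$; hence a family of tangent translates blocks $K$ iff the largest angular gap between consecutive blockers is below $2\pi/3$. Four translates at right angles suffice, while three equally spaced ones do not:

```python
import math

def is_blocked(blocker_angles):
    """Blockers are unit disks tangent to the unit disk K at the origin,
    with centers at distance 2 in the given directions.  The hardest
    incoming tangent translate sits mid-way in the largest angular gap,
    so K is blocked iff that gap is < 2*pi/3 (overlap = open interiors meet)."""
    a = sorted(phi % (2 * math.pi) for phi in blocker_angles)
    gaps = [q - p for p, q in zip(a, a[1:])]
    gaps.append(a[0] + 2 * math.pi - a[-1])   # wrap-around gap
    return max(gaps) < 2 * math.pi / 3

four = [k * math.pi / 2 for k in range(4)]
three = [k * 2 * math.pi / 3 for k in range(3)]
print(is_blocked(four), is_blocked(three))   # True False
```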
Type: Thesis (Doctoral)
Title: The unrestricted blocking number in convex geometry
Open access status: An open access version is available from UCL Discovery
Language: English
Additional information: The abstract contains LaTeX text. Please see the attached pdf for rendered expressions
UCL classification: UCL > School of BEAMS > Faculty of Maths and Physical Sciences > Mathematics
Norton's Theorem
Norton's Theorem states that it is possible to simplify any linear circuit, no matter how complex, to an equivalent circuit with just a single current source and parallel resistance connected to a
load. Just as with Thevenin's Theorem, the qualification of “linear” is identical to that found in the Superposition Theorem: all underlying equations must be linear (no exponents or roots).
Contrasting our original example circuit against the Norton equivalent: it looks something like this:
. . . after Norton conversion . . .
Remember that a current source is a component whose job is to provide a constant amount of current, outputting as much or as little voltage necessary to maintain that constant current.
As with Thevenin's Theorem, everything in the original circuit except the load resistance has been reduced to an equivalent circuit that is simpler to analyze. Also similar to Thevenin's Theorem are
the steps used in Norton's Theorem to calculate the Norton source current (I[Norton]) and Norton resistance (R[Norton]).
As before, the first step is to identify the load resistance and remove it from the original circuit:
Then, to find the Norton current (for the current source in the Norton equivalent circuit), place a direct wire (short) connection between the load points and determine the resultant current. Note
that this step is exactly opposite the respective step in Thevenin's Theorem, where we replaced the load resistor with a break (open circuit):
With zero voltage dropped between the load resistor connection points, the current through R[1] is strictly a function of B[1]'s voltage and R[1]'s resistance: 7 amps (I=E/R). Likewise, the current
through R[3] is now strictly a function of B[2]'s voltage and R[3]'s resistance: 7 amps (I=E/R). The total current through the short between the load connection points is the sum of these two
currents: 7 amps + 7 amps = 14 amps. This figure of 14 amps becomes the Norton source current (I[Norton]) in our equivalent circuit:
Remember, the arrow notation for a current source points in the direction opposite that of electron flow. Again, apologies for the confusion. For better or for worse, this is standard electronic
symbol notation. Blame Mr. Franklin again!
To calculate the Norton resistance (R[Norton]), we do the exact same thing as we did for calculating Thevenin resistance (R[Thevenin]): take the original circuit (with the load resistor still
removed), remove the power sources (in the same style as we did with the Superposition Theorem: voltage sources replaced with wires and current sources replaced with breaks), and figure total
resistance from one load connection point to the other:
Now our Norton equivalent circuit looks like this:
If we re-connect our original load resistance of 2 Ω, we can analyze the Norton circuit as a simple parallel arrangement:
As with the Thevenin equivalent circuit, the only useful information from this analysis is the voltage and current values for R[2]; the rest of the information is irrelevant to the original circuit.
However, the same advantages seen with Thevenin's Theorem apply to Norton's as well: if we wish to analyze load resistor voltage and current over several different values of load resistance, we can
use the Norton equivalent circuit again and again, applying nothing more complex than simple parallel circuit analysis to determine what's happening with each trial load.
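The whole procedure reduces to a few lines of arithmetic. The component values below are assumptions (any two branches with E/R = 7 A fit the text); B1 = 28 V, R1 = 4 Ω, B2 = 7 V, R3 = 1 Ω with the 2 Ω load are one consistent choice:

```python
B1, R1 = 28.0, 4.0    # left battery with its series resistor (assumed values)
B2, R3 = 7.0, 1.0     # right battery with its series resistor (assumed values)
R_load = 2.0          # the load resistor R2

# Step 1: short the load terminals; each branch contributes E/R
I_norton = B1 / R1 + B2 / R3            # 7 A + 7 A = 14 A
# Step 2: zero the sources; R1 and R3 appear in parallel
R_norton = 1.0 / (1.0 / R1 + 1.0 / R3)  # 0.8 ohms
# Steps 3-4: re-attach the load and use the current divider
I_load = I_norton * R_norton / (R_norton + R_load)
V_load = I_load * R_load

print(I_norton, R_norton, round(I_load, 9), round(V_load, 9))   # 14.0 0.8 4.0 8.0
```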
• REVIEW:
• Norton's Theorem is a way to reduce a network to an equivalent circuit composed of a single current source, parallel resistance, and parallel load.
• Steps to follow for Norton's Theorem:
• (1) Find the Norton source current by removing the load resistor from the original circuit and calculating current through a short (wire) jumping across the open connection points where the load
resistor used to be.
• (2) Find the Norton resistance by removing all power sources in the original circuit (voltage sources shorted and current sources open) and calculating total resistance between the open
connection points.
• (3) Draw the Norton equivalent circuit, with the Norton current source in parallel with the Norton resistance. The load resistor re-attaches between the two open points of the equivalent circuit.
• (4) Analyze voltage and current for the load resistor following the rules for parallel circuits.
Solve for x.
Thank for the help. - Homework Help - eNotes.com
There are a few ways to solve for x in this problem. I will use a way that involves the formula for the area of a triangle, `A = 1/2 bh` .
Here b is one of the sides of the triangle ("b" stands for base) and h is the height, or segment perpendicular to this side, dropped from the opposite vertex to this side.
Since the triangle in the image is right triangle, the sides with lengths 5 and 12 are perpendicular to each other, so one can be considered a height and another one a base. So the area of this
triangle is
`A = 1/2 * 5 * 12 = 30` .
On the other hand, the segment of length x is pependicular to the side of length 13, so segment of length x is the height and the side of length 13 is the base. So the area of the same triangle is
`A = 1/2 *x *13` .
Setting this area equal to 30, the number obtained above, we get
`1/2*x*13 = 30`
From here `x = (30*2)/13 = 60/13`
So `x = 60/13` .
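The same computation can be double-checked in exact arithmetic, confirming that the two area formulas agree; a sketch using Python's `fractions`:

```python
from fractions import Fraction

leg_a, leg_b, hyp = Fraction(5), Fraction(12), Fraction(13)
assert leg_a**2 + leg_b**2 == hyp**2    # right triangle, so the legs serve as base and height

area = Fraction(1, 2) * leg_a * leg_b   # A = (1/2) b h using the legs
x = 2 * area / hyp                      # the same A = (1/2) * 13 * x, solved for x
print(x)   # 60/13
```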
American Canyon Statistics Tutor
...I would love to be a teacher but unfortunately, the teaching profession is not as secure as I would like it to be. Instead, I have worked as a trainer in a pharmaceutical manufacturing
environment where I strived to provide training in the most effective and creative ways possible. In order to be closer to home, I now work as a program manager for a vocational school in Napa.
29 Subjects: including statistics, Spanish, chemistry, reading
...Students who are currently “making the grade” will be more confident regarding their abilities and will be fully prepared for “honors” level instruction. High School students will be prepared
for pre-calculus and calculus instruction, and will be in position to confidently pass their AP tests. ALL students are fully capable of high academic achievement.
13 Subjects: including statistics, calculus, physics, algebra 2
...I have an undergraduate degree in biology and math and have worked many years as a data analyst in a medical environment. I have a PhD in economics and have taken 6 PhD level classes in
econometrics. I have years of experience using STATA.
49 Subjects: including statistics, calculus, physics, geometry
...I have been a Teaching Assistant (TA) for a number of probability courses, both at Caltech and at Cal. As an undergrad at Caltech, I was a TA for the intro to probability and statistics course
required for all undergrad students. As a masters student at Caltech, I was a TA for a graduate course on probability and random processes.
27 Subjects: including statistics, chemistry, calculus, physics
...Because of my expertise in so many subjects, I show them what they need to know for each test as well. I also teach time management and organization skills so that students can use their time
more efficiently. Teaching these study skills to my students has helped them dramatically increase their grades.
59 Subjects: including statistics, chemistry, reading, physics
A 2-category of chain complexes, chain maps, and chain homotopies?
First-time here... I hope my question isn't silly or anything... anyway...
Consider the category of chain complexes and chain maps. We can also define chain homotopies between chain maps. Does this form a 2-category? I am able to construct vertical and horizontal
composition of chain homotopies but am unable to prove that the horizontal composition is associative and that the interchange law holds.
I like to include a lot of details in case the reader isn't familiar with everything, so the following is the problem in detail.
To be a bit more concrete, let's let $(C_n, \partial_n), (C'_n, \partial'_n),$ and $(C''_n, \partial''_n)$ be chain complexes (sorry, my differential is going up in degree) and let $f_n, g_n, h_n:
C_n \to C'_n$ and $f'_n, g'_n, h'_n : C'_n \to C''_n$ be chain maps. Suppose that $\sigma : f \Rightarrow g,$ $\tau : g \Rightarrow h,$ $\sigma' : f' \Rightarrow g',$ and $\tau' : g' \Rightarrow h'$
are chain homotopies, where $\sigma_n, \tau_n : C_n \to C'_{n-1}$ and similarly for $\sigma'$ and $\tau'.$ Recall that these definitions say
$f_n \circ \partial_{n-1} = \partial'_{n-1} \circ f_{n-1}$
and similarly for primes, $g$'s, $h$'s, and
$\sigma_{n+1} \circ \partial_{n} + \partial'_{n-1} \circ \sigma_{n} = f_{n} - g_{n},$
$\tau_{n+1} \circ \partial_{n} + \partial'_{n-1} \circ \tau_{n} = g_{n} - h_{n},$
and similarly for primes.
We can define the vertical composition of $\sigma : f \Rightarrow g$ with $\tau : g \Rightarrow h$ by
$(\tau \diamond \sigma)_{n} := \tau_n + \sigma_n$
(I denote vertical composition with a diamond, $\diamond$).
We can also define the horizontal composition of $\sigma : f \Rightarrow g$ with $\sigma' : f' \Rightarrow g'$ by
$(\sigma' \circ \sigma)_n := \partial''_{n-2} \circ \sigma'_{n-1} \circ \sigma_{n} - \sigma'_{n} \circ \sigma_{n+1} \circ \partial_{n} + \sigma'_{n} \circ g_{n} + f'_{n-1} \circ \sigma_{n}.$
Now we just verify that these are indeed chain homotopies. The vertical composition is easy:
$(\tau \diamond \sigma)_{n+1} \circ \partial_{n} + \partial'_{n-1} \circ (\tau \diamond \sigma)_{n} = f_n - g_n + g_n - h_n = f_n - h_n$
and the horizontal composition is a bit more challenging but doable (I won't include the derivation here).
The identity for vertical composition is the zero map and similarly for the horizontal composition (note that for the vertical composition, the zero map is a chain homotopy between any two chain maps
provided they are the same while for the horizontal composition, the chain maps are both the identities). The vertical composition is easily seen to be associative. However, the horizontal
composition satisfies
$( \sigma'' \circ ( \sigma' \circ \sigma ) )_{n} - ( ( \sigma'' \circ \sigma' ) \circ \sigma )_n$
$= ( f''_{n-1} - g''_{n-1} ) \circ ( f'_{n-1} - g'_{n-1} ) \circ \sigma_{n} - \sigma''_{n} \circ ( f'_{n} - g'_{n} ) \circ ( f_n - g_n ).$
The interchange law doesn't hold either and the difference is given by
$( ( \tau' \diamond \sigma' ) \circ ( \tau \diamond \sigma ) )_{n} - ( ( \tau' \circ \tau ) \diamond ( \sigma' \circ \sigma ) )_{n}$
$= ( f'_{n-1} - g'_{n-1} ) \circ \tau_{n} + (g'_{n-1} - h'_{n-1} ) \circ \sigma_{n} - \tau'_{n} \circ ( f_{n} - g_{n} ) - \sigma'_{n} \circ (g_{n} - h_{n} )$
So I'm not sure if chain complexes, chain maps, and chain homotopies are supposed to form a 2-category, but I would've liked this result to be true. Does anyone know what the correct categorical structure of this category is? I have not yet considered chain homotopies of chain homotopies, so if the answer requires all such higher morphisms, then an answer in that direction would also be appreciated.
Any thoughts? Thanks in advance.
higher-category-theory homological-algebra
4 I think you need to use homotopy classes of chain homotopies. – Qiaochu Yuan Jan 28 '12 at 23:08
Do you mean "... and $(C''_n, \partial''_n)$ be ..." in the fourth paragraph ? – Ralph Jan 29 '12 at 13:51
yes, thank you! – Arthur Jan 29 '12 at 14:44
3 Answers
Part of the problem is that you're not using a convenient definition of chain homotopy. I'll use a differential going down in degree. Let $I$ be the interval object in the category of chain complexes; that is, $I$ is $\text{span}(0, 1)$ in degree $0$ and $\text{span}(e)$ in degree $1$ ($k$ the underlying commutative ring), and the differential sends $e$ to $1 - 0$. If $C, D$ are two complexes, I claim that a chain homotopy between two chain maps $f, g : C \to D$ is precisely a chain map $$H : I \to \text{Hom}(C, D)$$ such that the restriction of the map to $0$ is $f$ and the restriction of the map to $1$ is $g$, where $\text{Hom}$ is the hom chain complex. With this definition you can work guided by analogy to the topological situation (there the 2-category is the 2-category of topological spaces, continuous functions, and homotopy classes of homotopies between such functions); all of the maps you need have obvious topological definitions, although I admit I have never worked out the details. I think everything reduces to working with fairly concrete maps between some chain complexes constructed from $I$.
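To spell out how this packaging recovers the usual formulas (a sketch; signs and indexing conventions vary by author, and the ones below follow this answer's differential going down): the hom complex has $\text{Hom}(C,D)_k = \prod_n \text{Hom}(C_n, D_{n+k})$ with differential $(d\varphi)_n = \partial^D \circ \varphi_n - (-1)^k \varphi_{n-1} \circ \partial^C$. A chain map $H : I \to \text{Hom}(C,D)$ is then the data of two degree-$0$ cycles $H(0) = f$ and $H(1) = g$ (being a cycle is exactly the chain-map condition) together with a degree-$1$ element $H(e) = \sigma$ with components $\sigma_n : C_n \to D_{n+1}$, and compatibility with the differentials forces $$d\sigma = H(de) = H(1 - 0) = g - f,$$ which componentwise is $$\partial^D_{n+1} \circ \sigma_n + \sigma_{n-1} \circ \partial^C_n = g_n - f_n,$$ i.e. the usual chain homotopy identity (up to an overall sign relative to the convention in the question).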
2 Equivalently, a homotopy is a map $H: C \otimes I \rightarrow D$, and again you can be guided by the topology :). And $I$ didn't come from nowhere: it is the simplicial chain complex
associated to the unit interval. It is also equivalent to what you get when you take the normalized complex of the simplicial abelian group $\mathbb{Z}\Delta^1$... etc. – Dylan Wilson
Jan 29 '12 at 1:04
Yeah, I completely forgot to input the homotopy condition. Thanks for the reminder and also the quick response. Perhaps this viewpoint will allow me to find this condition as well as
the conditions for higher chain homotopies. I can see that it works for the 1-category description (namely, such an $H$ produces the $f,g,$ and $\sigma$ by considering the restriction
to 1,0, and $\mathrm{span}(e)$ respectively). This should take care of those leftover terms. For the higher homotopies the idea is very similar--restrict to the appropriate corners,
boundaries, and faces. Awesome! – Arthur Jan 29 '12 at 1:08
Yeah, this is pretty beautiful. You immediately get the algebraic condition for when two $(n-1)$-chain homotopies are homotopy equivalent by applying a chain map $I \otimes \cdots \otimes I \otimes C \to D$ ($n$ interval objects) to the ``face'' of the $n$-cube and viewing that as an $n$-chain homotopy between the two. This gives a very elegant algebraic expression that looks just like the usual chain homotopy condition for the first level. The differences between the two chain homotopies I wrote above satisfy this requirement, so this solves the problem! Thanks again! :) – Arthur Jan 29 '12 at 3:49
1 Strictly speaking, what you get this way isn't a bicategory, it is a $(\infty,1)$-category, as they call it in nLab. Your multiplication of 2-cells is defined and associative only up
to a coherent action of 3-cells. And multiplication of 3-cells is ok only up to 4-cells. This is just the same in topology: if you have a homotopy from $f$ to $g$ and from $g$ to $h$,
you don't have a uniquely defined homotopy from $f$ to $h$! There are numerous ways to contract $[0;2]$-interval into a $[0;1]$-interval. – Anton Fetisov Jan 29 '12 at 22:38
1 @Anton: isn't the point of taking homotopy classes of homotopies to quotient out by the action of those 3-cells? I am pretty sure we get an honest bicategory, perhaps even a (strict)
2-category, this way. – Qiaochu Yuan Jan 29 '12 at 23:20
Viewing homotopy via interval objects analogously to topological homotopy, as done in Qiaochu Yuan's answer, is certainly the best way to consider the problem. Nevertheless it's also possible to get along directly with your definition of homotopy.
In this point of view the problem is that your horizontal composition isn't appropriate: First note that we have $\sigma: f \Rightarrow g,\; \sigma': f' \Rightarrow g'$. Thus (I just write $f'f$ for $f'\circ f$) $$f'f - g'g = f'f - f'g + f'g - g'g = ...=\partial^{''}(f'\sigma+\sigma'g) + (f'\sigma+\sigma'g)\partial. $$ Therefore $\sigma' \circ \sigma:= f'\sigma+\sigma'g: f'f \Rightarrow g'g$ is a homotopy.
Let $\sigma'': f'' \Rightarrow g''$ be another homotopy. By plugging into the definition of the former homotopy, one finds that $$(\sigma'' \circ \sigma') \circ \sigma = f''f'\sigma + f''\sigma' g + \sigma'' g'g = \sigma'' \circ (\sigma' \circ \sigma)$$ is a homotopy $(f'' f')f = f''(f'f) \Rightarrow g''(g'g) = (g''g')g$.
Hence the horizontal composition is associative.
Thanks. And yes, I believe your definition of horizontal composition is the same as mine under equivalence of higher chain homotopies now that I know what that condition is. – Arthur
Jan 29 '12 at 23:43
I think the right context for this question is that of monoidal closed categories with a unit interval object, and probably the first place to explore this is the nLab.
I mention that we set up a number of monoidal closed categories in our book
R. Brown, P.J. Higgins, R. Sivera, Nonabelian algebraic topology: filtered spaces, crossed complexes, cubical homotopy groupoids, EMS Tracts in Mathematics Vol. 15, 703 pages (August 2011). http://www.bangor.ac.uk/r.brown/nonab-a-t.html (pdf available there)
for example: crossed complexes, and chain complexes with a groupoid of operators, and explore the relations between them. As explained in the above answer and comments, the cubical approach has some advantages when dealing with homotopies and higher homotopies, basically because of the formula $I^m \otimes I^n \cong I^{m+n}$.
Added as edit: the other point about the cubical formulation is that there is a notion of cubical set with compositions (and also extra structure such as connections), but we do not have a similar notion simplicially, or not so easily. Globular notions have compositions, though multiple compositions are awkward, and tensor products are not so easy.
Austell Algebra 2 Tutor
Find an Austell Algebra 2 Tutor
...Quantitatively and qualitatively, the student will describe the process of solutions and characteristics of solutions. Thermodynamic relationships will be investigated. Students will explore
the factors that affect the rates of a reaction and apply them to the theory of dynamic equilibrium.
14 Subjects: including algebra 2, chemistry, physics, SAT math
...I solved my own problem by going to study hall and helping others in chemistry, which I did for at least a year. The school gave me an award for my efforts that year. When I tutored someone in chemistry, I devoted my time to one person at a time to focus on their problems, to see what they were struggling with.
17 Subjects: including algebra 2, chemistry, calculus, geometry
...She went through college at an accelerated pace of 3 years instead of 4, while maintaining her HOPE scholarship. She even studied abroad in Ireland during those three years! She's been
tutoring for over 5 years in many different environments that include one-on-one tutoring in-person and online, as well as tutoring in a group environment.
22 Subjects: including algebra 2, reading, writing, calculus
...I will help you find the proper place where it belongs so that you will see how it connects to lower and upper math concepts. Let's connect these pieces together. "Putting one foot in front of the other" is a simple motion, but crucial if one is to move forward from one point to another under one's own power. Algebra is the step that carries one from a basic math concept to higher
26 Subjects: including algebra 2, English, reading, geometry
...You will be charged for at least an hour for each session, regardless of the time elapsed. You will be charged for the time booked or for however long the session lasts, whichever is longer. I offer online and in-person tutoring.
10 Subjects: including algebra 2, calculus, physics, ASVAB
Nearby Cities With algebra 2 Tutor
Acworth, GA algebra 2 Tutors
Carrollton, GA algebra 2 Tutors
Clarkdale, GA algebra 2 Tutors
Doraville, GA algebra 2 Tutors
Douglasville algebra 2 Tutors
East Point, GA algebra 2 Tutors
Forest Park, GA algebra 2 Tutors
Hiram, GA algebra 2 Tutors
Kennesaw algebra 2 Tutors
Lithia Springs algebra 2 Tutors
Mableton algebra 2 Tutors
Powder Springs, GA algebra 2 Tutors
Smyrna, GA algebra 2 Tutors
Union City, GA algebra 2 Tutors
Villa Rica, PR algebra 2 Tutors | {"url":"http://www.purplemath.com/Austell_algebra_2_tutors.php","timestamp":"2014-04-19T07:29:16Z","content_type":null,"content_length":"24051","record_id":"<urn:uuid:9d20f0d9-36af-4515-8208-4142e625ec80>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00458-ip-10-147-4-33.ec2.internal.warc.gz"} |
Seasonal transmission potential and activity peaks of the new influenza A(H1N1): a Monte Carlo likelihood analysis based on human mobility
Duygu Balcan, Hao Hu, Bruno Goncalves, Paolo Bajardi, Chiara Poletto, Jose J Ramasco, Daniela Paolotti, Nicola Perra, Michele Tizzoni, Wouter Van den Broeck, Vittoria Colizza, and Alessandro Vespignani
Abstract
Background
On 11 June the World Health Organization officially raised the phase of pandemic alert (with regard to the new H1N1 influenza strain) to level 6. As of 19 July, 137,232 cases of the H1N1 influenza
strain have been officially confirmed in 142 different countries, and the pandemic unfolding in the Southern hemisphere is now under scrutiny to gain insights about the next winter wave in the
Northern hemisphere. A major challenge is pre-empted by the need to estimate the transmission potential of the virus and to assess its dependence on seasonality aspects in order to be able to use
numerical models capable of projecting the spatiotemporal pattern of the pandemic.
Methods
In the present work, we use a global structured metapopulation model integrating mobility and transportation data worldwide. The model considers data on 3,362 subpopulations in 220 different
countries and individual mobility across them. The model generates stochastic realizations of the epidemic evolution worldwide considering 6 billion individuals, from which we can gather information
such as prevalence, morbidity, number of secondary cases and number and date of imported cases for each subpopulation, all with a time resolution of 1 day. In order to estimate the transmission
potential and the relevant model parameters we used the data on the chronology of the 2009 novel influenza A(H1N1). The method is based on the maximum likelihood analysis of the arrival time
distribution generated by the model in 12 countries seeded by Mexico by using 1 million computationally simulated epidemics. An extended chronology including 93 countries worldwide seeded before 18
June was used to ascertain the seasonality effects.
Results
We found the best estimate R[0 ]= 1.75 (95% confidence interval (CI) 1.64 to 1.88) for the basic reproductive number. Correlation analysis allows the selection of the most probable seasonal behavior
based on the observed pattern, leading to the identification of plausible scenarios for the future unfolding of the pandemic and the estimate of pandemic activity peaks in the different hemispheres.
We provide estimates for the number of hospitalizations and the attack rate for the next wave as well as an extensive sensitivity analysis on the disease parameter values. We also studied the effect
of systematic therapeutic use of antiviral drugs on the epidemic timeline.
Conclusions
The analysis shows the potential for an early epidemic peak occurring in October/November in the Northern hemisphere, likely before large-scale vaccination campaigns could be carried out. The
baseline results refer to a worst-case scenario in which additional mitigation policies are not considered. We suggest that the planning of additional mitigation policies such as systematic antiviral
treatments might be the key to delay the activity peak in order to restore the effectiveness of the vaccination programs.
Background
Estimating the transmission potential of a newly emerging virus is crucial when planning for adequate public health interventions to mitigate its spread and impact, and to forecast the expected
epidemic scenarios through sophisticate computational approaches [1-4]. With the current outbreak of the new influenza A(H1N1) strain having reached pandemic proportions, the investigation of the
influenza situation worldwide might provide the key to the understanding of the transmissibility observed in different regions and to the characterization of possible seasonal behavior. During the
early phase of an outbreak, this task is hampered by inaccuracies and incompleteness of available information. Reporting is constrained by the difficulties in confirming large numbers of cases
through specific tests and serological analysis. The cocirculation of multiple strains, the presence of asymptomatic cases that go undetected, the impossibility to monitor mild cases that do not seek
health care and the possible delays in diagnosis and reporting, all worsen the situation. Early modeling approaches and statistical analysis show that the number of confirmed cases by the Mexican
authorities during the early phase was underestimated by a factor ranging from one order of magnitude [5] to almost three [6]. The Centers for Disease Control (CDC) in the US estimate a 5% to 10%
case detection, similar to other countries facing large outbreaks, with expected heterogeneities due to different surveillance systems. Even within the same country, the setup of enhanced monitoring
led to improved notification with respect to the earlier phase of the pandemic, later relaxed as reporting requirements changed [7].
By contrast, the effort put in place by the World Health Organization (WHO) and health protection agencies worldwide is providing an unprecedented amount of data and, at last, the possibility of
following in real time the pandemic chronology on the global scale. In particular, the border controls and the enhanced surveillance aimed at detecting the first cases reaching uninfected countries
appear to provide more reliable and timely information with respect to the raw count of cases as local transmission occurs, and this data has already been used for early assessment of the number of
cases in Mexico [5]. Moreover, data on international passenger flows from Mexico was found to display a strong correlation with confirmed H1N1 importations from Mexico [8]. Here we present an
estimate of the reproduction number, R[0], (that is, the average number of secondary cases produced by a primary case [9]) of the current H1N1 epidemic based on knowledge of human mobility patterns.
We use the GLEaM (for GLobal Epidemic and Mobility) structured metapopulation model [10] for the worldwide evolution of the pandemic and perform a maximum likelihood analysis of the parameters
against the actual chronology of newly infected countries. The method is computationally intensive as it involves a Monte Carlo generation of the distribution of arrival time of the infection in each
country based on the analysis of 10^6 worldwide simulations of the pandemic evolution with the GLEaM model. The method shifts the burden of estimating the disease transmissibility from the incidence
data, suffering notification/surveillance biases and dependent on country specific surveillance systems, to the more accurate data of the early case detection in newly affected countries. This is
achieved through the modeling of human mobility patterns on the global level obtained from high quality databases. In other words, the chronology of the infection of new countries is determined by
two factors. The first is the number of cases generated by the epidemic in the originating country. The second is the mobility of people from this country to the rest of the world. The mobility data
are defined from the outset with great accuracy and we can therefore find the parameters of the disease spreading as those that provide the best fit for the time of infection of new countries. This
method also allows for uncovering the presence of a seasonal signature in the observed pattern, not hindered or effectively caused by notification and reporting changes in each country's influenza
monitoring. The obtained values for the reproduction numbers are larger than the early estimates [5], though aligned with later works [11-13]. The simulated geographic and temporal evolution of the
pandemic based on these estimates shows the possibility of an early epidemic activity peak in the Northern hemisphere as soon as mid October. While the simulations refer to a worst-case scenario,
with no intervention implemented, the present findings pertain to the timing of the vaccination campaigns as planned by many countries. For this reason we also present an analysis of scenarios in
which the systematic use of antiviral drug therapy is implemented with varying effectiveness, according to the national stockpiles, and study their effect on the epidemic timeline.
Methods
The GLEaM structured metapopulation model is based on a metapopulation approach [4,14-22] in which the world is divided into geographical regions defining a subpopulation network where connections
among subpopulations represent the individual fluxes due to the transportation and mobility infrastructure. GLEaM integrates three different data layers [10]. The population layer is based on the
high-resolution population database of the 'Gridded Population of the World' project of the SocioEconomic Data and Applications Center (SEDAC) [23] that estimates the population with a granularity
given by a lattice of cells covering the whole planet at a resolution of 15 × 15 minutes of arc. The transportation mobility layer integrates air travel mobility obtained from the International Air
Transport Association (IATA [24]) and Official Airline Guide (OAG [25]) databases that contain the list of worldwide airport pairs connected by direct flights and the number of available seats on any
given connection [26]. The combination of the population and mobility layers allows the subdivision of the world into georeferenced census areas defined with a Voronoi tessellation procedure [27]
around transportation hubs. These census areas define the subpopulations of the metapopulation modeling structure (see Figure 1). In particular, we identify 3,362 subpopulations centered
around IATA airports in 220 different countries (see [10] and Additional file 1 for more details). GLEaM integrates short scale mobility between adjacent subpopulations by considering commuting
patterns worldwide as obtained from the data collected and analyzed from more than 29 countries in 5 continents across the world [10]. Superimposed on these layers is the epidemic layer that defines
the disease and population dynamics. The model simulates the mobility of individuals from one subpopulation to another by a stochastic procedure in which the number of passengers of each compartment
traveling from a subpopulation j to a subpopulation l is an integer random variable defined by the actual data from the airline transportation database (see Additional file 1). Short range commuting
between subpopulations is modeled with a time scale separation approach that defines the effective force of infections in connected subpopulations [10,28,29]. The infection dynamics takes place
within each subpopulation and assumes the classic influenza-like illness compartmentalization in which each individual is classified by a discrete state such as susceptible, latent, infectious
symptomatic, infectious asymptomatic or permanently recovered/removed [9,30]. The model therefore assumes that the latent period is equivalent to the incubation period and that no secondary
transmissions occur during the incubation period (see Figure 1 for a detailed description of the compartmentalization). All transitions are modeled through binomial and multinomial processes
to preserve the discrete and stochastic nature of the processes (see Additional file 1 for the full description). Asymptomatic individuals are considered as a fraction p[a ]= 33% of the infectious
individuals [31] generated in the model and assumed to infect with a relative infectiousness of r[β ]= 50% [5,30,32]. Change in traveling behavior after the onset of symptoms is modeled with the
probability 1 - p[t], set to 50%, that individuals would stop traveling when ill [30]. The spreading rate of the disease is ultimately governed by the basic reproduction number R[0]. Once the disease
parameters and initial conditions based on available data are defined, GLEaM allows the generation of stochastic realizations of the worldwide unfolding of the epidemic, with mobility processes
entirely based on real data. The model generates in silico epidemics for which we can gather information such as prevalence, morbidity, number of secondary cases, number of imported cases and other
quantities for each subpopulation and with a time resolution of 1 day. While global models are generally used to produce scenarios in which the basic disease parameters are defined from the outset,
here we use the model to provide a maximum likelihood estimate of the transmission potential by finding the set of disease parameters that best fit the data on the arrival time of cases in different
countries worldwide. It is important to stress that the model is not an agent-based model and does not include additional structure within a subpopulation, therefore it cannot provide detailed
information at the level of households or workplaces. The projections for the winter season in the northern hemisphere are also assuming that there will be no mutation of the virus with respect to
the spring/summer of 2009. Furthermore, while at the moment the novel H1N1 influenza is accounting for 75% of the influenza cases worldwide, the model does not consider the cocirculation of different
influenza strains and cannot provide information on cocirculation data.
Figure 1. Schematic illustration of the GLobal Epidemic and Mobility (GLEaM) model. Top: census and mobility layers that define the subpopulations and the various types of mobility among those (commuting patterns and air travel flows). The same resolution is used ...
The initial conditions of the epidemic are defined by setting the onset of the outbreak near La Gloria in Mexico on 18 February 2009, as reported by official sources [33] and analogously to other
works [5]. We tested different localizations of the first cases in census areas close to La Gloria without observing relevant variations with respect to the observed results. We also performed
sensitivity analysis on the starting date by selecting a seeding date anticipated or delayed by 1 week with respect to the date available in official reports [33]. The arrival time of infected
individuals in the countries seeded by Mexico is clearly a combination of the number of cases present in the originating country (Mexico) and the mobility network, both within Mexico and connecting
Mexico with countries abroad. For this reason we integrated into our model the data on Mexico-US border commuting (see Figure 2a), which could be relevant in defining the importation of cases in the US, along with Mexican internal commuting patterns (see Figure 1) that are responsible for the diffusion of the disease from rural areas such as La Gloria to transportation hubs such
as Mexico City. In addition, we used a time-dependent modification of the reproductive number in Mexico as in [6] to model the control measures implemented in the country starting 24 April and ending
10 May, as those might affect the spread to other countries.
Figure 2. Illustration of the model's initialization and the results for the activity peaks in three geographical areas. (a) Intensity of the commuting between US and Mexico at the border of the two countries. (b) The 12 countries infected from Mexico used in the ...
In order to ascertain the effect of seasonality on the observed pattern, we explored different seasonality schemes. The seasonality is modeled by a standard forcing that rescales the value of the
basic reproductive number into a seasonally rescaled reproductive number, R(t), depending on time. The seasonal rescaling is time and location dependent by means of a scaling multiplicative factor
generated by a sinusoidal function with a total period of 12 months oscillating in the range α[min ]to α[max], with α[max ]= 1.1 (sensitivity analysis in the range 1.0 to 1.1) and α[min ]a free
parameter to be estimated [17]. The rescaling function is in opposition in the Northern and Southern hemispheres (see Additional file 1 for details). No rescaling is assumed in the Tropics. The value
of R[0 ]reported in the Tables and the definition of the baseline is the reference value in the Tropics. In each subpopulation the R(t) relative to the corresponding geographical location and time of
the year is used in the simulations.
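A minimal sketch of such a forcing is below. The exact functional form and phase used in the paper are specified in its Additional file 1, so the phase convention here (maximum rescaling at the start of January in the Northern hemisphere, opposite in the Southern, none in the Tropics) is our assumption for illustration:

```python
import math

def seasonal_scaling(day_of_year, alpha_min, alpha_max=1.1, hemisphere="north"):
    """Sinusoidal scaling factor with a 12-month period.

    Oscillates between alpha_min and alpha_max; the Southern hemisphere
    is half a period out of phase, and the Tropics are not rescaled.
    """
    if hemisphere == "tropics":
        return 1.0
    phase = 2.0 * math.pi * day_of_year / 365.0
    if hemisphere == "south":
        phase += math.pi  # seasons in opposition
    mid = 0.5 * (alpha_max + alpha_min)
    amp = 0.5 * (alpha_max - alpha_min)
    return mid + amp * math.cos(phase)

def seasonal_R(day_of_year, R0, alpha_min, hemisphere="north"):
    """Seasonally rescaled reproduction number R(t)."""
    return seasonal_scaling(day_of_year, alpha_min, hemisphere=hemisphere) * R0
```

For example, with R[0 ]= 1.75 and α[min ]= 0.7, R(t) in the Northern hemisphere would range from about 1.2 in summer to about 1.9 in winter under this convention.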
The seasonal transmission potential of the H1N1 strain is assessed in a two-step process that first estimates the reproductive number in the Tropics region, where seasonality is assumed not to occur,
by focusing on the early international seeding by Mexico, and then estimates the degree of seasonal dumping factor by examining a longer time period of international spread to allow for seasonal
changes. The estimation of the reproductive number is performed through a maximum likelihood analysis of the model fitting the data of the early chronology of the H1N1 epidemic. Given a set of values
of the disease parameters, we produced 2 × 10^3 stochastic realizations of the pandemic evolution worldwide for each R[0 ]value. Our model explicitly takes into account the class of symptomatic and
asymptomatic individuals (see Figure 1) and allows the tracking of the importation of each symptomatic individual and of the onset of symptoms of exposed individuals transitioning to the
symptomatic class, as observables of the simulations. This allows us to obtain numerically with a Monte Carlo procedure the probability distribution P[i](t[i]) of the importation of the first
infected individual or the first occurrence of the onset of symptoms for an individual in each country i at time t[i]. Asymptomatic individuals do not contribute to the definition of t[i]. With the
aim of working with conditionally independent variables we restrict the likelihood analysis to 12 countries seeded from Mexico (see Figure 2b) and for which it is possible to know with good
confidence the onset of symptoms and/or the arrival date of the first detected case (see Tables and data sources in Additional file 1). This allows us to define a likelihood function L = Π[i ]P[i](t
[i]*), where t[i]* is the empirical arrival time from the H1N1 chronological history in each of the selected countries. This methodology assumes the prompt detection of symptomatic cases at the very
beginning of the outbreak in a given country, and for this reason we have also provided a sensitivity analysis accounting for a late/missed detection of symptomatic individuals as reported in the
next section. The transmission potential is estimated as the value of R[0 ]that maximizes the likelihood function L, for a given set of values of the disease parameters. In Table 1 we report
the reference values assumed for some of the model parameters and the range explored with the sensitivity analysis. So far there are no precise clinical estimates of the basic model parameters ε and
μ defining the inverse average exposed and infectious time durations [34-36]. The generation interval G[t ][37,38] used in the literature is based on the early estimate of [5] and values obtained for
previous pandemic and seasonal influenza [4,30-32,39,40], with most studies focusing on values ranging from 2 to 4 days [5,11-13]. We have therefore assumed a short exposed period value ε^-1 = 1.1 as
indicated by early estimates [5] and compatible with recent studies on seasonal influenza [31,41] and performed a sensitivity analysis for values as large as ε^-1 = 2.5 days. The maximum likelihood
procedure is performed by systematically exploring different values of the generation time aimed at providing a best estimate and confidence interval for G[t], along with the estimation of the
maximum likelihood value of R[0].
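In code, the Monte Carlo likelihood step can be sketched as follows. Estimating P[i](t[i]*) by the empirical frequency over the stochastic runs, and the pseudocount used to avoid log(0), are our illustrative choices, not details given in the paper:

```python
import numpy as np

def log_likelihood(sim_arrivals, observed):
    """Monte Carlo estimate of log L = sum_i log P_i(t_i*).

    sim_arrivals: country -> array of simulated arrival days, one entry
    per stochastic run (the paper uses 2,000 runs per parameter set).
    observed: country -> empirical arrival day t_i*.
    """
    logL = 0.0
    for country, t_obs in observed.items():
        runs = np.asarray(sim_arrivals[country])
        # empirical frequency with a +1 pseudocount to avoid log(0)
        p = (np.sum(runs == t_obs) + 1.0) / (len(runs) + 1.0)
        logL += np.log(p)
    return logL

def best_R0(candidates, simulate, observed):
    """Pick the candidate R0 maximizing the Monte Carlo likelihood."""
    scores = {r0: log_likelihood(simulate(r0), observed) for r0 in candidates}
    return max(scores, key=scores.get)
```

The outer loop over candidate parameter sets is where the computational cost lies: each evaluation of `simulate` stands for thousands of full global epidemic realizations.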
Table 1. Best estimates of the epidemiological parameters
The major problem in the case of projections on an extended time horizon is the seasonality effect that in the long run is crucial in determining the peak of the epidemic. In order to quantify the
degree of seasonality observed in the current epidemic, we estimate the minimum seasonality scaling factor α[min ]of the sinusoidal forcing by extending the chronology under study and analyzing the
whole data set composed of the arrival dates of the first infected case in the 93 countries affected by the outbreak as of 18 June. We studied the correlation between the simulated arrival time by
country and its corresponding empirical value, by measuring the regression coefficient between the two datasets. Given the extended time frame under observation, the arrival times considered in this
case are expected to provide a signature of the presence of seasonality. They included the seeding of new countries from outbreaks taking place in regions where seasonal effects might occur, as for
example in the US or in the UK. For the simulated arrival times we have considered the median and 95% confidence interval (CI) emerging from the 2 × 10^3 stochastic runs. The regression coefficient
is found to be sensitive to variations in the seasonality scaling factor, allowing discrimination of the α[min ]value that best fits the real epidemic. A detailed presentation of this analysis is
reported in Additional file 1. The full exploration of the phase space of epidemic parameters and seasonality scenarios reported in Additional file 1 required data from 10^6 simulations; the
equivalent of 2 million minutes of PowerPC 970 2.5 GHz CPU time.
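The correlation analysis can be sketched as follows. Using the per-country median of the simulated arrival times and a plain Pearson coefficient is our illustrative reading of the procedure described above; the α[min ]value whose simulations score highest would be retained:

```python
import numpy as np

def arrival_time_fit(sim_runs_by_country, empirical):
    """Correlation between simulated and empirical arrival times.

    For each country, take the median simulated arrival day over the
    stochastic runs, then compute the correlation coefficient against
    the empirical arrival dates across countries.
    """
    countries = sorted(empirical)
    sim_median = np.array([np.median(sim_runs_by_country[c]) for c in countries])
    emp = np.array([empirical[c] for c in countries], dtype=float)
    return np.corrcoef(sim_median, emp)[0, 1]
```

Repeating this for each candidate seasonality scaling factor and keeping the best-scoring one implements the selection described in the text.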
Results and Discussion
Table 1 reports the results of the maximum likelihood procedure and of the correlation analysis on the arrival times for the estimation of α[min]. In the following we consider as the baseline
case the set of parameters defined by the best estimates: G[t ]= 3.6 days, μ^-1 = 2.5 days, R[0 ]= 1.75.
The best estimates for G[t ]and R[0 ]are higher than those obtained in early findings but close to subsequent analysis on local outbreaks [11-13]. The R[0 ]we report is the reference value for Mexico
and the tropical region, whereas in each country we have to consider the R(t) due to the seasonality rescaling depending on the time of the year, as shown in Table 2. This might explain the
lower values found in some early analysis in the US. The transmission potential emerging from our analysis is close to estimates for previous pandemics [14,42]. In Additional file 1 we provide
supplementary tables for the full sensitivity analysis concerning the assumptions used in the model. Results show that larger values of the generation interval provide increasing estimates for R[0].
Fixing the latency period to ε^-1 = 1.1 days and varying the mean infectious period in the plausible range 1.1 to 4.0 days yields corresponding maximum likelihood estimates for R[0 ]in the range 1.4
to 2.1. Variations in the latency period from ε^-1 = 1.1 to ε^-1 = 2.5 days provide corresponding best estimates for R[0 ]in the range 1.9 to 2.3, if we assume an infectious period of 3 days. We
tested variations of the compartmental model parameters p[a], and p[t ]up to 20% and explored the range r[β ]= 20% to 80%, and sensitivity on the value of the maximum seasonality scaling factor α[max
]in the range 1.0 to 1.1. The obtained estimates lie within the confidence intervals of the best estimate values.
Seasonality time-dependent reproduction number in the Northern hemisphere
The empirical arrival time data used for the likelihood analysis are necessarily an overestimation of the actual dates of case importation, as cases could go undetected. If we shift all arrival
times available from official reports 7 days earlier, the resulting maximum likelihood estimate for R_0 increases to 1.87 (95% CI 1.73 to 2.01), as expected, since earlier case importation implies
a larger growth rate of the epidemic. The official timeline used here therefore provides, all other parameters being equal, a lower estimate of the transmission potential. We have also explored
the use of subsets of the 12 countries, always generating results within the confidence interval of the best estimate.
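The point that earlier importation implies a larger transmission-potential estimate can be made concrete with the standard growth-rate relation for an SEIR model with exponentially distributed latent and infectious stages (after Wallinga and Lipsitch, cited in the reference list); the growth rates below are illustrative, not the study's fitted values.

```python
def r0_from_growth(r, latent_days=1.1, infectious_days=2.5):
    """R0 implied by an exponential growth rate r (per day) in an SEIR model
    with exponentially distributed latent and infectious periods:
        R0 = (1 + r * T_E) * (1 + r * T_I)."""
    return (1.0 + r * latent_days) * (1.0 + r * infectious_days)

# Shifting all arrival dates earlier forces a faster growth rate, and a
# faster growth rate maps to a larger R0 through the relation above.
for r in (0.20, 0.25):
    print(f"r = {r:.2f}/day -> R0 = {r0_from_growth(r):.2f}")
```

With the baseline stage durations (latent 1.1 days, infectious 2.5 days, so G_t = 3.6 days), modest increases in the growth rate move R0 across the range of estimates discussed in the text.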
The best estimates reported in Table 1 do not show any observable dependence on the assumption about the seasonality scenario (as reported in Additional file 1). The analysis is restricted to
the first countries seeded from Mexico to preserve the conditional independence of the variables and it is natural to see the lack of any seasonal signature since these countries receive the disease
from a single country, mostly found in the tropical region where no seasonal effects are expected.
In order to find the minimum seasonality scaling factor α_min that best fits the empirical data, we performed a statistical correlation analysis of the arrival time of the infection in the 93
countries infected as of 18 June, as detailed in the Methods section and Additional file 1. By considering a larger number of countries and a longer period for the unfolding of the epidemic worldwide
as seasons change, the correlation analysis for the baseline scenario provides clear statistical indications for a minimum rescaling factor in the interval 0.6 < α_min < 0.7. In the full range of
epidemic parameters explored, the correlation analysis yields values for α_min in the range 0.4 to 0.9. This evidence for a mild seasonality rescaling is consistent with the activity observed in the
months of June and July in Europe and the US where the epidemic progression has not stopped and the number of cases keeps increasing considerably (see also Table 2 for the corresponding values
of R(t) in those regions during summer months).
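The seasonal rescaling of the reproduction number can be sketched as a sinusoidal forcing that moves the effective R(t) between α_min·R_0 (summer) and α_max·R_0 (winter), with opposite phase in the two hemispheres. The mid-January winter peak used here is an assumed phase convention for illustration, not the paper's exact forcing.

```python
import math

def seasonal_scaling(day_of_year, alpha_min, alpha_max=1.1, hemisphere="north"):
    """Sinusoidal forcing between alpha_min (summer) and alpha_max (winter);
    the mid-January winter peak is an assumed phase convention."""
    phase = 2.0 * math.pi * (day_of_year - 15) / 365.0
    s = 0.5 * (1.0 + math.cos(phase))      # 1 at the winter peak (north)
    if hemisphere == "south":
        s = 1.0 - s
    return alpha_min + (alpha_max - alpha_min) * s

R0 = 1.75
for label, day in (("mid-January", 15), ("mid-July", 196)):
    print(f"{label}: R(t) = {R0 * seasonal_scaling(day, alpha_min=0.65):.2f}")
```

With α_min ≈ 0.65 the summer R(t) stays above 1, which is consistent with the sustained June-July activity in Europe and the US noted in the text.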
This analysis allows us to provide a comparison with the epidemic activity observed so far, and most importantly an early assessment of the future unfolding of the epidemics. For each set of
parameters the model generates quantities of interest such as the profile of the epidemic behavior in each subpopulation or the number of imported cases. Each simulation generates a stochastic
realization of the process and the curves are the statistical aggregate of at least 2 × 10^3 realizations. In the following we report the median profiles and where indicated the 95% CI. For the sake
of clarity data are aggregated at the level of country or geographical region. Additional file 1 reports a detailed comparison of the simulated number of cases in Australia, US, UK with the reported
cases from official sources in the period May to July. Results are in good agreement with the reported temporal evolution of the epidemic and highlight a progressive decrease of the monitoring
activity caused by the increasing number of cases, as expected [7]. The same information is also available for each single subpopulation defined in the model. We have therefore tested the model
results in four territories of Australia. Interestingly, the model is able to recover the different timing observed in the four territories. A detailed discussion of this comparison is reported in
Additional file 1.
In Figure 2c-d we report the predicted baseline case profiles for countries in the Southern hemisphere. In this case, the effect of seasonality does not discriminate between different waves, as the
short time interval from the start of the outbreak to the winter season in the Southern hemisphere does not allow a large variation in the rescaling of the transmissibility during these months.
We therefore predict a first wave occurring between August and September, in phase with the seasonal influenza pattern and independently of the seasonality
parameter α_min. The situation is expected to be different in the Northern hemisphere, where different seasonality parameters might progressively shift the peak of the epidemic activity into the
winter months. Figure 2e reports the predicted daily incidence profiles for the Northern hemisphere and the 95% CI for the activity peaks of the pandemic with the best-fit seasonality scenario
(that is, the range 0.6 < α_min < 0.7). Table 3 reports the same information for different continental areas. The general evidence clearly points to the occurrence of an autumn/winter wave in
the Northern hemisphere strikingly earlier than expected, with peak times ranging from early October to the middle of November. The peak estimate for each geographical area is obtained from the
epidemic profile summing up all subpopulations belonging to the region. The activity peak estimate for each single country can be noticeably different from the overall estimate of the corresponding
geographical region as more populated areas may dominate the estimate for a given area. For instance Chile has a pandemic activity peak in the interval 1 July - 6 August, one month earlier than the
average peak estimate for the Lower South America geographical area it belongs to. It is extremely important to remark that in the whole phase space of parameters explored the peak time for the
epidemic activity in the Northern hemisphere lies in the range late September to late November, thus suggesting that the early seasonal peak is a genuine feature induced by the epidemic data
available so far.
In Table 4 we report the new number of cases at the activity peak and the epidemic size as of 15 October for a selected number of countries. As shown by the results in the table, the
implementation of a massive vaccination campaign starting in October or November, with no additional mitigation implemented, would be too late with respect to the epidemic evolution, and could
therefore be expected to be rather ineffective in reducing transmission. This makes a strong case for prioritized vaccination programs focusing on high-risk groups and healthcare and social
infrastructure workers. In order to assess the amount of pressure on the healthcare infrastructure, in Table 5 we provide the expected number of hospitalizations at the epidemic peak according
to different hospitalization rate estimates. The assessment of the hospitalization rate is very difficult as it depends on the ratio between the number of hospitalizations and the actual number of
infected people. As discussed previously, the number of confirmed cases released by official agencies is always a crude underestimate of the actual number of infected people. We consider three
different methods along the lines of those developed for the analysis of fatalities due to the new virus [43]. The first assumes the average value of hospitalization observed during the regular
seasonal influenza season. The second is a multiplier method in which the hospitalization rate is obtained as the ratio between the WHO number of confirmed hospitalizations and the cases confirmed by
the WHO multiplied by a factor 10 to 30 to account for underreporting. The third method is given by the ratio of the total number of confirmed hospitalizations and the total number of confirmed
cases. This number is surely a gross overestimation of the hospitalization rate [43,44]. It has to be noted that hospitalizations are often related to existing health conditions, age and other risk
factors. This implies that hospitalizations will likely not affect the population homogenously, a factor that we cannot consider in our model.
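The three hospitalization-rate estimates described above can be sketched with illustrative counts; the case and hospitalization numbers below are placeholders, not the WHO figures the paper uses.

```python
# Three ways to gauge the hospitalization rate, following the text; the
# case and hospitalization counts below are illustrative, not WHO figures.
confirmed_cases = 50_000
confirmed_hospitalizations = 2_500
seasonal_flu_rate = 0.001                 # assumed seasonal-influenza value

rate_seasonal = seasonal_flu_rate                                    # method 1
rate_mult_low = confirmed_hospitalizations / (confirmed_cases * 30)  # method 2
rate_mult_high = confirmed_hospitalizations / (confirmed_cases * 10)
rate_crude = confirmed_hospitalizations / confirmed_cases            # method 3

print(f"seasonal analogue : {rate_seasonal:.2%}")
print(f"multiplier method : {rate_mult_low:.2%} - {rate_mult_high:.2%}")
print(f"crude ratio       : {rate_crude:.2%}")
```

The ordering is the point: the seasonal analogue and the multiplier method bracket the plausible range from below, while the crude ratio of confirmed counts sits far above both, as the text warns.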
Daily new number of cases and epidemic sizes in several countries
Number of hospitalizations per 100,000 persons at the activity peak in several countries
The number of hospitalized at peak times in the selected countries range between 2 and 40 per 100,000 persons, for a hospitalization rate typical of seasonal influenza and for an assumed 1% rate,
respectively, yielding a quantitative indication of the potential burden that the health care systems will likely face at the peak of the epidemic activity in the next few months. It is worth noting
that the present analysis considers a worst-case scenario in which no effective containment measures are introduced. This will surely not be the case, as pandemic plans and mitigation strategies
are being considered at the national and international level. Guidelines aimed at increasing social distancing and the isolation of cases will be crucial in trying to mitigate and delay the spread in the
community, thus reducing the overwhelming requests on the hospital systems. Most importantly, the mass vaccination of a large fraction of the population would strongly alter the presented picture. By
contrast, any mass vaccination campaign is unlikely to start before the middle of October [45,46]. The potential for an early activity peak of the pandemic in October/November puts at risk the
effectiveness of any mass vaccination program that might take place too late with respect to the pandemic wave in the Northern hemisphere. In this case it is natural to imagine the use of other
mitigation strategies aimed at delaying the activity peak so that the maximum benefit can be gained with the vaccination program. As an example, we studied the implementation of systematic antiviral
(AV) treatment and its effect in delaying the activity peak [19,30,32,39,47-50]. The resulting effects are clearly country specific in that each country will experience a different timing for the
epidemic peak (with a local transmissibility increasing in value as we approach the winter months) and will count on antiviral stockpiles of different sizes. Here we consider the implementation of
the AV treatment in all countries in the world that have drugs stockpiles available (source data from [51,52] and national agencies), until the exhaustion of their stockpiles [4]. We have modeled
this mitigation policy with a conservative therapeutic successful use of drugs for 30% of symptomatic infectious individuals. The efficacy of the AV is accounted in the model by a 62% reduction in
the transmissibility of the disease of an infected person under AV treatment when AV drugs are administered in a timely fashion [30,32]. We assume that the drugs are administered within 1 day of the
onset of symptoms. We also consider that the AV treatment reduces the infectious period by 1 day [30,32]. In Figure 3 we show the delay obtained with the implementation of the AV treatment
protocol in a subset of countries with available stockpiles. As an example, we also show the incidence profiles for the cases of Spain and Germany, where it is possible to achieve a delay of about 4
weeks with the use of 5 million and 10 million courses of AV, respectively. The results of this mitigation might be extremely valuable in providing the necessary time for the implementation of the
mass vaccination program.
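The peak-delay mechanism can be illustrated with a mean-field sketch under the stated parameters (30% of symptomatic infectious individuals treated, 62% transmissibility reduction, infectious period shortened by one day). The deterministic single-population SEIR below is a stand-in for the paper's stochastic metapopulation model, and the mean-field averaging of the treated and untreated groups is a simplification.

```python
import numpy as np

def seir_peak_day(beta, eps=1/1.1, mu=1/2.5, days=365, dt=0.1, i0=1e-6):
    """Day of peak incidence in a deterministic SEIR model (Euler steps)."""
    s, e, i = 1.0 - i0, 0.0, i0
    incidence = []
    for _ in range(int(days / dt)):
        inflow_e = beta * s * i * dt
        outflow_e = eps * e * dt
        outflow_i = mu * i * dt
        s -= inflow_e
        e += inflow_e - outflow_e
        i += outflow_e - outflow_i
        incidence.append(inflow_e)
    return int(np.argmax(incidence) * dt)

R0, infectious = 1.75, 2.5
baseline = seir_peak_day(beta=R0 / infectious)

# Antiviral treatment, mean-field: 30% of infectious individuals treated,
# their transmissibility cut by 62%, infectious period 2.5 -> 1.5 days.
p, reduction = 0.30, 0.62
beta_av = (R0 / infectious) * (1.0 - p * reduction)
mu_av = 1.0 / (p * 1.5 + (1.0 - p) * infectious)   # population-average period
with_av = seir_peak_day(beta=beta_av, mu=mu_av)

print(f"peak without AV: day {baseline}; with AV: day {with_av}; "
      f"delay: {with_av - baseline} days")
```

Even this crude sketch shifts the peak by weeks rather than days, which is the qualitative effect the text describes; the country-specific delays in Figure 3 additionally depend on stockpile sizes and peak timing.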
Delay effect induced by the use of antiviral drugs for treatment with 30% case detection and drug administration. (a) Peak times of the epidemic activity in the worst-case scenario (black) and in the
scenario where antiviral treatment is considered (red), ...
We have defined a Monte Carlo likelihood analysis for the assessment of the seasonal transmission potential of the new A(H1N1) influenza based on the analysis of the chronology of case detection in
affected countries at the early stage of the epidemic. This method allows the use of data coming from the border controls and the enhanced surveillance aimed at detecting the first cases reaching
uninfected countries. This data is, in principle, more reliable than the raw count of cases provided by countries during the evolution of the epidemic. The procedure provides the necessary input to
the large-scale computational model for the analysis of the unfolding of the pandemic in the future months. The analysis shows the potential for an early activity peak that strongly emphasizes the
need for detailed planning for additional intervention measures, such as social distancing and antiviral drugs use, to delay the epidemic activity peak and thus increase the effectiveness of the
subsequent vaccination effort.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
DB, HH, BG, PB and CP contributed to conceiving and designing the study, performed numerical simulations and statistical analysis, contributed to the data integration and helped to draft the
manuscript. JJR contributed to conceiving and designing the study, data tracking and integration, statistical analysis and helped draft the manuscript. NP and MT contributed to data tracking and
integration, statistical analysis and helped draft the manuscript. DP contributed to data integration and management and helped draft the manuscript. WVdB contributed to visualization and data
management. AV and VC conceived, designed and coordinated the study, contributed to the analysis and methods development and drafted the manuscript. All authors read and approved the final manuscript.
Pre-publication history
The pre-publication history for this paper can be accessed here:
Supplementary Material
Additional file 1:
Additional information. The file provides details on the model and all the statistical and sensitivity analysis carried out in the preparation of this work. The file also contains references to all
data sources used in the preparation of this work.
The authors thank IATA and OAG for providing their databases. The authors are grateful to the Staff of the Big Red Computer and the Computational Facilities at Indiana University. The authors would
like to thank Ciro Cattuto for his support with computational infrastructure at ISI Foundation. The authors are partially supported by the NIH R21-DA024259 award, the Lilly Endowment grant 2008
1639-000, the DTRA-1-0910039 grant, the ERC project EpiFor and the FET projects Epiwork and Dynanets.
• Eubank S, Guclu H, Kumar VS, Marathe MV, Srinivasan A, Toroczkai Z, Wang N. Modelling disease outbreaks in realistic urban social networks. Nature. 2004;429:180–184. doi: 10.1038/nature02541. [
PubMed] [Cross Ref]
• Ferguson NM, Cummings DA, Fraser C, Cajka JC, Cooley PC, Burke DS. Strategies for mitigating an influenza pandemic. Nature. 2006;442:448–452. doi: 10.1038/nature04795. [PubMed] [Cross Ref]
• Germann TC, Kadau K, Longini IM, Jr, Macken CA. Mitigation strategies for pandemic influenza in the United States. Proc Natl Acad Sci USA. 2006;103:5935–5940. doi: 10.1073/pnas.0601266103. [PMC
free article] [PubMed] [Cross Ref]
• Colizza V, Barrat A, Barthelemy M, Valleron A-J, Vespignani A. Modeling the worldwide spread of pandemic influenza: baseline case and containment interventions. PloS Medicine. 2007;4:e13. doi:
10.1371/journal.pmed.0040013. [PMC free article] [PubMed] [Cross Ref]
• Fraser C, Donnelly CA, Cauchemez S, Hanage WP, Van Kerkhove MD, Hollingsworth TD, Griffin J, Baggaley RF, Jenkins HE, Lyons EJ, Jombart T, Hinsley WR, Grassly NC, Balloux F, Ghani AC, Ferguson
NM, Rambaut A, Pybus OG, Lopez-Gatell H, Alpuche-Aranda CM, Chapela IB, Zavala EP, Guevara DM, Checchi F, Garcia E, Hugonnet S, Roth C, WHO Rapid Pandemic Assessment Collaboration Pandemic
potential of a strain of influenza A(H1N1): early findings. Science. 2009;324:1557–1561. doi: 10.1126/science.1176062. [PMC free article] [PubMed] [Cross Ref]
• Cruz-Pacheco G, Duran L, Esteva L, Minzoni A, Lopez-Cervantes M, Panayotaros P, Ahued Ortega A, Villasenor Ruiz I. Modelling of the influenza A(H1N1)v outbreak in Mexico City, April-May 2009,
with control sanitary measures. Euro Surveill. 2009;14:19254. [PubMed]
• World Health Organization: Pandemic (H1N1) 2009 briefing note 3 (revised): changes in reporting requirements for pandemic (H1N1) 2009 virus infection http://www.who.int/csr/disease/swineflu/notes
• Khan K, Arino J, Hu W, Raposo P, Sears J, Calderon F, Heidebrecht C, Macdonald M, Liauw J, Chan A, Gardam M. Spread of a novel influenza A(H1N1) virus via global airline transportation. N Engl J
Med. 2009;361:212–214. doi: 10.1056/NEJMc0904559. [PubMed] [Cross Ref]
• Anderson RM, May RM. Infectious diseases in humans. Oxford, UK: Oxford University Press; 1992.
• Balcan D, Colizza V, Gonçalves B, Hu H, Ramasco JJ, Vespignani A. Multiscale mobility networks and the large scale spreading of infectious diseases. arXiv. 2009. 0907.3304.
• Boelle PY, Bernillon P, Desenclos JC. A preliminary estimation of the reproduction ratio for new influenza A(H1N1) from the outbreak in Mexico, March-April 2009. Euro Surveill. 2009;14:19205. [PubMed]
• Nishiura H, Castillo-Chavez C, Safan M, Chowell G. Transmission potential of the new influenza A(H1N1) virus and its age-specificity in Japan. Euro Surveill. 2009;14:19227. [PubMed]
• Nishiura H, Wilson NM, Baker MG. Estimating the reproduction number of the novel influenza A virus (H1N1) in a Southern Hemisphere setting: preliminary estimate in New Zealand. NZ Med J. 2009;122
:1–5. [PubMed]
• Rvachev LA, Longini IM. A mathematical model for the global spread of influenza. Math Biosci. 1985;75:3–22. doi: 10.1016/0025-5564(85)90064-1. [Cross Ref]
• Grais RF, Hugh Ellis J, Glass GE. Assessing the impact of airline travel on the geographic spread of pandemic influenza. Eur J Epidemiol. 2003;18:1065–1072. doi: 10.1023/A:1026140019146. [PubMed]
[Cross Ref]
• Hufnagel L, Brockmann D, Geisel T. Forecast and control of epidemics in a globalized world. Proc Natl Acad Sci USA. 2004;101:15124–15129. doi: 10.1073/pnas.0308344101. [PMC free article] [PubMed]
[Cross Ref]
• Cooper BS, Pitman RJ, Edmunds WJ, Gay N. Delaying the international spread of pandemic influenza. PloS Medicine. 2006;3:e12. doi: 10.1371/journal.pmed.0030212. [PMC free article] [PubMed] [Cross Ref]
• Epstein JM, Goedecke DM, Yu F, Morris RJ, Wagener DK, Bobashev GV. Controlling pandemic flu: the value of international air travel restrictions. PLoS ONE. 2007;2:e401. doi: 10.1371/
journal.pone.0000401. [PMC free article] [PubMed] [Cross Ref]
• Flahault A, Vergu E, Coudeville L, Grais R. Strategies for containing a global influenza pandemic. Vaccine. 2006;24:6751–6755. doi: 10.1016/j.vaccine.2006.05.079. [PubMed] [Cross Ref]
• Viboud C, Bjornstad O, Smith DL, Simonsen L, Miller MA, Grenfell BT. Synchrony, waves, and spatial hierarchies in the spread of influenza. Science. 2006;312:447–451. doi: 10.1126/science.1125237.
[PubMed] [Cross Ref]
• Flahault A, Valleron A-J. A method for assessing the global spread of HIV-1 infection based on air travel. Math Popul Stud. 1991;3:1–11. [PubMed]
• Colizza V, Barrat A, Barthélemy M, Vespignani A. The role of airline transportation network in the prediction and predictability of global epidemics. Proc Natl Acad Sci USA. 2006;103:2015–2020.
doi: 10.1073/pnas.0510525103. [PMC free article] [PubMed] [Cross Ref]
• Socioeconomic Data and Applications Center (SEDAC), Columbia University http://sedac.ciesin.columbia.edu/gpw
• International Air Transport Association http://www.iata.org
• Official Airline Guide http://www.oag.com/
• Barrat A, Barthélemy M, Pastor-Satorras R, Vespignani A. The architecture of complex weighted networks. Proc Natl Acad Sci USA. 2004;101:3747–3752. doi: 10.1073/pnas.0400087101. [PMC free article
] [PubMed] [Cross Ref]
• Okabe A, Boots B, Sugihara K, Chiu S-N. Spatial Tessellations - Concepts and Applications of Voronoi Diagrams. 2. John Wiley; 2000.
• Keeling MJ, Rohani P. Estimating spatial coupling in epidemiological systems: a mechanistic approach. Ecol Lett. 2002;5:20–29. doi: 10.1046/j.1461-0248.2002.00268.x. [Cross Ref]
• Sattenspiel L, Dietz K. A structured epidemic model incorporating geographic mobility among regions. Math Biosci. 128:71–91. doi: 10.1016/0025-5564(94)00068-B. [PubMed] [Cross Ref]
• Longini IM, Halloran ME, Nizam A, Yang Y. Containing pandemic influenza with antiviral agents. Am J Epidemiol. 2004;159:623–633. doi: 10.1093/aje/kwh092. [PubMed] [Cross Ref]
• Carrat F, Vergu E, Ferguson NM, Lemaitre M, Cauchemez S, Leach S, Valleron AJ. Time lines of infection and disease in human influenza: a review of volunteer challenge studies. Am J Epidemiol.
2008;167:775–785. doi: 10.1093/aje/kwm375. [PubMed] [Cross Ref]
• Longini IM, Nizam A, Xu S, Ungchusak K, Hanshaoworakul W, Cummings DAT, Halloran ME. Containing pandemic influenza at the source. Science. 2005;309:1083–1087. doi: 10.1126/science.1115717. [
PubMed] [Cross Ref]
• Brote de infeccion respiratoria aguda en La Gloria, Municipio de Perote, Mexico. Secretaria de Salud, Mexico http://portal.salud.gob.mx/contenidos/noticias/influenza/estadisticas.html
• World Health Organization. Wkly Epidemiol Rec. 2009;84:197–202.
• CDC Interim guidance for clinicians on identifying and caring for patients with swine-origin influenza A(H1N1) virus infection (2009) http://www.cdc.gov/h1n1flu/identifyingpatients.htm
• Novel Swine-Origin Influenza A (H1N1) Virus Investigation Team. Dawood FS, Jain S, Finelli L, Shaw MW, Lindstrom S, Garten RJ, Gubareva LV, Xu X, Bridges CB, Uyeki TM. Emergence of a Novel
Swine-origin Influenza A(H1N1) Virus in Humans. N Engl J Med. 2009;360:2605–2615. doi: 10.1056/NEJMoa0903810. [PubMed] [Cross Ref]
• Roberts MJ, Heesterbeek JAP. Model-consistent estimation of the basic reproduction number from the incidence of an emerging infection. J Math Biol. 2007;55:803–816. doi: 10.1007/
s00285-007-0112-8. [PMC free article] [PubMed] [Cross Ref]
• Wallinga J, Lipsitch M. How generation intervals shape the relationship between growth rates and reproductive numbers. Proc R Soc B. 2007;274:599–604. doi: 10.1098/rspb.2006.3754. [PMC free
article] [PubMed] [Cross Ref]
• Gani R, Hughes H, Fleming D, Griffin T, Medlock J, Leach S. Potential impact of antiviral drug use during influenza pandemic. Emerg Infect Dis. 2005;11:1355–1362. [PMC free article] [PubMed]
• Elveback LR, Fox JP, Ackerman E, Langworthy A, Boyd M, Gatewood L. An influenza simulation model for immunization studies. Am J Epidemiol. 1976;103:152–165. [PubMed]
• Lessler J, Reich NG, Brookmeyer R, Perl TM, Nelson KE, Cummings DA. Incubation periods of acute respiratory viral infections: a systematic review. Lancet Infect Dis. 2009;9:291–300. doi: 10.1016/
S1473-3099(09)70069-6. [PubMed] [Cross Ref]
• Mills CE, Robins JM, Lipsitch M. Transmissibility of 1918 pandemic influenza. Nature. 2004;432:904–906. doi: 10.1038/nature03063. [PubMed] [Cross Ref]
• Wilson N, Baker MG. The emerging influenza pandemic: estimating the case fatality ratio. Euro Surveill. 2009;14:19255. [PubMed]
• Garske T, Legrand J, Donnelly CA, Ward H, Cauchemez S, Fraser C, Ferguson NM, Ghani AC. Assessing the severity of the novel A/H1N1 pandemic. BMJ. 2009;339:b2840. doi: 10.1136/bmj.b2840. [PubMed]
[Cross Ref]
• Novartis successfully demonstrates capabilities of cell-based technology for production of A(H1N1) vaccine http://www.novartis.com/newsroom/media-releases/en/2009/1322241.shtml
• CDC: Novel H1N1 influenza vaccine http://www.cdc.gov/h1n1flu/vaccination/public/vaccination_qa_pub.htm
• Ferguson NM, Cummings DA, Cauchemez S, Fraser C, Riley S, Meeyai A, Iamsirithaworn S, Burke DS. Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature. 2005;437
:209–214. doi: 10.1038/nature04017. [PubMed] [Cross Ref]
• Germann TC, Kadau K, Longini IM, Macken CA. Mitigation strategies for pandemic influenza in the United States. Proc Natl Acad Sci USA. 2006;103:5935–5940. doi: 10.1073/pnas.0601266103. [PMC free
article] [PubMed] [Cross Ref]
• Arinaminpathy N, McLean AR. Antiviral treatment for the control of pandemic influenza: some logistical constraints. J R Soc Interface. 2008;5:545–553. doi: 10.1098/rsif.2007.1152. [PMC free
article] [PubMed] [Cross Ref]
• Wu JT, Riley S, Fraser C, Leung GM. Reducing the impact of the next influenza pandemic using household-based public health interventions. PLoS Med. 2006;3:e361. doi: 10.1371/journal.pmed.0030361.
[PMC free article] [PubMed] [Cross Ref]
• Roche: update on current developments around Tamiflu http://www.roche.com
• Singer AC, Howard BM, Johnson AC, Knowles CJ, Jackman S, Accinelli C, Caracciolo AB, Bernard I, Bird S, Boucard T, Boxall A, Brian JV, Cartmell E, Chubb C, Churchley J, Costigan S, Crane M,
Dempsey MJ, Dorrington B, Ellor B, Fick J, Holmes J, Hutchinson T, Karcher F, Kelleher SL, Marsden P, Noone G, Nunn MA, Oxford J, Rachwal T, et al. Meeting report: risk assessment of Tamiflu use
under pandemic conditions. Environ Health Perspect. 2008;116:1563–1567. [PMC free article] [PubMed]
Articles from BMC Medicine are provided here courtesy of BioMed Central
Patent application title: Method for Detecting Targets Using Space-Time Adaptive Processing
A method for detecting a target in a non-homogeneous environment using a space-time adaptive processing of a radar signal includes normalizing training data of the non-homogeneous environment to
produce normalized training data; determining a normalized sample covariance matrix representing the normalized training data; tracking a subspace represented by the normalized sample covariance
matrix to produce a clutter subspace matrix; determining a test statistic representing a likelihood of a presence of the target in the radar signal based on the clutter subspace matrix and a steering
vector; and comparing the test statistic with a threshold to detect the target.
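The five steps of the summarized method can be sketched end-to-end with NumPy. The synthetic clutter model, the dimensions, the target amplitude, and the threshold below are all illustrative, and a batch eigendecomposition stands in for the recursive subspace trackers the claims name.

```python
import numpy as np

rng = np.random.default_rng(1)
MN, K, r = 32, 128, 4    # space-time dimension (M*N), training cells, clutter rank

# Synthetic compound-Gaussian-like training data: a rank-r clutter component
# with a random per-cell texture (power), plus white noise.
A = rng.standard_normal((MN, r)) + 1j * rng.standard_normal((MN, r))
def cell():
    texture = np.sqrt(rng.gamma(2.0, 2.0))
    g = rng.standard_normal(r) + 1j * rng.standard_normal(r)
    noise = (rng.standard_normal(MN) + 1j * rng.standard_normal(MN)) / np.sqrt(2)
    return texture * (A @ g) + noise

X = np.column_stack([cell() for _ in range(K)])

# 1) normalize the training data:  x~_k = x_k / sqrt(x_k^H x_k / K)
norms = np.sqrt(np.real(np.sum(X.conj() * X, axis=0)) / K)
Xn = X / norms

# 2) normalized sample covariance:  R~ = (1/K) sum_k x~_k x~_k^H
Rn = (Xn @ Xn.conj().T) / K

# 3) clutter subspace: a batch eigendecomposition stands in here for the
#    recursive trackers (PAST, PASTd, OPAST, FAPI, ...) named in the claims.
_, V = np.linalg.eigh(Rn)
U = V[:, -r:]                          # r dominant eigenvectors
P = np.eye(MN) - U @ U.conj().T        # projector onto the clutter-free space

# 4) test statistic and 5) threshold comparison
def statistic(x0, s):
    num = abs(s.conj() @ (P @ x0)) ** 2
    den = np.real(s.conj() @ (P @ s)) * np.real(x0.conj() @ (P @ x0))
    return num / den

s = np.exp(2j * np.pi * 0.1 * np.arange(MN)) / np.sqrt(MN)  # steering vector
x_h0 = cell()                   # clutter + noise only
x_h1 = cell() + 20.0 * s        # target with an illustrative amplitude
eta = 0.5                       # illustrative threshold
print("detect under H0:", statistic(x_h0, s) > eta)
print("detect under H1:", statistic(x_h1, s) > eta)
```

By the Cauchy-Schwarz inequality applied to the projected vectors, the statistic is bounded in [0, 1]; projecting off the clutter subspace before correlating with the steering vector is what makes the detector insensitive to the per-cell texture power.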
A method for detecting a target in a non-homogeneous environment using space-time adaptive processing of a radar signal, comprising the steps of: normalizing training data of the non-homogeneous
environment to produce normalized training data; representing the normalized training data as a normalized sample covariance matrix; tracking a subspace represented by the normalized sample
covariance matrix to produce a clutter subspace matrix; determining a test statistic representing a likelihood of a presence of the target in the radar signal based on the clutter subspace matrix and
a steering vector; and comparing the test statistic with a threshold to detect the target, wherein the steps are performed by a processor.
The method of claim 1, wherein the normalizing is according to $\tilde{x}_k = x_k / \sqrt{x_k^H x_k / K}$, wherein $x_k$ is a k-th training vector, K is a total number of training vectors, H is a
Hermitian transpose operation, and $\tilde{x}_k$ is a k-th normalized training data vector.
The method of claim 2, wherein the normalized sample covariance matrix $\tilde{R}$ is determined based on a sub-space tracking method according to $\tilde{R} = \frac{1}{K} \sum_{k=1}^{K} \tilde{x}_k \tilde{x}_k^H$.
The method of claim 2, wherein the normalized sample covariance matrix is determined according to a sub-space tracking method.
The method of claim 3, wherein the sub-space tracking uses a clutter subspace tracking according to $\tilde{R} = U \Lambda U^H + \sigma^2 I$, wherein $\Lambda \in \mathbb{C}^{r \times r}$ is a
diagonal matrix with the most important r eigenvalues of the clutter subspace along the diagonal, $\sigma^2$ is a noise variance, I is an identity matrix, $U \in \mathbb{C}^{MN \times r}$ is the
clutter subspace matrix, M is the number of antenna elements, N is the number of pulses, and r is a rank of the estimated clutter subspace covariance matrix.
The method of claim 5, wherein a method for the clutter subspace tracking is selected from a group including a projection approximation subspace tracker (PAST), an orthogonal projection approximation
subspace tracker (OPAST), a projection approximation subspace tracker with deflation (PASTd), a fast approximate power iteration (FAPI), and a modified fast approximate power iteration (MFAPI).
The method of claim 5, further comprising: determining the test statistic according to $T_{\mathrm{invention}} = \dfrac{|s^H (I - \tilde{U}\tilde{U}^H) x_0|^2}{(s^H (I - \tilde{U}\tilde{U}^H) s)\,(x_0^H (I - \tilde{U}\tilde{U}^H) x_0)}$, wherein $x_0$ is a data vector to be tested for a presence of the target, and s is a steering vector for a given Doppler frequency and angle of arrival.
The method of claim 1, further comprising: acquiring the training data of the non-homogeneous environment using a phased-array antenna.
The method of claim 1, wherein the clutter is compound Gaussian distributed.
A method for detecting a target in a radar signal of a non-homogeneous environment using a space-time adaptive processing, comprising the steps of: normalizing training data according to
$\tilde{x}_k = x_k / \sqrt{x_k^H x_k / K}$ to produce normalized training data; determining a normalized sample covariance matrix representing the normalized training data according to
$\tilde{R} = \frac{1}{K} \sum_{k=1}^{K} \tilde{x}_k \tilde{x}_k^H$; tracking a subspace represented by the normalized sample covariance matrix using a clutter subspace tracking according to
$\tilde{R} = U \Lambda U^H + \sigma^2 I$ to produce a clutter subspace matrix U; determining a test statistic representing a likelihood of a presence of the target in the radar signal based on the
clutter subspace matrix and a steering vector according to $T_{\mathrm{invention}} = \dfrac{|s^H (I - \tilde{U}\tilde{U}^H) x_0|^2}{(s^H (I - \tilde{U}\tilde{U}^H) s)\,(x_0^H (I - \tilde{U}\tilde{U}^H) x_0)}$;
and comparing the test statistic with a threshold to detect the target, wherein $\tilde{R}$ is a normalized sample covariance matrix, $\Lambda \in \mathbb{C}^{r \times r}$ is a diagonal matrix with
the most important r eigenvalues of the clutter subspace along the diagonal, $\sigma^2$ is a noise variance, I is an identity matrix, $U \in \mathbb{C}^{MN \times r}$ is an estimated clutter
subspace, $x_0$ is a target data vector under a test for target presence, and s is a steering vector for a given Doppler and angle of arrival, wherein the steps are performed by a processor.
A system for detecting a target in a radar signal of a non-homogeneous environment using a space-time adaptive processing, comprising: a phased-array antenna with multiple spatial channels for
acquiring training data; a processor for normalizing the training data and for determining a normalized sample covariance matrix representing the normalized training data; and a tracking subspace
estimator for tracking the normalized sample covariance matrix to produce a clutter subspace matrix, wherein the processor determines a test statistic representing a likelihood of a presence of the
target in the radar signal based on the clutter subspace matrix and a steering vector and compares the test statistic with a threshold to detect the target.
RELATED APPLICATION [0001]
This Patent Application claims priority to Provisional Application 61/471,407, "Method for Detecting Targets Using Space-Time Adaptive Processing," filed by Pun et al. on Apr. 4, 2011, incorporated
herein by reference.
FIELD OF THE INVENTION [0002]
This invention relates generally to signal processing, and in particular to space-time adaptive processing (STAP) for detecting a target using radar signals.
BACKGROUND OF THE INVENTION [0003]
Space-time adaptive processing (STAP) is frequently used in radar systems to detect a target, e.g., a car, or a plane. STAP has been known since the early 1970's. In airborne radar systems, STAP
improves target detection when interference in an environment, e.g., ground clutter and jamming, is a problem. STAP can achieve order-of-magnitude sensitivity improvements in target detection.
Typically, STAP involves a two-dimensional filtering technique applied to signals acquired by a phased-array antenna with multiple spatial channels. Generally, the STAP is a combination of the
multiple spatial channels with time dependent pulse-Doppler waveforms. By applying statistics of interference of the environment, a space-time adaptive weight vector is formed. Then, the weight
vector is applied to the coherent signals received by the radar to detect the target.
A number of non-adaptive and adaptive STAP detectors are available for detecting moving targets in non-Gaussian distributed environments. Due to the additional time-correlated texture component, the
optimum detection in the compound-Gaussian yields an implicit form, in most cases. The solution to the optimum detector usually resorts to an expectation-maximization procedure. On the other hand,
sub-optimal detectors in the compound-Gaussian case are expressed in closed-form. Among these detectors are the normalized adaptive matched filter (NAMF) with the standard sample covariance matrix,
and the NAMF with the normalized sample covariance matrix.
Speckle in a compound-Gaussian distributed environment has a low-rank structure. A speckle pattern is a random intensity pattern produced by mutual interference of a set of wavefronts. Therefore, an
adaptive eigen value/singular-value decomposition (EVD/SVD) is used, where, instead of using the inverse of the sample covariance matrix, a projection of the received signal and steering vector into
the null space of the clutter subspace is used to obtain the detection statistics. The EVD/SVD--based method is able to reduce the training requirement to O(2r), where r is the rank of the
disturbance covariance matrix. However, the computational complexity of this method remains high, at O(M^3 N^3), where M is the number of spatial channels and N is the number of pulses. If MN becomes large, then the high computational complexity of the EVD/SVD-based methods is impractical for real-time processing.
FIG. 1 shows a block diagram of the conventional STAP method. When no target is detected, acquired signals 101 include a test signal x_0 110 and a set of training signals x_k, k=1, 2, . . . , K, 120, wherein K is a total number of training signals, which are independent and identically distributed (i.i.d.). The target signal can be expressed as a product of a known steering vector s 130 and unknown amplitude a.
That method normalizes 140 the training signals x_k 120, and then computes the normalized sample covariance matrix 150 using the normalized training data 140. Then, eigenvalue decomposition 160 is applied to the normalized sample covariance matrix 150 to produce a matrix U 165 representing the clutter subspace. Next, the method determines a test statistic 170 describing a likelihood of presence of the target in a test signal 110, as shown in (1):

T_{prior-art} = \frac{|s^H (I - U U^H) x_0|^2}{(s^H (I - U U^H) s)\,(x_0^H (I - U U^H) x_0)},   (1)

where s is a known steering vector for a particular Doppler frequency and angle of arrival, I is an identity matrix, x_0 is the data vector to be tested for target presence, and H is the Hermitian transpose operation.

The resulting test statistic T_{prior-art} 170 is compared to a threshold 180 to detect 190 whether a target is present, or not.
The EVD/SVD-based STAP method works well for compound-Gaussian distributed, i.e., non-homogeneous environments. However, this method is computationally expensive. Accordingly, there is a need in the art to provide a low-complexity STAP method for detecting a target in non-homogeneous environments.
SUMMARY OF THE INVENTION [0011]
The embodiments of the invention provide a system and a method for detecting targets in radar signals using space-time adaptive processing (STAP). To address the high complexity of the adaptive eigenvalue/singular-value decomposition (EVD/SVD) based clutter subspace estimation, some embodiments use a subspace tracking (ST) method.
Accordingly, some embodiments use a low-complexity STAP strategy via subspace tracking in non-homogeneous compound-Gaussian distributed environments. Specifically, various embodiments use ST-based
low-complexity STAP detectors to track the subspace of a speckle component and mitigate the effect of the time-varying texture component.
The ST-based STAP for compound-Gaussian environments is training-efficient, due to its exploitation of the low-rank structure of the speckle component. Also, the ST-based STAP method is computationally more efficient than SVD/EVD-based subspace approaches, due to its subspace tracking ability.
Accordingly, one embodiment of the invention provides a method for detecting a target in a non-homogeneous environment using a space-time adaptive processing of a radar signal. The method includes
normalizing training data of the non-homogeneous environment to produce normalized training data; determining a normalized sample covariance matrix representing the normalized training data; tracking
a subspace represented by the normalized sample covariance matrix to produce a clutter subspace matrix; determining a test statistic representing a likelihood of a presence of the target in the radar
signal based on the clutter subspace matrix and a steering vector; and comparing the test statistic with a threshold to detect the target.
The normalized sample covariance matrix can be determined according to a sub-space tracking method, wherein a method for the clutter subspace tracking can be selected from a group including
projection approximation subspace tracker (PAST), orthogonal projection approximation subspace tracker (OPAST), projection approximation subspace tracker with deflation (PASTd), fast approximate
power iteration (FAPI), and modified fast approximate power iteration (MFAPI).
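As a rough illustration of how such a tracker updates its subspace estimate snapshot by snapshot, here is a minimal rank-one, real-valued PAST recursion in Python. This is an illustrative sketch only (the function name is mine, and the scalar inverse correlation `p` and forgetting factor `beta` follow the standard RLS form of PAST), not the patent's implementation.

```python
def past_rank1(snapshots, w, beta=1.0, p=1.0):
    # Rank-one PAST: track the dominant subspace direction w from a
    # stream of snapshots x. beta is the forgetting factor; p is the
    # scalar inverse correlation of the projected data y = w^T x.
    for x in snapshots:
        y = sum(wi * xi for wi, xi in zip(w, x))    # project snapshot onto w
        h = p * y
        g = h / (beta + y * h)                      # RLS gain
        p = (p - g * h) / beta                      # update inverse correlation
        e = [xi - y * wi for wi, xi in zip(w, x)]   # residual x - w y
        w = [wi + g * ei for wi, ei in zip(w, e)]   # W <- W + e g^H
    return w
```

Feeding snapshots that lie along a fixed direction drives `w` toward that direction, which is the behavior the clutter subspace estimator relies on.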
Another embodiment discloses a method for detecting a target in a non-homogeneous environment using a space-time adaptive processing of a radar signal. The method includes normalizing training data according to \tilde{x}_k = x_k / \sqrt{x_k^H x_k / K} to produce normalized training data \tilde{x}_k; determining a normalized sample covariance matrix representing the normalized training data according to \tilde{R} = \frac{1}{K} \sum_{k=1}^{K} \tilde{x}_k \tilde{x}_k^H; tracking a subspace represented by the normalized sample covariance matrix using a clutter subspace tracking according to \tilde{R} = \tilde{U} \tilde{\Lambda} \tilde{U}^H + \sigma^2 I to produce a clutter subspace matrix U; determining a test statistic representing a likelihood of a presence of the target in the radar signal based on the clutter subspace matrix and a steering vector according to T_{invention} = \frac{|s^H (I - \tilde{U} \tilde{U}^H) x_0|^2}{(s^H (I - \tilde{U} \tilde{U}^H) s)\,(x_0^H (I - \tilde{U} \tilde{U}^H) x_0)}; and comparing the test statistic with a threshold to detect the target, wherein \tilde{R} is a normalized sample covariance matrix, \tilde{\Lambda} \in C^{r \times r} is a diagonal matrix with the most important r eigenvalues of the clutter subspace along the diagonal, \sigma^2 is a noise variance, I is an identity matrix, \tilde{U} \in C^{MN \times r} is an estimated clutter subspace, x_0 is a target data vector under a test for target presence, and s is a steering vector for a given Doppler and angle of arrival.
Yet another embodiment discloses a system for detecting a target in a radar signal of a non-homogeneous environment using a space-time adaptive processing. The system includes a phased-array antenna
with multiple spatial channels for acquiring training data; a processor for normalizing the training data and for determining a normalized sample covariance matrix representing the normalized
training data; and a tracking subspace estimator for tracking the normalized sample covariance matrix to produce a clutter subspace matrix, wherein the processor determines a test statistic
representing a likelihood of a presence of the target in the radar signal based on the clutter subspace matrix and a steering vector and compares the test statistic with a threshold to detect the
BRIEF DESCRIPTION OF THE DRAWINGS [0018]
FIG. 1 is a block diagram of prior art space-time adaptive processing (STAP) for detecting targets; and
FIG. 2 is a block diagram of a system and a method of STAP via subspace tracking according to some embodiments of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS [0020]
FIG. 2 shows a block diagram of a system and a method for detecting a target in a non-homogeneous environment using space-time adaptive processing of a radar signal. In one embodiment, the system
includes a phased-array antenna 205 for acquiring normalizing training data via multiple spatial channels, and a processor 201 for normalizing 240 the training data and for determining 250 a
normalized sample covariance matrix representing the normalized training data. Also, the system includes a tracking subspace estimator 260 for tracking the normalized sample covariance matrix to
produce a clutter subspace matrix 265. The tracking subspace estimator can be implemented using the processor 201 or an equivalent external processor. Also, the processor determines 270 a test
statistic representing a likelihood of a presence of the target in the radar signal based on the clutter subspace matrix and a steering vector 230 and compares 280 the test statistic with a threshold
to detect 290 the target.
Various embodiments of the invention use a low-rank structure of a speckle covariance matrix to simplify its tracking by some subspace tracking technique. Some embodiments are based on a realization
that direct application of the subspace tracking (ST) to the compound-Gaussian distributed environment fails to take into account the power oscillation over range bins. To address this problem,
normalization at the training signal level and at the test statistic level are described to adapt the ST to the compound-Gaussian environment. Specifically, the subspace tracking based low complexity
STAP uses the test signal {x_0 \in C^{MN \times 1}} 220, the training signals {x_k}_{k=1}^{K} 210, and the steering vector {s \in C^{MN \times 1}} 230 as inputs.
The compound-Gaussian clutter is a product of a positive scalar \lambda_k and a multi-dimensional complex Gaussian vector z_k with mean zero and covariance matrix R, as in (2):

x_k = \sqrt{\lambda_k}\, z_k \in C^{MN \times 1},   (2)

where z_k \sim CN(0, R). The conditional distribution of x_k is x_k | \lambda_k \sim CN(0, \lambda_k R), which implies power oscillations over range bins.
Because the clutter data have different powers over range bins, a normalization of the clutter data is preferred for precisely tracking the subspace R. One simple solution is to perform instantaneous
power normalization of the clutter data before applying the ST techniques, as \tilde{x}_k = x_k / \sqrt{x_k^H x_k / K} 240. Then, a normalized sample covariance matrix 250 is computed using the normalized training data 240. The clutter subspace estimator 260 can use various methods such as projection approximation subspace tracker (PAST), orthogonal projection approximation subspace tracker (OPAST), projection approximation subspace tracker with deflation (PASTd), fast approximate power iteration (FAPI), and modified fast approximate power iteration (MFAPI).
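The instantaneous power normalization \tilde{x}_k = x_k / \sqrt{x_k^H x_k / K} can be sketched in a few lines of plain Python. This is an illustrative sketch: the function name is mine, and I assume K defaults to the number of snapshots when not given.

```python
import math

def normalize_snapshots(X, K=None):
    # x~_k = x_k / sqrt(x_k^H x_k / K): equalize the power of each
    # training snapshot before forming the sample covariance matrix.
    K = K if K is not None else len(X)
    out = []
    for x in X:
        scale = math.sqrt(sum(abs(v) ** 2 for v in x) / K)
        out.append([v / scale for v in x])
    return out
```

After this step every snapshot contributes comparable power to the sample covariance, which is what lets the tracker follow the speckle subspace despite the texture-induced power oscillation over range bins.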
Accordingly, one embodiment uses a normalized ST-based STAP detector 270 according to

T_{invention} = \frac{|s^H (I - \tilde{U} \tilde{U}^H) x_0|^2}{(s^H (I - \tilde{U} \tilde{U}^H) s)\,(x_0^H (I - \tilde{U} \tilde{U}^H) x_0)},   (2)

where \tilde{U} 265 is the estimated clutter subspace from the instantaneously normalized signals 240 and 250, using some subspace tracking techniques 260. The test statistic 270 is used for testing whether a target is present. The resulting test statistic T_{invention} 270 is compared to a threshold 280 to detect 290 whether a target is present, or not.
Accordingly, a method for detecting a target in a non-homogeneous environment using a space-time adaptive processing of a radar signal, can include normalizing training data of the non-homogeneous
environment to produce normalized training data; determining a normalized sample covariance matrix representing the normalized training data; tracking a subspace represented by the normalized sample
covariance matrix to produce a clutter subspace matrix; determining a test statistic representing a likelihood of a presence of the target in the radar signal based on the clutter subspace matrix and
a steering vector; and comparing the test statistic with a threshold to detect the target.
For example, in one embodiment the method includes normalizing 240 training data according to \tilde{x}_k = x_k / \sqrt{x_k^H x_k / K} to produce normalized training data \tilde{x}_k; determining 250 a normalized sample covariance matrix representing the normalized training data according to \tilde{R} = \frac{1}{K} \sum_{k=1}^{K} \tilde{x}_k \tilde{x}_k^H; tracking 260 a subspace represented by the normalized sample covariance matrix using a clutter subspace tracking according to \tilde{R} = \tilde{U} \tilde{\Lambda} \tilde{U}^H + \sigma^2 I to produce a clutter subspace matrix U 265; determining 270 a test statistic representing a likelihood of a presence of the target in the radar signal based on the clutter subspace matrix and a steering vector according to T_{invention} = \frac{|s^H (I - \tilde{U} \tilde{U}^H) x_0|^2}{(s^H (I - \tilde{U} \tilde{U}^H) s)\,(x_0^H (I - \tilde{U} \tilde{U}^H) x_0)}; and comparing 280 the test statistic with a threshold to detect 290 the target, wherein \tilde{R} is a normalized sample covariance matrix, \tilde{\Lambda} \in C^{r \times r} is a diagonal matrix with the most important r eigenvalues of the clutter subspace along the diagonal, \sigma^2 is a noise variance, I is an identity matrix, \tilde{U} \in C^{MN \times r} is an estimated clutter subspace, x_0 is a target data vector under a test for target presence, and s is a steering vector for a given Doppler frequency and angle of arrival.
Effect of the Invention [0028]
The embodiments of the invention provide a method for detecting targets. A low-complexity STAP via subspace tracking is provided for a compound-Gaussian distributed environment, which models the power oscillation between the test and the training signals.
The above-described embodiments of the present invention can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof.
When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers.
Such processors may be implemented as integrated circuits, with one or more processors in an integrated circuit component. Though, a processor may be implemented using circuitry in any suitable format.
Further, it should be appreciated that a computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, minicomputer, or a tablet
computer. Such computers may be interconnected by one or more networks in any suitable form, including as a local area network or a wide area network, such as an enterprise network or the Internet.
Such networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks, wired networks or fiber optic networks.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of
the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
Patent applications by Zafer Sahinoglu, Cambridge, MA US
Patent applications in class CLUTTER ELIMINATION
Patent applications in all subclasses CLUTTER ELIMINATION
User Contributions:
Comment about this patent or add new information about this topic: | {"url":"http://www.faqs.org/patents/app/20120249361","timestamp":"2014-04-17T06:56:59Z","content_type":null,"content_length":"50294","record_id":"<urn:uuid:2dd97881-49aa-48a8-b873-2b50a2634c00>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00258-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Scipy-tickets] [SciPy] #620: scipy.stats.distributions.binom.pmf returns incorrect values
SciPy scipy-tickets@scipy....
Mon Oct 27 12:40:19 CDT 2008
#620: scipy.stats.distributions.binom.pmf returns incorrect values
Reporter: robertbjornson | Owner: somebody
Type: defect | Status: new
Priority: normal | Milestone: 0.7.0
Component: scipy.stats | Version:
Severity: normal | Resolution:
Keywords: |
Comment (by josefpktd):
Currently, the pmf is calculated from the discrete difference in the cdf,
which uses a formula in scipy.special.
I briefly looked at the case with using the above pmf formula:
It increases the precision of the calculated probabilities when the number
of trials is small, e.g. n = 10, in the range of 1e-16 (or maybe 1e-13) in
the example.
However, the formula with comb cannot handle large numbers of trials, e.g. n > 1100, and produces mostly nans:
>>> pmfn = bpmf(np.arange(5001),5000, 0.99)
>>> np.sum(np.isnan(pmfn))
>>> pmfn = bpmf(np.arange(10001),10000, 0.99)
>>> np.sum(np.isnan(pmfn))
>>> pmfn[2000:2010]
array([ NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN])
I don't think 1e-16 or so change in accuracy is important enough to make
this change, since it does not work as a full substitute to the current
version, unless there is a special formula somewhere in scipy that can
calculate the pmf correctly for both small and large numbers of trials.
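The numerical point is easy to reproduce without scipy: a direct comb-based formula (a stand-in for the reporter's `bpmf`, which is not shown in full here) produces nan for large n because the binomial coefficient overflows the float range, while evaluating the same pmf in log space with lgamma stays finite. A sketch, not the scipy implementation:

```python
import math

def pmf_naive(k, n, p):
    # C(n, k) * p^k * (1-p)^(n-k); float(C(n, k)) overflows to inf for
    # large n, and inf times the underflowing power terms gives nan.
    try:
        c = float(math.comb(n, k))
    except OverflowError:
        c = float("inf")
    return c * p ** k * (1.0 - p) ** (n - k)

def pmf_log(k, n, p):
    # Same pmf evaluated in log space; stable for both small and large n.
    log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               + k * math.log(p) + (n - k) * math.log1p(-p))
    return math.exp(log_pmf)
```

For n = 10, both agree to near machine precision; for n = 10000 the naive version returns nan in the tails while the log-space version returns a valid (possibly underflowed-to-zero) probability.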
Ticket URL: <http://scipy.org/scipy/scipy/ticket/620#comment:1>
SciPy <http://www.scipy.org/>
SciPy is open-source software for mathematics, science, and engineering.
More information about the Scipy-tickets mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-tickets/2008-October/001705.html","timestamp":"2014-04-21T03:09:53Z","content_type":null,"content_length":"4845","record_id":"<urn:uuid:403caa56-8673-4781-bc71-80c67b11b0b8>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00020-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: marginal effects in biprobit and average treatment effect in swi
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
Re: st: marginal effects in biprobit and average treatment effect in switching probit
From Austin Nichols <austinnichols@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: marginal effects in biprobit and average treatment effect in switching probit
Date Fri, 15 Jun 2012 12:27:53 -0400
Monica Oviedo <monica.oviedo@uab.cat>:
The relevant code appears on slide 14 of 46 and is prefaced by the text:
How do we calculate the marginal effect of treatment after biprobit? Three
"obvious" approaches: use -margins-, use -predict- to get probabilities, or use
binormal() with predicted linear indices. The last is more correct, but all
should give essentially the same answer.
Evidently, the -margins- approach is the least correct.
If results differ, you should prefer the other approaches, but
read the references on interpretation of various ATE estimates.
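As a rough sketch of the "predicted probabilities" route for the recursive model, in Python rather than Stata: average Phi(xb + beta_y2) - Phi(xb) over observations, switching y2 on and off in each observation's linear index. The inputs here are hypothetical; in practice you would take the linear indices and the y2 coefficient from the fitted biprobit.

```python
from math import erf, sqrt

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ate_recursive_biprobit(xb, beta_y2):
    # Average of P(y1=1 | y2=1) - P(y1=1 | y2=0) over observations.
    diffs = [norm_cdf(b + beta_y2) - norm_cdf(b) for b in xb]
    return sum(diffs) / len(diffs)
```

With a single observation at xb = 0 and beta_y2 = 1 this returns Phi(1) - Phi(0), about 0.3413, which is the kind of per-observation difference that -margins- and the manual predict-based calculation should both be approximating.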
Also, read the Statalist FAQ on not replying to an old thread to
initiate a new query.
I doubt -biprobit- will work well for the model with an endogenous interaction.
You have a single instrument z for two endogenous variables y2 and x*y2.
Instead of
biprobit (y1=x y2 xy2) (y2=x z)
ivreg2 y1 x (y2 xy2=z xz)
(ivreg2 is on SSC) and read the references on weak instruments
diagnostics in the -ivreg2- help file.
On Thu, Jun 14, 2012 at 6:45 AM, Monica Oviedo <monica.oviedo@uab.cat> wrote:
> Dear Statalist:
> I'm estimating the effect of an endogenous dichotomous variable y2 on a
> dichotomous variable y1 using a recursive biprobit model:
> biprobit (y1=x y2) (y2=x z)
> Where z is the exclusion restriction. I'm interested in the marginal effect
> of y2 on y1, which I think is:
> E[y1/y2=1] - E[y1/y2=0]
> I did what Austin Nichols suggested in this thread (namely, the conditional
> prob of Y1=1 given y2=1 less the conditional probability of Y1=1 given y2=0,
> letting y2=1 and y2=0 in turn for each observation, and then averaging over
> observations). In addition, I followed the procedures sugested by him in
> this file:
> http://www.stata.com/meeting/chicago11/materials/chi11_nichols.pdf
> This is:
> margins, dydx(y2) predict(pmarg1) force
> I think the latter is correct for estimating what I need. However, I get
> very different results from both procedures (in the first case a marginal
> effect of 0.08 vs a marginal effect of 0.45 using the second way).
> What is the difference between both procedures? Is it supposed that they
> estimate the same effect?
> A final question is if biprobit is well suited for estimate the following:
> biprobit (y1=x y2 x*y2) (y2=x z)
> This is, if there is any problem when an interaction term between the
> endogenous variable y2 and a continous x is added.
> Regards,
> Monica Oviedo
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2012-06/msg00757.html","timestamp":"2014-04-17T21:52:38Z","content_type":null,"content_length":"10356","record_id":"<urn:uuid:fa6f9629-e26f-46f4-8c5c-8a0753824e13>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00462-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the length of the hypotenuse of the right triangle shown below? If necessary, round your answer to the nearest hundredth.
can you determine the lengths of the sides?
the lengths of the sides means how long each side is
i'm confused on how to get the answer
do you know how to determine a length when you have two points ?
not one bit,
there are a few ways to do this. but the easiest is just use the distance equation \[distance = \sqrt{(x_{2} - x_{1})^{2} + (y_{2} - y_{1})^{2}}\] the points on the hypotenuse of that triangle
are (4, 2) and (-5, -3) it looks like so just calculate the distance between those two points
are you in conexus
that's the easy way. the other way is to calculate the lengths of the sides separately just by looking at the picture, the length of the top side is 9 and the length of the left side is 5 9^2 + 5
^2 = c^2
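Both routes the helper describes give the same number; a quick check in Python with the side lengths 9 and 5 (equivalently, the two points used in the distance formula):

```python
import math

# Legs read off the picture: 9 across the top, 5 down the side.
c = math.hypot(9, 5)     # same as sqrt(9**2 + 5**2) = sqrt(106)
print(round(c, 2))       # about 10.3, i.e. 10.30 to the nearest hundredth
```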
i don't know what conexus is but it called graphing data for my work?
|dw:1359037431903:dw| maybe that will help ?
25+81? = the answer? or is there more?
no you have to solve for 'c'
25 + 81 = c^2
106^2 =11236 ?
Ha yeah i don't get this
this is how you solve for c
hm. you should review Pythagorean theorem
Best Response
You've already chosen the best response.
yes thats right
Best Response
You've already chosen the best response.
do i just put 10.29 though?
Best Response
You've already chosen the best response.
no you'd have to round to the nearest hundredth. it would be 10.30
Best Response
You've already chosen the best response.
because 10.295... rounds up to 10.30, that's why
Best Response
You've already chosen the best response.
Oh, thanks,
{"url":"http://openstudy.com/updates/510140b6e4b0ad57a561f5ba","timestamp":"2014-04-19T02:08:54Z","content_type":null,"content_length":"123742","record_id":"<urn:uuid:f634892e-3688-4b47-a37f-e23471ff3277>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
Semantics of Programming Languages
Lecturer: Dr S. Staton
No. of lectures: 12
This course is a prerequisite for the Part II courses Topics in Concurrency, and Types.
The aim of this course is to introduce the structural, operational approach to programming language semantics. It will show how to specify the meaning of typical programming language constructs, in
the context of language design, and how to reason formally about semantic properties of programs.
• Introduction. Transition systems. The idea of structural operational semantics. Transition semantics of a simple imperative language. Language design options. [2 lectures]
• Types. Introduction to formal type systems. Typing for the simple imperative language. Statements of desirable properties. [2 lectures]
• Induction. Review of mathematical induction. Abstract syntax trees and structural induction. Rule-based inductive definitions and proofs. Proofs of type safety properties. [2 lectures]
• Functions. Call-by-name and call-by-value function application, semantics and typing. Local recursive definitions. [2 lectures]
• Data. Semantics and typing for products, sums, records, references. [1 lecture]
• Subtyping. Record subtyping and simple object encoding. [1 lecture]
• Semantic equivalence. Semantic equivalence of phrases in a simple imperative language, including the congruence property. Examples of equivalence and non-equivalence. [1 lecture]
• Concurrency. Shared variable interleaving. Semantics for simple mutexes; a serializability property. [1 lecture]
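To make the "structural operational semantics" idea in the syllabus concrete, here is a minimal small-step evaluator for arithmetic expressions in Python. The representation (tuples like ("plus", e1, e2), integers as values) is my own illustration, not the course's notation.

```python
def is_value(e):
    return isinstance(e, int)

def step(e):
    # One transition e -> e', following the usual left-to-right SOS rules:
    # reduce the left subterm first, then the right, then apply the operator.
    op, e1, e2 = e
    if not is_value(e1):
        return (op, step(e1), e2)
    if not is_value(e2):
        return (op, e1, step(e2))
    return e1 + e2 if op == "plus" else e1 * e2

def evaluate(e):
    # Reflexive-transitive closure of the one-step relation: iterate
    # transitions until a value is reached.
    while not is_value(e):
        e = step(e)
    return e
```

Typing judgements and type-safety proofs in the course constrain exactly this kind of transition system: progress says a well-typed non-value can always take a step, and preservation says the step keeps the type.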
At the end of the course students should
• be familiar with rule-based presentations of the operational semantics and type systems for some simple imperative, functional and interactive program constructs;
• be able to prove properties of an operational semantics using various forms of induction (mathematical, structural, and rule-based);
• be familiar with some operationally-based notions of semantic equivalence of program phrases and their basic properties.
* Pierce, B.C. (2002). Types and programming languages. MIT Press.
Hennessy, M. (1990). The semantics of programming languages. Wiley. Out of print, but available on the web at http://www.scss.tcd.ie/Matthew.Hennessy/slexternal/reading.php
Winskel, G. (1993). The formal semantics of programming languages. MIT Press. | {"url":"http://www.cl.cam.ac.uk/teaching/1112/CST/node39.html","timestamp":"2014-04-20T16:12:18Z","content_type":null,"content_length":"9742","record_id":"<urn:uuid:8d7b9dec-3d62-4c38-936c-2f993613263c>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00281-ip-10-147-4-33.ec2.internal.warc.gz"} |
Education and Training: Data Sets
Data Sets for Selected Short Courses
Data sets for the following short courses can be viewed from the web.
Data sets for Design of Experiments Short Course
NOTE: You probably need to download the macros used for the "10 step analysis" to your "DATAPLOT\MACROS" directory. These can be downloaded as a single tar file (WinZip knows how to handle tar files) or as the individual files:
The following data sets are available for the Design of Experiments (DEX) course:
Data sets for Bayesian Analysis Short Course
The following data sets are available for the Bayesian Analysis course:
Data sets for Regression Short Course
The first few data sets from the class notes are listed below. The Data Set Name is the name I gave each data set in the notes. The File Name gives the name of the file containing the data set and is often the original name of the data set as well. The column Source lists where I got the data, not necessarily the original source of the data. Data sets I made up are listed as "Simulated" in the Source column. The data sets are ordered chronologically by their first appearance in the notes. I will try to add the rest of the data sets soon. If there are data sets you would particularly like to use that are not listed here please let me know which ones they are and I will add them first.
Data sets for Analysis of Variance Short Course
The following data sets are available for the Analysis of Variance (ANOVA) course:
Note: you probably need to view the Excel files using Internet Explorer from a Windows platform.
Data sets for Exploratory Data Analysis Short Course
The following data sets are available for the Exploratory Data Analysis (EDA) course:
Data sets for Statistical Concepts Short Course
The following data sets are available for the Statistical Concepts course:
Privacy Policy/Security Notice
Disclaimer | FOIA
NIST is an agency of the U.S. Commerce Department's Technology Administration.
Date created: 2/1/2002
Last updated: 8/28/2003
Please email comments on this WWW page to sedwww@nist.gov. | {"url":"http://itl.nist.gov/div898/education/datasets.htm","timestamp":"2014-04-21T04:32:27Z","content_type":null,"content_length":"21488","record_id":"<urn:uuid:d238a657-8978-4b48-ad8d-5d14ef4dcd51>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00345-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can Someone Check My Answer's??? @kelliegirl33
Angiosperms have organs such as ______________ and produce seeds. They are described as _________________ plants. (1 point) flowers; vascular / stems; pollinated / rhizoids; vascular / leaves; non-vascular @kelliegirl33
I think it's the first one
Best Response
You've already chosen the best response.
Based on the leaf cross section shown above, what do you know about the plant from which it came? Â (1Â point)It is a vascular plant. It is a nonvascular plant. It is a monocot. It is a
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
I agree with you on the first one ...flowers, vascular
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
I believe it is a monocot.
Best Response
You've already chosen the best response.
can you tell me why?
Best Response
You've already chosen the best response.
can you tell me how you got that answer??
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
wait....what did you get for the diagram ?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
let me check again....
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
I am not sure about this one.....Biology is not my best subject. Sorry.
Best Response
You've already chosen the best response.
its okay do you know a person who could help me then?
Best Response
You've already chosen the best response.
let me check who is all on now....hold on
Best Response
You've already chosen the best response.
@dmezzullo .....need a little help here....please
Best Response
You've already chosen the best response.
@Opcode ....help
Best Response
You've already chosen the best response.
@.Sam. ......can you help
Best Response
You've already chosen the best response.
maybe someone will come
Best Response
You've already chosen the best response.
okay thank you
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
he lost me ???
Best Response
You've already chosen the best response.
@ChuckNorris123 Watch the Lang
Best Response
You've already chosen the best response.
hey....none of that ChuckNorris
Best Response
You've already chosen the best response.
@heyheyhelpme Welcome to Openstudy!! But sadly i am not sure about this.
Best Response
You've already chosen the best response.
I started thinking it was a monocot, but now I am not sure
Best Response
You've already chosen the best response.
@ChuckNorris123 Welcome to openstudy!! Now plz be nice or i am gonna have to report u for cussing and being violent.
Best Response
You've already chosen the best response.
okay thanks anyways @dmezzullo
Best Response
You've already chosen the best response.
sorry @heyheyhelpme ......maybe someone will come along who knows more about biology
Best Response
You've already chosen the best response.
@satellite73 Take care of this
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
its okay thank you @kelliegirl33
Best Response
You've already chosen the best response.
@ChuckNorris123 This is called Openstudy. Not Cuss everyone out to Look "Cool". Js
Best Response
You've already chosen the best response.
It might be a dicot....it looks similar to one
Best Response
You've already chosen the best response.
we are trying to help @heyheyhelpme ....concentrate on that
Best Response
You've already chosen the best response.
im trying @ChuckNorris123 could you go off my question please
Best Response
You've already chosen the best response.
good job...he left
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
heres another question use the same pic though Based on the leaf cross section shown above, what part is responsible for helping the plant adapt to land by conserving water? Â (1Â point)F G H J
Best Response
You've already chosen the best response.
@heyheyhelpme ....look at this
Best Response
You've already chosen the best response.
would that be my answer????
Best Response
You've already chosen the best response.
Any rigorous way to claim that sums with repeat summands are few?
Let $B \subset \mathbb{Z}^+$. Define $r_{B,h}(n)$ to be the number of ways of writing $n$ as a sum of $h$ elements of $B$, and $R_{B,h}(n)$ the number of ways to write $n$ as a sum of $h$ DISTINCT elements of $B$. In many applications, such as the Erdős–Tetali theorem, which finds a set $B$ such that $R_{B,h}(n) = \Theta(\log(n))$, it is more convenient to work with $R_{B,h}(n)$. The reason why such convenience does not affect the results is that, in general, the number of elements of $B^h$ that sum to $n$ counted by $r_{B,h}(n)$ but not by $R_{B,h}(n)$, namely those with repeat summands, is small.
To illustrate with a fairly concrete example, consider the famous Goldbach Conjecture, which asserts that every even integer larger than 2 can be written as the sum of two primes. In other words, if $B$ is the set of primes, then the Goldbach Conjecture is the assertion that $r_{B,2}(2n) > 0$ for all $n > 1$. But the truth is that one expects $r_{B,2}(2n)$ to tend to infinity. In this case the only sums with repeat summands are those of the form $2p = p + p$ for some prime $p$, and if $r_{B,2}(2n)$ does indeed tend to infinity, then it is a complete triviality to replace $r_{B,2}(2n)$ with $R_{B,2}(2n)$, since $0 \leq r_{B,2}(2n) - R_{B,2}(2n) \leq 1$ for all $n$.
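As a concrete sanity check of this (a minimal brute-force sketch in Python; the helper names `primes_up_to`, `r2`, and `R2` are my own, illustrative choices, not taken from any source), one can verify numerically that $r_{B,2}(2n) - R_{B,2}(2n)$ only ever takes the values 0 and 1 when $B$ is the set of primes:

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, flag in enumerate(sieve) if flag]

def r2(n, B):
    """r_{B,2}(n): unordered pairs from B, repeats allowed, summing to n."""
    Bset = set(B)
    return sum(1 for a in B if a <= n - a and (n - a) in Bset)

def R2(n, B):
    """R_{B,2}(n): unordered pairs of DISTINCT elements of B summing to n."""
    Bset = set(B)
    return sum(1 for a in B if a < n - a and (n - a) in Bset)

B = primes_up_to(1000)
diffs = {r2(2 * n, B) - R2(2 * n, B) for n in range(2, 500)}
print(diffs)  # only 0 and 1 occur
```

The difference is 1 exactly when $n$ itself is prime (coming from the representation $2n = n + n$), and 0 otherwise.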
So my question is, is there some general argument for this observation? That is, in a sufficiently general setting, one can essentially assume that the summands are distinct.
I note that in some very trivial cases this assumption is not appropriate at all. For example, consider $B = \{3k : k \in \mathbb{N}\} \cup \{0,1\}$. Then $B$ is an asymptotic basis of order 3, but it would NOT be a basis at all if we demanded that the summands be distinct. This is of course a somewhat contrived example, since for a set of positive density the number of representations is very small. So of course our criterion for a 'sufficiently general setting' would have to exclude such trivial cases.
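A brute-force check of this contrived example (again an illustrative Python sketch with made-up helper names): with repeats allowed, every integer in the checked range is represented, but no integer $\equiv 2 \pmod 3$ is a sum of three distinct elements of $B$, since the distinct elements can contribute the residue 1 at most once.

```python
from itertools import combinations, combinations_with_replacement

B = sorted({3 * k for k in range(60)} | {0, 1})  # {0, 1, 3, 6, ..., 177}

def r3(n):
    # representations of n as a sum of 3 elements of B, repeats allowed
    return sum(1 for c in combinations_with_replacement(B, 3) if sum(c) == n)

def R3(n):
    # representations of n as a sum of 3 DISTINCT elements of B
    return sum(1 for c in combinations(B, 3) if sum(c) == n)

for n in range(10, 30):
    assert r3(n) > 0                  # a basis of order 3 with repeats allowed
    if n % 3 == 2:
        assert R3(n) == 0             # but n = 2 (mod 3) needs 1 + 1 + 3k
```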
additive-combinatorics nt.number-theory arithmetic-progression
There seems to be a possibly confusing typo: 'it is more convenient to work with $R_{B,h}(n)$.' I believe $r_{B,h}(n)$ is meant instead of $R_{B,h}(n)$. – quid Feb 6 '11 at 16:24
3 Answers
Here is a fairly crude first attempt.
First note that for any basis $B$, when $h=2$, we have \[0\leq r_{B,2}(n)-R_{B,2}(n)\leq 1\] for all $n$, as you note in your question. Hence assume $h\geq 3$. In this case, if $r'_{B,h}(n)=r_{B,h}(n)-R_{B,h}(n)$ counts the number of representations where some elements are identical, then \[ r'_{B,h}(n)=\sum_{m\in 2\cdot(B\cap[1,n])}r_{B,h-2}(n-m)\ll\lvert B_n\rvert\max_{m\leq n}r_{B,h-2}(m).\]
Hence if we can show that this upper bound is $o(r_{B,h}(n))$, then we have $R_{B,h}(n)\sim r_{B,h}(n)$ as required.
For example, note that this shows that distinct summands dominate in the case of Waring bases, where $\lvert B_n\rvert\approx n^{1/k}$ and $r_{B,h}(n)\approx n^{h/k-1}$.

Also note that it deals with e.g. the ternary Goldbach case, where $\lvert B_n\rvert\approx n/\log n$ and $r_{B,3}(n)\gg n^2$.
This does not, however, deal with thin bases, where $r_{B,h}(n)\approx\log n$ and $\lvert B_n\rvert\approx n^{1/k}$. For such cases you may need to rely on probabilistic arguments, as in the proof of the Erdős–Tetali theorem.
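For a rough numerical illustration of the Waring case (a Python sketch with $k=2$, $h=3$; the exact counts depend on the cutoff, and this is only meant to show the trend), the representations that use a repeated square form a small minority of all representations:

```python
from itertools import combinations, combinations_with_replacement

N = 2000
B = [i * i for i in range(1, int(N ** 0.5) + 1)]  # squares up to N

# r_total: multisets of 3 squares (repeats allowed) with sum <= N
# R_total: sets of 3 distinct squares with sum <= N
r_total = sum(1 for c in combinations_with_replacement(B, 3) if sum(c) <= N)
R_total = sum(1 for c in combinations(B, 3) if sum(c) <= N)

print(R_total, r_total, R_total / r_total)  # ratio well below the repeat-free count only by a small margin
```

As the cutoff grows, the ratio approaches 1, consistent with the crude bound in this answer.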
Yes, that context is indeed where my question arose. I am reading the Erdős–Tetali paper, and it seems that they worked with what I called $R_{B,h}(n)$ instead of $r_{B,h}(n)$, and never addressed how one can obtain $r_{B,h}(n)$ from $R_{B,h}(n)$. In Terry Tao and Van Vu's book they did address this issue, but closing the gap in that case seemed to rely on a lot more machinery than one would expect. – Stanley Yao Xiao Feb 6 '11 at 17:26
First, I will give a soft answer to one interpretation of your question; then I will (try to) back this answer up with a concrete example. (I will focus on a finite analog, as I am more familiar with it; however, I at least know that similar issues exist in the infinite setting you explicitly asked about.)
There is no known method, sufficiently general to work in all or at least most cases one cares about, for passing from the typically more convenient setting of allowing repetitions to that
of distinct summands.
There are various questions that are solved allowing repetitions yet are open for distinct summands; or the proofs in the former setting are much simpler than in the latter setting.
To put it differently: that the repetitions setting and the (more difficult) distinct setting are closely related is a useful heuristic that one knows how to make precise in certain cases (yet considerable effort, or even a different type of argument, can be required to do so).
An example, as said in a slightly different context, for illustration: let $p$ be a prime, and let $A$ be a subset of the cyclic group of order $p$. A classical result, the Cauchy–Davenport Theorem, which is not too difficult to prove, asserts that $|A + A| \ge \min (p, 2|A| - 1),$ where $A + A$ denotes the set of all elements that can be written as a sum of two possibly equal elements of $A$ (unrestricted setting); in other words, the subset of elements $g$ of the group for which $r_{A,2}(g)>0$.
Now, a natural analog is the assertion that $|A \hat{+} A| \ge \min (p, 2|A| - 3)$, where $A \hat{+} A$ denotes the set of all elements that can be written as a sum of two distinct elements of $A$ (restricted setting); in other words, the subset of elements $g$ of the group for which $R_{A,2}(g)>0$. While this assertion is now also known to be true, about three decades passed between Erdős–Heilbronn conjecturing it (mid 1960s) and Dias da Silva–Hamidoune and Alon–Nathanson–Ruzsa proving it.
Moreover, for the Cauchy–Davenport Theorem a certain generalization to arbitrary finite abelian groups has been known for decades, namely Kneser's Theorem; there is also a Kneser's Theorem for subsets of the integers. For the distinct setting such an analog only exists in conjectural form; a detailed discussion of this can be found in the following article: V. F. Lev, Restricted set addition in Abelian groups: results and conjectures, Journal de théorie des nombres de Bordeaux, 2005, available freely, e.g., at http://jtnb.cedram.org/item?id=JTNB_2005__17_1_181_0 .
I've found this assumption (that there's no real difference between $r$ and $R$) in countless places. I believe it to be true (in terms of what the theorems are) in many situations, but also false (in terms of how the proofs work) in many situations.

Here's the simplest example that I'm certain is unresolved. Let $A_4(n)$ be the maximum cardinality of a subset $B$ of $\{1,2,\dots,n\}$ so that $r_{B,2}(k)\leq 4$ for all $k$, and let $A_2'(n)$ be the maximum cardinality of a subset of $\{1,\dots,n\}$ so that $R_{B,2}(k)\leq 2$ for all $k$. It is obvious from the definition that $A_4(n)\leq A_2'(n)$, and the presumption (the subject of your question) is that $A_2'(n)/A_4(n) \to 1$. It isn't known, however, that the limit even exists. (Footnote: $A_2'(n) = A_5(n)$. It is known that $A_2(n) \sim A_3(n)$, but not that $A_4(n) \sim A_5(n)$.)
[SOLVED] MATLAB plotting help: piecewise with loops
October 20th 2009, 01:25 PM #1
Oct 2009
[SOLVED] MATLAB plotting help: piecewise with loops
So I've tried to plot this function multiple times using every trick I can come up with, and the loop still only goes through the first elseif statement in the function file. Here is a copy of my function file:
function y=f(x)
if x<=-1;
elseif -1<=x<=1;
elseif 1<=x<=3;
elseif 3<=x<=4;
elseif 4<=x<=5 ;
using this file I try to plot in my main file which is:
delete g228x07.txt; diary g228x07.txt
clear; clc; close all; echo on
% Gilat 228/7
for i=1:n;
grid on
echo off; diary off
the first three lines are just a reference header and do not impact the problem. I don't know if the loop fails in the for loop or in the ifelse on the function file.
Last edited by CaptainBlack; October 20th 2009 at 07:32 PM.
The problem is with the chained comparisons (e.g. -1<=x<=1): when x=2 that condition is still returning true. The way I usually write these expressions is shown below (I'm sure there is probably a better way). Try pasting this code inside a file called "mhfFoo.m" and running it. It works OK on my computer.
function mhfFoo
for i=1:n;
grid on
function y=f(x)
if x<=-1;
elseif x>=-1 && x<=1;
elseif x>=1 && x<=3;
elseif x>=3 && x<=4;
elseif x>=4 && x<=5 ;
Regards Elbarto
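The underlying pitfall generalizes to any language that evaluates comparisons left to right rather than chaining them. A small illustration (Python is used here only to mimic MATLAB's evaluation order; Python's own chained form -1 <= x <= 1 happens to do the mathematically expected thing):

```python
def matlab_style(x):
    # MATLAB evaluates -1<=x<=1 as ((-1 <= x) <= 1): the boolean result
    # (0 or 1) is then compared to 1, which is true for EVERY x.
    return (-1 <= x) <= 1

def chained(x):
    # Python's chained comparison means (-1 <= x) AND (x <= 1).
    return -1 <= x <= 1

for x in (-5, 0, 2, 100):
    print(x, matlab_style(x), chained(x))
# matlab_style is True for all four inputs; chained is True only for x = 0
```

This is exactly why the original function fell through to the first elseif for every x.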
Thanks a lot. It's weird that the code has to be like that, but it worked.
Multiple-condition ifs should be of the form:

if 0<x & x<1
% if body
end
No problem, apocolypto.

Further to CB's reply, MATLAB supports short-circuiting behavior when evaluating statements, so writing the expression as follows is better for evaluating logical scalar values (at least that seems to be the convention MathWorks is using).

if 0<x && x<1  % Note the "&&" vs "&"
% if body
end

It is my understanding that this means that if "0<x" is false, then the whole statement will be false, so "x<1" does not need to be evaluated. This is probably pretty trivial, but I thought I would mention it as it is starting to become more frequent in recent code and it is something that confused me when I was starting out.

Regards Elbarto
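The same short-circuit behavior described above for MATLAB's && can be demonstrated in Python, whose `and` always short-circuits (an illustrative sketch; the function names are arbitrary):

```python
calls = []

def left():
    calls.append("left")
    return False

def right():
    calls.append("right")
    return True

# Because left() returns False, the overall result is already decided,
# so right() is never evaluated.
result = left() and right()
print(result, calls)  # False ['left']
```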
October 20th 2009, 06:30 PM #2
October 20th 2009, 07:22 PM #3
Oct 2009
October 20th 2009, 08:27 PM #4
Grand Panjandrum
Nov 2005
October 20th 2009, 09:19 PM #5