Recent Trends in Exponential Asymptotics
June 28, Monday
13:30 - 14:30 Ovidiu Costin (Rutgers Univ.)
Existence, uniqueness, asymptotic and Borel summability properties of solutions of nonlinear evolution PDEs in
14:45 - 15:45 Masafumi Yoshino (Hiroshima Univ.)
WKB analysis and Poincaré's theorem
16:00 - 17:00 Adri Olde Daalhuis (Univ. of Edinburgh)
Hyperasymptotics for nonlinear ODEs
June 29, Tuesday
10:00 - 11:00 Tatsuya Koike (Kyoto Univ.) and Yukihiro Nishikawa (Hitachi Ltd.)
On exact WKB analysis for the fourth Painlevé hierarchy
11:15 - 12:15 Chris J. Howls (Univ. of Southampton)
< Introductory Course to Exponential Asymptotics >
Introduction to Exponential Asymptotics, I
14:00 - 15:00 Shun Shimomura (Keio Univ.)
On second-order nonlinear differential equations with quasi-Painlevé property
15:15 - 16:15 Nalini Joshi (Univ. of Sydney)
Analytic results (inspired by asymptotics) for the first Painlevé equation
16:30 - 17:30 Masaki Hibino (Meijou Univ.)
Borel summability of divergent solutions for singular 1st order linear PDEs of nilpotent type
June 30, Wednesday
10:00 - 11:00 Setsuro Fujiie (Tohoku Univ.)
WKB solutions near a regular singular point
11:15 - 12:15 Chris J. Howls (Univ. of Southampton)
< Introductory Course to Exponential Asymptotics >
Introduction to Exponential Asymptotics, II
14:00 - 15:00 André Voros (CEA Saclay)
The general 1D Schrödinger equation as an exactly solvable problem
15:15 - 16:15 Eric Delabaere (Univ. of Angers)
Resurgent deformations for an ordinary differential equation of order 2
16:30 - 17:30 Kunio Ichinobe (Nagoya Univ.)
On the structure of the integral kernel for the Borel sum
July 1, Thursday
10:00 - 11:00 Roberto Tateo (Univ. of Turin)
Aspects of the ODE/IM correspondence
11:15 - 12:15 Chris J. Howls (Univ. of Southampton)
On the higher order Stokes phenomenon
14:00 - 15:00 Takahiro Kawai (RIMS, Kyoto Univ.), Tatsuya Koike (Kyoto Univ.), Yukihiro Nishikawa (Hitachi Ltd.), Shunsuke Sasaki (RIMS, Kyoto Univ.) and Yoshitsugu Takei (RIMS, Kyoto Univ.)
On exact WKB analysis for higher order Painlevé equations, I
15:15 - 16:15 Junji Suzuki (Shizuoka Univ.)
The application of the Bethe ansatz method to certain classes of ODE
16:30 - 17:30 Akira Shudo (Tokyo Metropolitan Univ.)
Stokes geometry for the quantized Henon map
July 2, Friday
9:45 - 10:45 Miloslav Znojil (Nuclear Physics Institute, CR)
Factorized exponential asymptotics of wavefunctions: Hill-determinant method,
quasi-exact bound states and energy-dependent Hamiltonians
11:00 - 12:00 Takashi Aoki (Kinki Univ.), Takahiro Kawai (RIMS, Kyoto Univ.), Tatsuya Koike (Kyoto Univ.) and Yoshitsugu Takei (RIMS, Kyoto Univ.)
Accumulation of turning points for an integral equation that appears in plasma physics
13:30 - 14:30 Takahiro Kawai (RIMS, Kyoto Univ.), Tatsuya Koike (Kyoto Univ.), Yukihiro Nishikawa (Hitachi Ltd.), Shunsuke Sasaki (RIMS, Kyoto Univ.) and Yoshitsugu Takei (RIMS, Kyoto Univ.)
On exact WKB analysis for higher order Painlevé equations, II
14:45 - 15:45 Carl M. Bender (Washington Univ. in St. Louis)
Quantum mechanics based on non-Hermitian Hamiltonians
The Life of Stefan Banach
Sheldon Axler
This review of Roman Kaluza’s 1996 book The Life of Stefan Banach was published in American Mathematical Monthly 104 (1997), 577-579.
In at least one printing of the current (fifteenth) edition of the Encyclopedia Britannica, the entry on Stefan Banach did not contain the words "Poland" or "Polish". The Britannica called Banach a
"Soviet mathematician". The encyclopedia fixed its error in later printings, but the mathematics community has not yet adequately documented Banach’s life and ideas. A computer search of Mathematical
Reviews reveals more than eleven thousand publications with the word "Banach" in the title; "Hilbert" occurs in only seven thousand titles. Yet no mathematician or historian of mathematics has
produced a book-length biography of Stefan Banach.
The book under review was written neither by a mathematician nor by a historian. The author, a Polish reporter and journalist, writes well about mathematics without using any mathematical symbols.
Professional mathematicians will spot a few technical errors of the type that inevitably creep into exposition at this level. For example, we read that "the only linear transformations" on a
finite-dimensional Euclidean space are "translations, rotations, and reflections". Such small mistakes in mathematical details can easily be forgiven because the author does a good job of capturing
the flavor of early functional analysis and its creators.
The book suffers more from the lack of a historian’s perspective than from an absence of mathematical expertise. Some events described in the book cry out for more explanation. For example, consider
the author’s description of the Nazi efforts to eliminate the intelligentsia in occupied Poland during World War II. Before capturing the Polish university town of Lvov, where Banach lived and
worked, German officials compiled a list of prominent professors, scientists, and writers in Lvov who would be executed. One night shortly after German soldiers had entered Lvov, SS units murdered
forty leading intellectual figures in Lvov without even the pretense of trials. But Banach was untouched by the Nazi death squads. An alert reader will wonder why Banach, who at this time was
President of the Polish Mathematical Society and a Dean at the university, was not among the intellectuals marked down for liquidation. Unfortunately the author does not comment on the apparent
disparity between his description of Nazi plans to crush Polish intellectual life and the survival of Banach, Poland’s most influential mathematician. Was Banach spared because he had too much fame?
Or were the occupying forces so mathematically illiterate that they had never heard of Banach? The author does not even speculate about these questions that beg to be answered.
As another example of a tantalizing tidbit from the book that needs more explanation, consider the following account (page 51) of Banach’s support for the mathematical logician Leon Chwistek:
... when at some point Chwistek applied for a position in logic in Lvov, Banach backed him unequivocally and helped him to obtain the post. The affair scandalized half of intellectual Poland
since Chwistek, in addition to being a respected scholar, also had a well-deserved reputation as being a somewhat strange and very eccentric artist.
Banach himself was "somewhat strange" and "eccentric"; that description surely fits many mathematicians. So why would Banach’s support for such a person have "scandalized half of intellectual
Poland"? Readers will realize that something more must have been involved here, but the author provides no hints to help solve this mystery.
In 1928 Stefan Banach and his colleague Hugo Steinhaus founded Studia Mathematica, which quickly became the most important journal specializing in the then new field of functional analysis. Today’s
mathematics librarians, grappling with budget problems, will be amused to learn that the first volume of Studia Mathematica cost $1.50 outside Poland.
When teaching the graduate course in functional analysis, I always use the Krein-Milman Theorem and its appearance in Studia Mathematica as an excuse to inject a bit of history into the classroom.
The Krein-Milman Theorem states that in a locally convex topological vector space, every compact convex set is the closed convex hull of its extreme points. This result was published (in somewhat
less generality than the version just stated) in the 1940 volume of Studia Mathematica, which also contained two papers written by Banach. That volume of the journal was printed on poor-quality
paper, clearly due to wartime conditions. The most curious feature of the 1940 volume is that each article (they are all written in either English, French, or German) appears with an abstract in
Russian. Obviously Lvov, where Studia Mathematica was published, lay in the Soviet zone of occupation at the time of publication. Two weeks after Germany had invaded Poland from the west in September
1939, the Soviet Union marched into Poland from the east. Poland was partitioned between Germany and the Soviet Union until the summer of 1941, when Germany attacked the Soviet Union and occupied all
of Poland.
The 1940 volume of Studia Mathematica was the last one edited by Banach, who died at age 53 shortly after World War II ended in 1945. After an absence of eight years, Studia Mathematica resumed
publication in 1948 in Wroclaw. Poland’s border had moved westward after World War II, so that Lvov was then in the Soviet Union (no doubt this accounts for the Britannica’s claim that Banach was a
"Soviet mathematician"). A few years ago Lvov again changed countries---it is now part of Ukraine. Today Studia Mathematica, still a fine journal specializing in functional analysis, is published in
Warsaw. The cover of each issue still proudly bears the names of the founding editors Banach and Steinhaus.
In 1932 Banach published his famous book Théorie des Opérations Linéaires, based on his Polish version published a year earlier. Remarkably, Théorie des Opérations Linéaires remains in print today
more than six decades after its original publication, partly because of its historic value as the first monograph on functional analysis but also because of the clean, modern style with which Banach
presents the fundamentals of the subject (as created in good part by him and his collaborators). While a graduate student, I read Théorie des Opérations Linéaires to study for my French exam. I
remember the thrill of seeing functional analysis developed by a legendary hero of twentieth century mathematics and my delight in his extraordinarily clear writing. I also remember my amusement that
what we today call "Banach spaces" are called "spaces of type (B)" in Banach’s book. From the book under review I learned that Banach had previously written several popular high school mathematics
textbooks for use throughout Poland; perhaps writing for a high school audience had honed Banach’s excellent expository skills.
The Life of Stefan Banach left me hungry for more information about this fascinating figure. However, the author has performed a valuable service by uncovering some previously unknown data about
Banach and by interviewing many of the dwindling number of people who knew Banach. This sketchy biography is a good place to start for someone wanting to learn about Banach.
Rillito Math Tutor
Find a Rillito Math Tutor
...I feel comfortable tutoring this subject. I am qualified to tutor in study skills because I was a tutor in a school for seven years where one of my main jobs was to keep the students organized
and on task. I was then a full time middle school teacher for a semester and I had to make sure that e...
23 Subjects: including trigonometry, algebra 1, algebra 2, biology
...All these experiences have enriched my understanding of math and my ability to explain math to my students. I want students to rise to the highest level of which they are capable. Learning
math is more than just learning formulas or repeating a procedure.
26 Subjects: including algebra 1, algebra 2, calculus, grammar
...I have taught English in Japan and in the U.S. I love meeting people from different countries, and I look forward to working with you!I received my K - 8 teaching credential from San Jose
State University in 1989. I taught elementary school for 3 years before having children, and since then I have substitute taught and tutored.
25 Subjects: including algebra 2, SAT math, English, algebra 1
...I finally earned a Masters Degree in Computer Science. Algebra is a first class in algebra that usually follows a pre-algebra class. It is important to understand the basic concepts of algebra
before continuing to Algebra II. Students will learn to solve equations and inequalities.
7 Subjects: including precalculus, ACT Math, algebra 1, algebra 2
...I get Golf magazine and I have watched various golf technique shows through the years. I would love to help someone with the game, so I can get out on the course more often. I have studied all
the major religions and I understand the similarities and differences.
37 Subjects: including algebra 1, ACT Math, SAT math, English
Steiner's Chain
From Math Images
Steiner's Chain in Third Dimension
Field: Geometry
Created By: fdecomite
Steiner's Chain in Third Dimension
In the image on the right, the Steiner chain consists of a sphere inside another, with a ring-like region in between. This space contains spheres of different diameters but each is tangent to the
previous and succeeding spheres as well as to the two non-intersecting spheres.
Basic Description
We will begin first with the definition of a Steiner chain and follow this description with geometric visuals that will help aid you in the construction of a specific Steiner chain, known as an
Annular Steiner Chain. We will also include instructions on how to construct a Steiner chain using inversion. Lastly, we will provide numerous formulas, all of which represent the algebraic proofs of
the geometric Steiner chain.
A Steiner chain is:
A set, or chain, of $n$ circles that are tangent to each other as well as tangent to two non-intersecting circles.
Many types of Steiner chains exist and below we have included visual representations accompanied with short descriptions. Although a wide range exists, all Steiner chains share specific properties.
For a chain to be considered a Steiner chain, it MUST have:
• Two circles that are NOT tangent to one another
• A region that is made up of additional circles all of which are tangent to the two non-intersecting circles
Types of Steiner Chains
Closed Steiner Chains
Steiner chains are "closed" when the first and last circles in the set (the region between the red and blue circles above) are tangent to one another.
Open Steiner Chains
Steiner chains are "open" when the first and last circles in the set are not tangent to one another. Above you can see two of the black circles intersect each other, but both of these circles still
remain tangent to the two concentric circles.
Multicyclic Steiner Chains
Steiner chains are "multicyclic" when the set of $n$ circles wrap continuously around each other before closing, therefore the first and last circles are not tangent to one another, but overlap. This
is similar to an Open Steiner chain but instead of having only a few intersecting circles in the region, all intersect.
Annular Steiner Chains
Annular Steiner chains are the simplest: they are closed chains consisting of $n$ circles of equal size that surround the inscribed circle. This also means that the inscribed and
circles are concentric.
A Steiner chain doesn't always have to be made with one of the non-intersecting (non-tangent) circles inside the larger one, as the images above show; one of the non-intersecting circles can also lie outside the other. Such a configuration is still considered a Steiner chain since it satisfies the two defining properties:
• There must be two circles that are not tangent to each other
• There must be a region where there are additional circles that are all tangent to both the non-tangent circles.
Creating a Steiner Chain From Scratch
Below are the visuals that show you how to construct an Annular Steiner chain starting with an equilateral triangle and ending with circles with Steiner chain properties. By using an equilateral
triangle, we are going to have three tangent circles within the annulus (since there are three vertices on a triangle) If we wanted an annulus with eight symmetrical tangent circles, we would use an
equilateral octagon to construct the Steiner chain. So, the number of tangent circles in the annulus is determined by the equilateral polygon we choose.
1. Construct a regular triangle $\triangle XYZ$ with center $C$.
2. Using the points of $\triangle XYZ$ as centers, construct tangent circles $X,Y,Z$. Each with a radius of $\frac{1}{2}$ the length of a side of the regular triangle.
3. Construct two concentric circles (red and blue in the image below) so that each is tangent to the three circles $X, Y, Z$.
Now that you have seen how to create an Annular Steiner chain, we will show you how to construct a Steiner chain from an already existing one.
Creating A Steiner Chain Using Inversion
A Steiner chain can also be constructed by reflecting, or inverting, another Steiner chain. Reflection is taking points to the other side of a line so that they are the same distance from the line as
they were before. Inversion is taking points to the "other side" of a circle. For a better understanding of inversion as well as for an active applet where you can invert a circle over another
circle, I highly suggesting looking at the page titled Inversion.
By inverting points along all the circles of the Steiner chain, another can be formed that differs slightly from the original but still maintains the properties specific to Steiner chains (which are
mentioned above under "Basic Description").
1. Construct an inversion circle over which the present Steiner chain will be reflected.
2. To obtain another Steiner chain, invert the already existing Steiner chain over the inversion circle.
Below is a more accurate inversion, this figure was obtained by using the program Cabri Plus II. You can see that after inverting the original Steiner chain, a different Steiner chain was formed. The
circles within the ring between the two non-intersecting circles now have different diameters than their related non-inverted circles, meaning that they are not symmetric anymore (this also suggests
that the two non-tangent circles are no longer concentric). The large size of the image helps you to notice these mentioned points.
A More Mathematical Explanation
Note: understanding of this explanation requires: Algebra and Trigonometry
Within this section we will provide numerous formulas with the purpose of supplying you with the algebraic proofs of Steiner chain construction. By applying these formulas geometrically, you can
verify that what you have created is indeed a Steiner chain.
Tangent Circles Formula
The purpose of this section is to remind you of the formulas used to determine whether two circles are tangent. When constructing a Steiner chain, you can choose radii that in fact will produce
tangent circles. This is the algebraic proof for the geometric image of tangent circles.
Two circles as pictured above, are tangent if:
$(x_{1}-x_{2})^2 + (y_{1}-y_{2})^2 = (r_{1}\pm r_{2})^2$
Further explanation of the above equation can be found on the page Problem of Apollonius
Figure 1 represents two circles that are externally tangent; their centers are separated by a distance:
$d = r_1+r_2$
Figure 2 represents two circles that are internally tangent; their centers are separated by a distance:
$d = \left\vert r_1-r_2 \right\vert$
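These two distance conditions translate directly into a numerical check. Below is a minimal sketch in C (the function name, tolerance, and test circles are our own choices, not from the original page):

#include <math.h>
#include <stdio.h>

/* Classify two circles: 1 = externally tangent, -1 = internally tangent,
   0 = not tangent.  eps absorbs floating-point round-off. */
int tangency(double x1, double y1, double r1,
             double x2, double y2, double r2)
{
    const double eps = 1e-9;
    double d = hypot(x1 - x2, y1 - y2);      /* distance between centers */
    if (fabs(d - (r1 + r2)) < eps)     return  1;   /* d = r1 + r2   */
    if (fabs(d - fabs(r1 - r2)) < eps) return -1;   /* d = |r1 - r2| */
    return 0;
}

int main(void)
{
    /* Unit circles centered at (0,0) and (2,0) are externally tangent. */
    printf("%d\n", tangency(0, 0, 1, 2, 0, 1));     /* prints 1 */
    return 0;
}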
Concentric Circle Formula
The purpose of this section is again algebraic, this formula will verify two circles to be concentric. When constructing an annular Steiner chain, it is important to make sure the two
non-intersecting circles are indeed concentric. If they aren't concentric, the tangent circles within the annulus won't be symmetrical.
When two circles are concentric, the area of the annulus in between is the area of the large circle minus the area of the small circle:
$\text{Area of Annulus} = \pi(R^2 - r^2)$
Circle to Circle Inversion
The purpose of this section is to show you the algebra associated with the geometric property inversion. We have created an image that represents an inverted circle, below the figure is an algebraic
explanation. This section will help you better understand inversion and therefore will help you be able to find the inverse of circles in the future. As you already know, a Steiner chain can be
formed by taking another Steiner chain and inverting it over a circle. The figure below shows the inverse of circle $E$ with respect to circle $J$.
The figure above illustrates the following relationship:
• The point $C'$ is the inverse of the point $C$ with respect to circle $J$
• The point $B'$ is the inverse of the point $B$ with respect to circle $J$
1. We can see that $C, B, C', B'$ form a cyclic quadrilateral: all four points lie on a single circle, since the definition of inversion gives $AB \cdot AB' = AC \cdot AC'$ (equal powers of the point $A$). A basic theorem about cyclic quadrilaterals says that their opposite angles are supplementary, meaning

$\angle B'C'C + \angle B'BC = 180^\circ$

Therefore, $\angle ABC$ must equal $\angle B'C'C$, since

$\angle B'BC + \angle ABC=180^\circ$

$\angle B'BC + \angle B'C'C=180^\circ$
2. If you reorient $\triangle AC'B'$ you will see that it is a scaled version of $\triangle ABC$. The angle measurements of both triangles are the same; only the lengths of the sides differ.
$\triangle ABC\sim\triangle AC'B'$
3. Now that we know these two triangles are similar, we can solve for the length of any side of the triangle. ${CB}$ is proportional to ${B'C'}$ and ${AB}$ is proportional to ${AC'}$:

$\frac{CB}{AB} = \frac{B'C'}{AC'}$

Multiplying each side by the common denominator gives us

$CB \cdot AC' = B'C' \cdot AB$

Dividing both sides by $AB$ gives us

$B'C' = \frac{CB \cdot AC'}{AB}$

Now we know the radius of the new circle. You can find the lengths of the other segments simply by solving for them in the equation.
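In practice, inverting a whole Steiner chain comes down to inverting points one at a time: the image of a point $P$ under inversion in a circle with center $O$ and radius $k$ lies on the ray from $O$ through $P$ at distance $k^2/|OP|$ from $O$. Here is a small C sketch of that map (the function name and test point are ours, added for illustration):

#include <stdio.h>

/* Invert the point (px, py) in the circle with center (cx, cy) and
   radius k.  The image lies on the ray from the center through the
   point, at distance k*k / |OP| from the center (|OP| assumed nonzero). */
void invert_point(double cx, double cy, double k,
                  double px, double py, double *qx, double *qy)
{
    double dx = px - cx, dy = py - cy;
    double d2 = dx * dx + dy * dy;   /* |OP|^2 */
    double s = (k * k) / d2;         /* scale factor k^2 / |OP|^2 */
    *qx = cx + s * dx;
    *qy = cy + s * dy;
}

int main(void)
{
    double qx, qy;
    /* Unit circle at the origin: (2, 0) inverts to (0.5, 0). */
    invert_point(0, 0, 1, 2, 0, &qx, &qy);
    printf("(%g, %g)\n", qx, qy);
    return 0;
}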
Steiner Chain Construction via Inversion Formulas
Below are two figures of Steiner chains. The image on the left, Figure 3, represents a closed annular chain whereas the image on the right represents the Steiner chain that is obtained by inverting
Figure 3 over a circle. The new Steiner chain is no longer annular or concentric. Noticeably, Figure 3 has more than just a Steiner chain. There is also an enlarged triangle shown and I will describe
how to find the measurements of its legs. These measurements are helpful when looking at the formula which represents:
• The radii ratio for the two concentric circles of the Steiner chain
Since the radius of the large red circle is $R$ and the radius of the small blue circle is $r$ (and since both these circles are concentric) we can agree that the diameter of each tangent circle located in the annulus is

$R - r$

The radius is half the diameter, thus expressed as

$\frac{R-r}{2}$
By looking at Figure 3, we see that $AD$ is the radius of each tangent circle.
Since we know the length of $AD$, we can easily find the length of $BA$ by adding together the radius of the inner concentric circle and the radius of the tangent circle, $AD$
$BA = r + AD = r + \frac{R-r}{2} = \frac{R+r}{2}$
Knowing the lengths of $AD$ and $BA$ will help us find a trigonometric equation representing the radii ratio of the concentric circles. So, if we label the angle at $B$ in the triangle in Figure 3 as $\theta$, we will see that

$\sin\theta = \frac{AD}{BA} = \frac{(R-r)/2}{(R+r)/2} = \frac{R-r}{R+r}$
Look at the triangle in Figure 3, as you can see $\angle B$ opens directly into the yellow circle. Well, a circle is $360^\circ$ and in radians is expressed as $2\pi$. Therefore, we can say that the
measure of this angle is
$\angle B = 2\pi$
But, we need to account for all the circles, because this angle will be determined by the number of circles. So, we can write this as follows
$\angle B =\frac{2\pi}{n}$
$\sin B = \sin (\frac{2\pi}{n})$
But, we want the sine of theta, which is half the measurement of $\angle B$. To find theta, just multiply $\frac{2\pi}{n}$ by a half:

$\theta = \frac{1}{2}\cdot\frac{2\pi}{n} = \frac{\pi}{n}$

And therefore,

$\sin\theta = \sin\left(\frac{\pi}{n}\right) = \frac{R-r}{R+r}$
With the above equation we can find that the ratio of radii for the non-intersecting, concentric circles is:

$\frac{R}{r}=\frac{1+\sin\left(\frac{\pi}{n}\right)}{1-\sin\left(\frac{\pi}{n}\right)}$

To derive it, start from $\sin\left(\frac{\pi}{n}\right)=\frac{R-r}{R+r}$ and multiply both sides by $R+r$

$\sin\left(\frac{\pi}{n}\right)(R+r) = R-r$

On the left side of the equation, distribute $\sin\left(\frac{\pi}{n}\right)$

$R\sin\left(\frac{\pi}{n}\right) + r\sin\left(\frac{\pi}{n}\right) = R - r$

Collect like-variables

$r\sin\left(\frac{\pi}{n}\right) + r = R - R\sin\left(\frac{\pi}{n}\right)$

Factor out the like-variables on each side of the equation

$r\left(1+\sin\left(\frac{\pi}{n}\right)\right) = R\left(1-\sin\left(\frac{\pi}{n}\right)\right)$

Therefore, the relationship between $R$ and $r$ is

$\frac{R}{r}=\frac{1+\sin\left(\frac{\pi}{n}\right)}{1-\sin\left(\frac{\pi}{n}\right)}$ .
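As a quick numerical illustration of this ratio (our own sketch, with $n$ the number of circles in the ring):

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = acos(-1.0);

    /* R/r for a closed annular Steiner chain with n circles in the ring.
       For the three-circle construction above (n = 3), R/r is about 13.93. */
    for (int n = 3; n <= 8; n++) {
        double s = sin(pi / n);
        printf("n = %d:  R/r = %.4f\n", n, (1.0 + s) / (1.0 - s));
    }
    return 0;
}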
About the Creator of this Image
Fdecomite contributes pictures and self-created images to the website http://www.flickr.com/. He has created many interesting images of things other than Steiner's chain as well.
Future Directions for this Page
Explore the elliptical and hyperbolic properties of the Steiner chain. Also, including an applet displaying circle inversion would be extremely helpful.
Seminar on representation theory and reductive groups - Fall 2006
The ground rule of the seminar is that all talks should be comprehensible to graduate students. Not necessarily with details of proofs, but references should be adequate.
For those graduate students interested in obtaining credit for this seminar, it is Mathematics 620A, section 101.
• Meetings: Tuesdays
• 2:30 - 3:30 in Math Annex 1118
• 3:30 - Math Annex 1102
• September 19 - Bill Casselman - The L-group.
Written after my lecture: I had meant to talk about the L-group, and in fact to cover essentailly what is in the relevant section of my notes on spherical functions (mentioned later), but wound
up talking about more elementary material.
• September 26 - Hesam - Finite groups of Lie type.
He writes in e-mail, "When I first learned about linear algebraic groups one of the hard parts was groups over a field which is not alg. closed. I think a good exposition of what happens to
groups (or their root data) over finite fields is very useful in that regard. Also later if we do the same for p-adic groups it is nice to compare the results."
• October 3 - Patrick Walls on the ring of adeles
• October 10 - Julia Gordon on unramified principal series, mostly SL(2).
• October 17 - Michael LeBlanc, `The Bruhat decomposition'.
• October 24 - Hesaam - reductive groups over finite fields II
• October 31 - Patrick Walls - Adeles II
• November 7 - Me - on the construction of a group from its root datum.
By the way, "data" is plural, "datum" singular. "These data", not "this data." So is "agenda": "He has hidden agenda," not "a hidden agenda." "Agenda" is literally "things to be acted upon." But
even so, the use of "root datum" for an array of things is unfortunate.
• November 14 - Michael LeBlanc - local zeta functions. This will go over questions raised in Patrick's last talk.
• November 21 - Julia Gordon - TBA
Good for browsing to choose topics.
Structure of p-adic groups
• Robert Steinberg, Lectures on Chevalley groups. The .pdf file has been compressed with gzip, and meant to be downloaded before opening.
This classic was originally a set of lecture notes published by the mathematics department of Yale University. It is posted here with Steinberg's generous permission, but copyright remains with
him. A copy taken from here must be for personal use only.
• Ian Macdonald - Spherical functions on groups of p-adic type, Madras, 1971.
• Jacques Tits - Reductive groups over local fields, in Proceedings of Symposia in Pure Mathematics 33.
Nearly all the proceedings of the Corvallis conference are available on line at http://www.ams.org/online_bks/pspum331/
• Kenneth Brown - Buildings.
• Arjeh Cohen, Scott Murray, and Don Taylor - Computing in groups of Lie type.
• Arjeh Cohen and Scott Murray, Algorithm for Lang's theorem
Representation theory
Weyl groups and root systems
• James E. Humphreys, Reflection groups and Coxeter groups, Cambridge University Press.
• James E. Humphreys, Introduction to Lie algebras and representation theory, Springer.
• Nicholas Bourbaki, Lie groups and Lie algebras-Chapters IV, V, VI, Masson.
• H. S. M. Coxeter, Regular polytopes.
There are several editions, interestingly different. Contains valuable geometric interpretations of things others don't deal with, particularly the role of Coxeter elements.
Math Help
If $f'(x)=\cos(x^2-1)$ and $f(-1)=1.5$, then $f(5)= ?$
so far i have:
$\int_{-1}^5 \cos(x^2-1) dx= f(5)-f(-1)$
i let $u=x^2-1$
change the limits from x=5 to u=24, x=-1 to u=0
and have
$1/2 \int_0^{24} cos(u) du + f(-1)$
which gives me -1.788 but that isn't any of the answers provided!
If $f'(x)= \cos(x^2-1)$ and $f(-1)=1.5$, then $f(5)= ?$
so far i have:
$\int_{-1}^5 \cos(x^2-1) dx= f(5)-f(-1)$
i let $u=x^2-1$
$du=2x$ Mr F says: du = 2x dx => dx = du/(2x). This will lead nowhere for you because the x will cause trouble. The integral cannot in fact be found using a finite number of elementary functions.
change the limits from x=5 to u=24, x=-1 to u=0 and have
$1/2 \int_0^{24} \cos(u) du + f(-1)$ Mr F says: Wrong. What happened to the 1/x part of du ....?
which gives me -1.788 but that isn't any of the answers provided!
$\int_{-1}^5 \cos(x^2-1) \, dx = f(5) - f(-1)$
$\Rightarrow f(5) = \int_{-1}^5 \cos(x^2-1) \, dx + f(-1)$
$= \int_{-1}^5 \cos(x^2-1) \, dx + 1.5$
$= 1.5244 + 1.5$ correct to four decimal places using my TI-89
= 3.0244.
i don't know what i am doing wrong with this portion of the problem:
$\int_{-1}^5 cos(x^2-1) dx + 1.5$
i keep getting, by antideriving cos,
$\int_{-1}^5 sin(x^2-1)$
and when i evaluate that between -1 and 5
i get -.9056 which when added to 1.5 isn't correct. could you guide me as to where i went awry? thanks. =/
i don't know what i am doing wrong with this portion of the problem:
$\int_{-1}^5 cos(x^2-1) dx + 1.5$
i keep getting by antiderving cos
$\int_{-1}^5 sin(x^2-1)$
and when i evaluate that between -1 and 5
i get -.9056 which when added to 1.5 isn't correct. could you guide me as to where i went awry? thanks. =/
I have already told you in my earlier reply that $\int \cos(x^2-1) \, dx$ cannot be found in terms of a finite number of elementary functions. I don't know why you're insisting on trying to
integrate it.
$\int \cos(x^2-1) \, dx \neq \sin (x^2 - 1)$ .... if you bother to differentiate $\sin (x^2 - 1)$ (using the chain rule) you will quickly realise this.
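For anyone who wants to check the number without a TI-89: since the integrand has no elementary antiderivative, one can integrate it numerically. A throwaway composite Simpson's rule program (our addition, not part of the original thread) reproduces the value:

#include <math.h>
#include <stdio.h>

double f(double x) { return cos(x * x - 1.0); }

int main(void)
{
    /* Composite Simpson's rule on [-1, 5] with n (even) subintervals. */
    const double a = -1.0, b = 5.0;
    const int n = 10000;
    double h = (b - a) / n;
    double sum = f(a) + f(b);
    for (int i = 1; i < n; i++)
        sum += f(a + i * h) * (i % 2 ? 4.0 : 2.0);
    double integral = sum * h / 3.0;   /* about 1.5244 */
    printf("integral = %.4f, f(5) = %.4f\n", integral, integral + 1.5);
    return 0;
}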
Fix segfault on discriminated record type
This is a regression present on the mainline and 4.5 branch: the gimplifier
crashes during the placeholder substitution in maybe_with_size_expr on an
assignment statement involving a self-referential size. The root cause of
the problem is the way COND_EXPR is gimplified, namely by modifying the node
in place instead of building a new one. But this has been so since day #1 so
should probably be left as-is.
It turns out that the COND_EXPR should never have come into play here because
COND_EXPRs are meant to be factored out in self-referential size expressions;
that was the initial design of size functions. This doesn't happen any more
in some cases because skip_simple_arithmetic sometimes skips more than simple
arithmetic operations; this was changed when TREE_INVARIANT was replaced with
the tree_invariant_p predicate. That's probably a good thing in most cases
but, for size functions, we really want to get only simple operations.
So the patch introduces skip_simple_constant_arithmetic, which is equivalent
to the old skip_simple_arithmetic with s/TREE_INVARIANT/TREE_CONSTANT/ and
uses it for size functions, thus bringing us back to the original design.
Tested on i586-suse-linux, applied on the mainline and 4.5 branch as obvious.
This only affects the Ada compiler.
2010-10-20 Eric Botcazou <ebotcazou@adacore.com>
* stor-layout.c (skip_simple_constant_arithmetic): New function.
(self_referential_size): Use it instead of skip_simple_arithmetic.
2010-10-20 Eric Botcazou <ebotcazou@adacore.com>
* gnat.dg/discr25.adb: New test.
* gnat.dg/discr25_pkg.ad[sb]: New helper.
Index: stor-layout.c
--- stor-layout.c (revision 165610)
+++ stor-layout.c (working copy)
@@ -173,6 +173,32 @@ variable_size (tree size)
/* An array of functions used for self-referential size computation. */
static GTY(()) VEC (tree, gc) *size_functions;
+/* Look inside EXPR into simple arithmetic operations involving constants.
+   Return the outermost non-arithmetic or non-constant node.  */
+
+static tree
+skip_simple_constant_arithmetic (tree expr)
+{
+  while (true)
+    {
+      if (UNARY_CLASS_P (expr))
+        expr = TREE_OPERAND (expr, 0);
+      else if (BINARY_CLASS_P (expr))
+        {
+          if (TREE_CONSTANT (TREE_OPERAND (expr, 1)))
+            expr = TREE_OPERAND (expr, 0);
+          else if (TREE_CONSTANT (TREE_OPERAND (expr, 0)))
+            expr = TREE_OPERAND (expr, 1);
+          else
+            break;
+        }
+      else
+        break;
+    }
+
+  return expr;
+}
/* Similar to copy_tree_r but do not copy component references involving
PLACEHOLDER_EXPRs. These nodes are spotted in find_placeholder_in_expr
and substituted in substitute_in_expr. */
@@ -241,7 +267,7 @@ self_referential_size (tree size)
VEC(tree,gc) *args = NULL;
/* Do not factor out simple operations. */
- t = skip_simple_arithmetic (size);
+ t = skip_simple_constant_arithmetic (size);
if (TREE_CODE (t) == CALL_EXPR)
return size;
Need help with this formula -b/2a..Please read.
I am a newbie here so please be kind. I am writing a program that will tell the user the vertex of a parabola. The formula for a parabola is y=ax^2+bx+c. To find the vertex of the parabola you
use the formula -b/2a. So here is my problem: when the user enters 3 coefficients (a,b,c) such as 2 2 2 and you plug a and b into the formula you get zero, but what I'm supposed to get is -1/2 or -.5. For example, plug in the numbers -(2)/2*2 and the program spits out zero. In integer arithmetic 1 divided by 2 is zero, but when you use a calculator of course the answer is .5. How can I get around this?
Please help, this is due tomorrow night.
Try using floats instead of integers.
I tried using floats but didn't work. Here is a small part of the program. Hope you can help.
// Main
void main(void)
{
    int a, b, c;
    double x;
    double y;
    double y_intercept;
    //double x;
    //double y;
    char ans;

    puts("This project by James Volkman, analyzes");
    puts("Equations of the form y=ax^2+bx+c.");
    puts("The user will be asked to enter the coefficients a,b, and c.");
    printf("Would you like to continue? ");
    scanf(" %c", &ans);

    while (ans == 'y' || ans == 'Y')
    {
        printf("\nEnter the coefficients a,b, and c of y=ax^2+bx+c: ");
        scanf("%i%i%i", &a, &b, &c);

        if (a > 0)
        {
            printf("The equation you entered is y=%ix^2+%ix+%i\n", a, b, c);
            printf("It represents a parabola opening upward\n");
            x = (-b) / (2*a);            /* <-- Here is the problem */
            y = a*(x*x) + b*x + c;
            printf("Vertex: (%g,%g)\n", x, y);
            y_intercept = c;
            printf("Y intercept (0,%g)\n", y_intercept);
            printf("Would you like to continue? ");
        }
        else if (a < 0)
        {
            printf("The equation you entered is y=%ix^2+(%i)x+(%i)\n", a, b, c);
            printf("It represents a parabola opening downward\n");
            x = -(b)/(2*a);
            y = a*(x*x) + b*x + c;
            printf("Vertex: (%g,%g)\n", x, y);
            y_intercept = c;
            printf("Y intercept (0,%g)\n", y_intercept);
            printf("Would you like to continue? ");
        }
        else
        {
            printf("The equation you entered is y=%ix^2+%ix+%i\n", a, b, c);
            printf("It represents a line\n");
            printf("Would you like to continue? ");
        }
        scanf(" %c", &ans);
    }
}
> void main(void)
should be int main(void)
add return 0; to the end of main
> scanf("%i%i%i", &a, &b, &c);
To get input with doubles use %g or %f (float).
- Sean
If cities were built like software is built, the first woodpecker to come along would level civilization.
Black Frog Studios
You are trying to mix data types. Try casting
Hint: x = (-(double)b) / (2.0 * (double)a);
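Putting the two hints together, a minimal fixed version of the vertex computation might look like this (a sketch only; variable names follow the original post):

#include <stdio.h>

int main(void)
{
    int a, b, c;
    printf("Enter the coefficients a, b, and c of y=ax^2+bx+c: ");
    if (scanf("%i%i%i", &a, &b, &c) != 3 || a == 0)
        return 1;

    /* Cast before dividing so the division happens in floating point:
       with a = b = c = 2 this gives x = -0.5 instead of 0. */
    double x = (-(double)b) / (2.0 * (double)a);
    double y = a * x * x + b * x + c;
    printf("Vertex: (%g,%g)\n", x, y);
    return 0;
}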
Bilinear transformations
Prove that a loxodromic transformation can be expressed as a resultant of elliptic and hyperbolic transformations. Thanks and Regards, Kalyan.
Hi,

Here is a proof I came up with. Any bilinear transformation $L(\omega, z)$ with two fixed points, say $\alpha, \beta$, will be of the form

$\frac{\omega - \alpha}{\omega - \beta} = k\cdot\frac{z - \alpha}{z - \beta}$

When $k$ is neither unimodular nor $k \in R$, the transformation is not elliptic and not hyperbolic in the respective cases, hence loxodromic. Now, for all $k \in C$ we know that $k = r(\cos\theta + i\sin\theta)$, $r \in R$, and $|\cos\theta + i\sin\theta| = 1$. Therefore the transformation $L(\omega, z)$ can be expressed as $E(H(\frac{z - \alpha}{z - \beta}))$, where $E(t) = (cis\theta)t$ and $H(t) = r\cdot t$, $r \in R$, and $E(t), H(t)$ are elliptic and hyperbolic transformations respectively. The notation used for $E(t), H(t)$ may be confusing, but what I mean is it's just a matrix multiplication. Let me know if this proof is ok.

Kalyan.
How Do You Know if a Number is Divisible by 2, 3, 5, 6, or 10?
Learning about divisibility? Then you should check out this tutorial! You'll learn some neat rules for figuring out if a number is divisible by 2, 3, 5, 6, and 10. Take a look; you'll be glad you
Divisibility is an important part of math. When you're finding the factors of a number, you need to figure out what numbers your number is divisible by. Take a look at this tutorial and learn about
Learning about divisibility? Take a look at this tutorial! You'll see how to test if a number is divisible by 2, 3, 5, 6, and 10 using some cool tricks!
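As a rough sketch of how those rules turn into code (the function name and the sample value 510 are our own choices, not from the tutorial): a number is divisible by 2 if its last digit is even, by 3 if its digit sum is divisible by 3, by 5 if its last digit is 0 or 5, by 6 if it passes both the 2 and 3 tests, and by 10 if its last digit is 0.

#include <stdio.h>

/* Sum of decimal digits; n is divisible by 3 exactly when this sum is. */
int digit_sum(int n)
{
    int s = 0;
    for (n = n < 0 ? -n : n; n > 0; n /= 10)
        s += n % 10;
    return s;
}

int main(void)
{
    int n = 510;
    int last = (n < 0 ? -n : n) % 10;
    int by2 = (last % 2 == 0), by3 = (digit_sum(n) % 3 == 0);
    printf("%d divisible by 2:  %s\n", n, by2 ? "yes" : "no");
    printf("%d divisible by 3:  %s\n", n, by3 ? "yes" : "no");
    printf("%d divisible by 5:  %s\n", n, (last == 0 || last == 5) ? "yes" : "no");
    printf("%d divisible by 6:  %s\n", n, (by2 && by3) ? "yes" : "no");
    printf("%d divisible by 10: %s\n", n, (last == 0) ? "yes" : "no");
    return 0;
}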
Looking for practice finding the least common multiple (LCM)? Then be sure to check out this tutorial! Follow along with this tutorial as it goes through the process of listing multiples of given
numbers and identifying the smallest of these multiples in order to find the LCM. Take a look!
Looking for practice finding the least common multiple (LCM)? Then be sure to check out this tutorial! Follow along with this tutorial as it goes through the process of using a factor tree for each
given number in order to help find the LCM.
Present value makes the world go 'round...
... the financial world, at least.
If you were with me as we calculated the present value of future receipts, consider the fact that one can compute the present value of even an infinite stream of future receipts. As explained in
Chirelstein's appendix, at 5%, the present value of $1 a year forever is $20.
How can one calculate the value of an infinite annuity? If we do it manually, it's going take a long time. We can take the present values of $1 every future year, and add them all up. We'd start with
year 1, then year 2, year 3, etc., and keep going until we drop or until we see that it's useless to continue. Here are the first 10 years -- and let's use $100 annual receipts:
PV of $100, 1 year from now: $95.24
PV of $100, 2 years from now: $90.70
PV of $100, 3 years from now: $86.38
PV of $100, 4 years from now: $82.27
PV of $100, 5 years from now: $78.35
PV of $100, 6 years from now: $74.62
PV of $100, 7 years from now: $71.07
PV of $100, 8 years from now: $67.68
PV of $100, 9 years from now: $64.46
PV of $100, 10 years from now: $61.39
Adding those all up, we get to $772.16, with many more years to go. But as you can see, the amounts get smaller as time marches on, and that trend will continue for as long as we want to play this
game. Here's year 15:
PV of $100, 15 years from now: $48.10
Here's year 30: PV of $100, 30 years from now: $23.14
Here's year 50: PV of $100, 50 years from now: $8.72
Here's year 100: PV of $100, 100 years from now: $0.76
Eventually, the present values get so small that you can hardly see them, and as it turns out, they eventually disappear from the naked eye. For example, the present value of a $100 payment to come
in 250 years from now is $0.0005.
In the end, the present value of the infinite stream is $2,000. In other words, if one deposits $2,000 in a 5% account, one will get $100 a year to spend, every year, forever, leaving the original
$2,000 in place to keep going.
When the annual payment is $100 and the present value is $2,000, it is sometimes said that the "multiplier" is 20. That is, the present value is 20 times the annual payment. The multiplier is just
the inverse of the discount rate. 1 divided by .05 = 20.
Now here's where Wall Street comes into the picture. When discussing the price of a stock that's traded on a national exchange, people often refer to the company's "price-earnings ratio." And that's
just another name for the multiplier.
For example, take a look at this entry on Yahoo! Finance for Dow Chemical:
Over on the right, do you see the line "P/E (ttm)"? That stands for price/earnings (trailing 12 months). What Yahoo! has done is compare the share price of the stock (P), $27.07, with the earnings
per share of the company for the most recent 12 months (E), $2.18. In this case, the stock price is 12.41 times annual earnings ($27.07/$2.18), and so the multiplier is 12.41.
Given that multiplier, the discount rate that the market is apparently applying to that stock -- the inverse of the multiplier -- is 1 divided by 12.41, or 8.06%. The P/E ratio is a rough way of
saying that the discount rate that investors are using in pricing that stock is 8.06% a year.
The P/E ratio shown there is crude in one respect -- it looks backward at the last 12 months of earnings, which is public knowledge because of the company's regular filings with the SEC. In fact, a
smart investor is looking to the future, not the past. He or she is trying to predict, and then present-value, future receipts. The past may be some indicator of future earnings, but there are
certainly no guarantees. And so it's not entirely certain what discount rate the market is actually using. But P/E based on the last 12 months tells us something.
Most importantly, the fact that the P/E ratio is calculated and published for every public company, in real time, shows that investors are keenly interested in present value. Present value is what
that ratio is all about.
8. H_0 t_0

One of the most powerful tests for a non-zero cosmological constant is provided by a comparison of the expansion and oldest-star ages. To quote Carroll, Press and Turner (1990), ``A high value of H_0 (> 80 km/s/Mpc, say), combined with no loss of confidence in a value 12-14 Gyr as a minimum age for some globular clusters, would effectively prove the existence of a significant Λ term. Given such observational results, we know of no convincing alternative hypotheses.''

In Figure 3, the dimensionless product H_0 t_0 is plotted as a function of Ω, both for an open Universe with Ω_Λ = 0 and for a flat Universe with Ω_Λ + Ω_m = 1. Suppose that both H_0 and t_0 are known to ± 10% (1-σ, including systematic errors). The dashed and dot-dashed lines indicate the 1-σ limits for H_0 = 70 km/sec/Mpc and t_0 = 15 Gyr. Since the two quantities H_0 and t_0 are completely independent, the two errors have been added in quadrature, yielding a total uncertainty on the product H_0 t_0 of ± 14% rms. These values of H_0 and t_0 are consistent with a Universe where Ω_Λ = 0.8, Ω_m = 0.2. The Einstein-de Sitter model (Ω_m = 1, Ω_Λ = 0) is excluded (at 2.5 σ).

Despite the enormous progress recently in the measurements of H_0 and t_0, Figure 3 demonstrates that significant further improvements are still needed. First, in the opinion of this author, total (including both statistical and systematic) uncertainties of ± 10% have yet to be achieved for either H_0 or t_0. Second, assuming that such accuracies will be forthcoming in the near future for H_0 (as the Key Project, supernova programs and other surveys near completion), and for t_0 (as HIPPARCOS provides an improved calibration both for RR Lyraes and subdwarfs), it is clear from this figure that if H_0 is as high as 70 km/sec/Mpc, then accuracies of significantly better than ± 10% will be required to rule in or out a non-zero value for Λ. (Of course, if H_0 were larger (or smaller), this discrimination would be simplified!)
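For reference, the Einstein-de Sitter value H_0 t_0 = 2/3 quoted in the figure caption follows in one line from a(t) ∝ t^{2/3}; this short derivation is our addition, not part of the original article:

$$H(a) = H_0\,a^{-3/2}\ \ (\Omega_m = 1,\ \Omega_\Lambda = 0) \quad\Longrightarrow\quad t_0 = \int_0^1 \frac{da}{a\,H(a)} = \frac{1}{H_0}\int_0^1 a^{1/2}\,da = \frac{2}{3H_0}, \quad\text{i.e.}\quad H_0 t_0 = \frac{2}{3}.$$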
Figure 3. The product of H_0 t_0 as a function of Ω. The dashed curve represents a flat Universe with Ω_Λ + Ω_m = 1; the abscissa in this case corresponds to Ω_Λ. The solid curve represents a Universe with Ω_Λ = 0; in this case, the abscissa should be read as Ω_m. The dashed and dot-dashed lines indicate the 1-σ limits for H_0 = 70 km/sec/Mpc and t_0 = 15 Gyr in the case where both quantities are known to ± 10% (1-σ). Marked for reference are the values H_0 t_0 = 2/3 and Ω_m = 1 (i.e., those predicted by the standard Einstein-de Sitter model). Also shown for comparison is a solid line for the case H_0 = 50 km/sec/Mpc, t_0 = 15 Gyr.
Statistics Help Please....
Hello Everyone! Can anyone help me with this Statistics problem???
1. Calculate p̂1 from the following data.
x1 20
n1 50
x2 55
n2 100
Re: Statistics Help Please....
Please do not double post. See the original thread here.
Re: Statistics Help Please....
Hello Dan,
Take a look at the problem...It's not a double post. One problem has a p-hat symbol & the other has a p-value symbol. Are you familiar with both cause the p-value method is the one I'm having
trouble with...
Re: Statistics Help Please....
I'll reverse the infraction, but all I see is a "p" with a square after it.
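For reference, the p̂ in the question (the "p with a square" is a p-hat that failed to render) denotes the sample proportion, the number of successes divided by the sample size: p̂1 = x1/n1 = 20/50 = 0.40, and likewise p̂2 = x2/n2 = 55/100 = 0.55.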
NEWTON, Ask a Scientist at Argonne National Labs
Balloons and Pressure
Name: Rick
Status: educator
Grade: 9-12
Location: NY
Country: USA
Date: Summer 2012
Question: If I have an elastic balloon (or bladder) that contains 300 psi gage pressure of a compressible gas and a subject the outside surface of that balloon to 25 psi gage pressure
of a non-compressible liquid, what happens to the pressure inside the balloon (or bladder), assuming the temperature remains constant? Or another way to put the question would be, does
it take more that 300 psi pressure of the liquid to affect the pressure inside the bladder?
Replies:
Hi Rick -- the pressure on the inside of the balloon must equal the pressure exerted on it. The balloon exists in equilibrium -- it expands or contracts until the pressures
balance out. So, *any* change in the external pressure would have some effect on the balloon (even if very small). If you were to take the balloon to the bottom of a swimming pool, it
would shrink a little (because the pressure at the bottom of the pool is higher). The pressure exerted on it is not just the external fluid pressure, though; it is the sum of the
pressure from the external fluid plus the force due to the elasticity of the balloon. So, the answer depends on the nature of the balloon. If the balloon were very "stiff", then the
pressure in the balloon would increase slightly, but less than the 25psi difference exerted externally. If the balloon were very stretchy, then the internal pressure change would be
closer to the 25psi. The "incompressible" designation you placed on the external fluid is irrelevant because you have specified the pressure, not the system volume.
Hope this helps, Burr Zimmerman
This is an application of the Ideal Gas Law,
PV=NrT or (Pressure)(Volume)=(molar number of gas)(Ideal Gas Constant)(Temperature)
You can assume that the gas volume within the balloon follows this law. Since the volume of the gas is constant, and since you stated that the temperature is constant, then the right
hand side will stay constant for the volume regardless of whether it compresses or not. Call this constant K. The measured pressure (without the additional liquid) is 300 psi; so given
the volume of the balloon, you can calculate what the constant will be. Now, the tricky part is that the balloon is stretched and pulling in on the gas. It is stretched to the point
where the inward force at its surface exactly balances the measured pressure. If this inward force changes, as from an externally-applied force from the incompressible liquid (25 psi),
then the balloon will change its shape to where again the inward force from inner surface of the balloon exactly matches the outward force of the enclosed gas. The balloon will not
have to apply as much force as before, and it will simply shrink as a result. The volume of the balloon will thus decrease, but the gas inside will still satisfy the equation PV=K.
Thus, the gas pressure must increase. The difficulty is determining by how much. If we assume that the pressure created by the balloon depends on its stretched volume we can write

Pb = F(V)

where Pb is the balloon pressure as a function (F) of its volume (V). This pressure must exactly balance the pressure of the enclosed volume, along with that of the applied pressure
(25 psi):
P=F(V) + 25 (1)
With the ideal gas law, we have
PV=K (2)
Thus, we have two equations (1, 2) and two unknowns (P,V). Unfortunately, at this point we need to know F(V), i.e., how much force/area (or pressure) the balloon creates at different
volumes. The equation will depend on the particular balloon material and its elasticity. If you are interested in pursuing the solution further, you can apply a generalized version of
Hooke’s law (treating the balloon material like a stretched spring). Unfortunately, it is not a trivial problem. One further element to consider is the effects of the atmospheric
pressure of the background. Including this pressure will affect the final pressure (because F(V) is nonlinear), but it doesn’t change the fundamental reasoning.
Kyle Bunch, PhD, PE
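To make the two-equation system above concrete, here is an illustrative C sketch. The linear stiffness law F(V) = s(V - V0) and all the constants are invented for demonstration and describe no real bladder:

#include <stdio.h>

/* Hypothetical elastic law: bladder pressure grows linearly with volume. */
static double F(double V)
{
    const double s = 600.0, V0 = 0.5;   /* made-up stiffness constants */
    return s * (V - V0);
}

int main(void)
{
    /* Initial state chosen so F(1) = 300 psi at V = 1, hence K = PV = 300. */
    const double K = 300.0, Pext = 25.0;

    /* Solve K/V = F(V) + Pext by bisection: the left side falls with V,
       the right side rises, so there is exactly one crossing. */
    double lo = 1e-6, hi = 10.0;
    for (int i = 0; i < 100; i++) {
        double mid = 0.5 * (lo + hi);
        if (K / mid > F(mid) + Pext) lo = mid; else hi = mid;
    }
    double V = 0.5 * (lo + hi);

    /* Prints roughly V = 0.97, P = 308.5: the pressure rises, but by
       less than the 25 psi applied outside, as the first reply argued. */
    printf("V = %.4f, internal pressure = %.2f psi\n", V, K / V);
    return 0;
}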
The volume of the balloon would expand until the pressure inside the balloon (300 psi) (P1) matched the pressure outside of the balloon (25 psi) (P2). If the P2 is increased to 325
psi, the volume of the balloon would decrease until P1 was 325 psi. From the following URL:
P1 = 300 psi P2 = 25 psi
For Isothermal process, Known Ratio of P2/P1
V2 = V1/(P2/P1) = V1/(25/300) = V1/(1/12), so V2 = 12 V1. The balloon will expand its volume by twelve times to reach 25 psi.
Sincere regards, Mike Stewart
Hi Rick,
First thing one must do to answer this is to delete all information that is irrelevant to the question, and restate it in a simpler way. In this case, the only thing that matters
regarding the outside of the bladder is the pressure. What fluid (compressible or not, liquid or gas) is irrelevant!
You start with a very heavy but flexible bladder that can withstand high pressure, then fill it with a gas (obviously a "compressible gas" since all gases are compressible) to a
pressure of 300 psig.
At this point we can summarize that there is 300 psig inside the bladder and 0 psig outside.
Now, we increase the pressure that acts on the outside of the bladder to 25 psig.
Quite clearly, the pressure inside must increase, since the increased outside pressure tends to squeeze the bladder and its contents, resulting in a smaller bladder size. Squeezing the
bladder and its contents to a smaller volume, quite clearly compresses its contents and thus increases the pressure inside the bladder.
Regards, Bob Wilson
There are some issues you have not addressed. Or put another way, it does not matter that the liquid is non-compressible if it is in a container that has moveable walls. In that case
the gas @ 300 psi would push against the non-compressible liquid until the gas pressure and the liquid pressure attained equal pressures (also assuming no solubility of the gas in the
liquid). If by non-compressible you mean that the liquid is immovable, because it is in a rigid solid container, then there is no difference whether the liquid is present or not
present, the gas would expand to fill the volume contained in the rigid container. If the non-compressible liquid is in the rigid container, then the volume available to the gas is the
volume of the rigid container minus the volume of the non-compressible liquid.
Vince Calder
NEWTON is an electronic community for Science, Math, and Computer Science K-12 Educators, sponsored and operated by Argonne National Laboratory Educational Programs, Andrew Skipor, Ph.D., Head of Educational Programs.
For assistance with NEWTON contact a System Operator (help@newton.dep.anl.gov), or at Argonne's Educational Programs.
NEWTON AND ASK A SCIENTIST
Educational Programs, Building 223, 9700 S. Cass Ave., Argonne, Illinois 60439-4845, USA
Update: November 2011 | {"url":"http://www.newton.dep.anl.gov/askasci/eng99/eng99725.htm","timestamp":"2014-04-17T03:50:44Z","content_type":null,"content_length":"16277","record_id":"<urn:uuid:bb05a7f2-799f-43c4-9a64-69862e557abd>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
Daily Peloton - Pro Cycling News
Today, with the specter of Jean Marie Leblanc's controversial decision about the Tour de France wildcards hanging in the air (see our article here), the riders set out on a 160km race that is largely
flat in the last half.
The early climbs end after 84km, and the rest of the stage is pan flat, which should lead to another bunch sprint. Enjoy hearing about the Domina Vacanze-Elitron train in action now, 'cause you won't
see them on the big stage in July.
Garzelli is still in the pink by 31" ahead of Simoni; both took 10" on their rivals yesterday as the field split towards the end. Noe', Tonkov, Rumsas, and the rest missed the split and gave up more
time to the top two men.
6:35 am PST. 63km left. Sandy Casar (FDJeux.com) takes the intergiro sprint from a small group of six breakaways. They have a 1' 36" lead. Gianni Faresin (Gerolsteiner), Ruggero Marzoli (Alessio), and
Thomas Brozyna (CCC-Polsat) are also in the break.
Mario Cipollini (Domina Vacanze-Elitron) is in the pack. The World Champion had some dirty words this morning for Jean Marie Leblanc. One of his teammates is on the front leading the chase, sharing
work with men from Caldirola, and Lotto are also helping.
Fabio Sacchi (Saeco) and Marzio Bruseghin (Fassa Bortolo) are also in the breakaway. Bruseghin's presence will give the other Fassa boys a free ride, forcing Lotto and Domina to do all the work of
chasing. In the feed zone, Aart Vierhouten (Lotto-Domo) pulls out of the race.
In the pack, Dario Pieri (Saeco) clowns a bit for the cameras with Marco Pantani (Mercatone Uno). Pieri raises his fist above Pantani's head and bonks his helmet a few times to show that Pantani is
wearing his helmet again, and hasn't taken it off prematurely like he did a couple of days ago.
The sight of Pantani's bald pate covered by a helmet is also probably a source of Pieri's amusement...it covers a distinctive part of Pantani's trademark "look."
Cipollini gives a show for the ladies (and some of the gents) in the back of the pack...he takes off his jersey and an undershirt, riding bare-chested next to his teamcar for a moment. He
hands the undershirt to his car and flexes and swings his arms a bit. Fans around the world swoon at Mario's fabulous pecs.
Finally, Cipollini zips up his World Champion's Jersey and rejoins the peloton. He has a teammate with him to help him bridge back up. Show's over for now!
The peloton is strung out a bit, but the pressure is clearly not on. Behind the peloton, Fabio Baldato (Alessio) gives an impressive display of bike handling by taking off his right shoe on the fly
and getting his team car to adjust something.
6:58 am PST. The leading group of six is 1' 43" ahead of the peloton. The peloton is keeping the break in range, but not chasing them back yet. They'll probably let them fry off the front until
closer to the finish.
The six leaders are working hard, taking turns as they fly along pan flat roads. The surroundings are beautiful, but the action is not exactly scintillating in the race right now. The break has no
realistic chance, and the excitement will come as the sprinters' teams start to up the pace and set up their men for the finale.
There are hordes of fans along the roads. The peloton is more strung out now, and it looks like the pace could be increasing.
7:09 am PST. 37km to go. The breakaway has gained a few seconds on the peloton, and now have a 1' 50" advantage. Lotto-Domo and Fassa Bortolo are selected for the Tour de France, but Domina
Vacanze-Elitron will be hot on getting a win for Cipo to break Binda's record and make a statement to JML and the Tour selection committee.
There is still a chance for big Jan Ullrich to go to the Tour, but the big question is, with whom? Will the newly forming Bianchi squad get an invite for big Jan, or will he have to jump to another team?
7:17 am PST. 30km left. The break has an advantage of 56", so the peloton has started the chaseback. The six men off the front are still working hard, but their minutes in the lead are numbered.
7:25 am PST. 25 km left. The peloton is really flying with about ten men taking turns on the front, representing a number of different teams. The six men off the front have sat up, with the peloton
really bearing down on them now.
There is a circuit in the middle of Montecatini that is a bit over 6km; that's where the fireworks should really start. With such flat roads today, a breakaway really was a suicidal waste of energy.
There are some amusing signs in Italian along the sides of the road directing obscene Italian words at Jean Marie Leblanc. While Leblanc says that Cipollini won't finish the race if invited, the big
Italian has shown in recent years that he is both willing and able to make it over the climbs and compete in the sprints late in the race.
Paolo Lanfranchi (Ceramiche Panaria) has to drop back to get a flat taken care of. Meanwhile, Luca De Angeli (Columbia-Selle Italia) puts in a short-lived, hare-brained attack off the front. He had
his head down and was pushing hard, but the boys from Alessio, Domina Vacanze-Elitron, and Lotto-Domo were all over him in short order.
7:37 am PST. 15km left. The riders are flying along at about 60kph right now. There is a Kelme man on the front, clearly working for their fine sprinter Isaac Galvez Lopez. Some Alessio men are
there, indicating that Angelo Furlan is feeling good.
Now Garzelli's Caldirola boys are pulling the Maglia Rosa to the front of the race. Undoubtedly, they are trying to keep him out of trouble and maybe help him snatch a few more seconds from his
inattentive rivals yet again today.
While there is much angst about the exclusion of Cipo's Domina squad from the Giro, there must certainly be a lot of satisfaction amongst the riders that there have been no doping scandals yet this
year (knock on wood). The lone raid on the Formaggi Pinzolo squad turned up nothing, and nobody has tested positive for a banned substance yet.
Isaac Galvez Lopez has crashed in a corner! He and some of his Kelme boys are chasing the peloton through the finishing circuit!
Elio Aggiano (Formaggi Pinzolo Fiave) has attacked in the confusion and gotten a gap. Saeco is working to keep Simoni out of trouble, and a Domina rider is helping out. One of Cipo's leadout men went
down in the crash too. Hope this doesn't hurt the organization of the Domina train.
7:47 am PST. 7 km left. The riders wind through the sharp right hander that heads slightly uphill to the finish line. The bell rings indicating that there is one lap left to go. Kurt Asle Arvesen
(Team Fakta-Pata Chips) shows his Norwegian Champion's Jersey in an attack that is short-lived. There are three zebra jerseys on the front of the race, and Mario is fourth wheel right now.
Now Denis Lunghi (Alessio) has a go, with the somewhat shortened zebra train in pursuit. He gets absorbed almost immediately. The peloton is strung out. There will likely be more splits in the group,
and inattentive GC men will lose more time tonight.
The sprinters are banging shoulders for position. Petacchi throws a punch and misses, and the rider throws a punch back and hits Petacchi! It's CCC-Polsat's Nauduzs! Furlan and McEwen are now
going at it!! They're literally fighting for Cipo's wheel!!
Now Carlos Dacruz of FDJeux.com is having a go off the front. They are in the final kilometer!!
Dacruz is caught, and all the sprinters are still banging for position behind the Domina zebra train.
Cipo is third wheel around the tight corner! And Nauduzs crashes in the corner!! It's only McEwen and Cipo going at it to the line! IT'S CIPO!! He breaks Binda's record in a crazy, wild, dirty,
fighting, swinging, crashing finale!!
Only about eight riders made it through the last corner clean! The Domina leadout was FLYING, and Nauduzs, who moments before had been busy punching Petacchi, lost it as his back wheel slid out from
under him. He didn't take anyone else down with him that I saw, but he caused a road jam at that corner. I wonder if any GC men will have lost time to rivals because of that crash?
It looks like Cipo, then McEwen second, Petacchi third and Svorada fourth, Bennati fifth and Lombardi sixth. Eisel 7th, Pieri 8th. Garzelli was involved in Nauduzs's crash.
It was Cipo heading down the middle, McEwen coming around the right, and Petacchi sitting up behind shaking his head at the finish. Cipo had it measured perfectly, taking the sprint by a wheel from
the rapidly advancing McEwen. It really was a two-up sprint at the finish. It will be interesting to see if the judges dish out any fines or DQs for all that fighting near the finish.
None of the fighting involved Cipo, whose train was leading the pack. The fighting really was for Cipo's wheel, with shoulders and shouts and punches exchanged between the men trying to follow the
World Champion.
And of course, Cipollini breaks Binda's record of 41 stage wins, taking his 42nd win in the Giro!!
On reviewing the replays, most of the commotion behind Cipollini was simply hardcore but standard banging of shoulders and jostling of elbows between the sprinters. Petacchi didn't look like he
really swung at Nauduzs, he just shouted and gesticulated violently with his hands. But Nauduzs then came up around him and really sucker-punched Petacchi in the head. That should get him disqualified.
McEwen gave Furlan an elbow in the ribs, but Furlan was squeezing him too tight and threatening to crash them both, so I doubt anyone will get a penalty but Nauduzs, and perhaps (though I doubt it) Petacchi.
And of course, above it all, focusing on yet another perfect leadout by his teammates, was Mario "SuperFabio" Cipollini. He had history clearly in his sights, and a big "up yours" to Leblanc in the
back of his mind, and he nailed his second consecutive stage win. Nobody can take away that win, and nobody can take away his record of 42 stage victories!
Still no word on whether there will be any disqualifications on the stage, so until that point, here are the top ten stage results:
1 CIPOLLINI Mario ITA DVE 3:41:58 /20" 43,249 Km/h
2 MC EWEN Robbie AUS LOT s.t. /12"
3 PETACCHI Alessandro ITA FAS s.t. /8"
4 SVORADA Jan CZE LAM s.t.
5 BENNATI Daniele ITA DVE s.t.
6 LOMBARDI Giovanni ITA DVE s.t.
7 EISEL Bernhard AUT FDJ 0:03
8 PIERI Dario ITA SAE s.t.
9 RIEBENBAUER Werner AUT FAK s.t.
10 BAK Lars Ytting DEN FAK 0:05
General Classification
1 GARZELLI Stefano ITA VIN 40:51:16
2 SIMONI Gilberto ITA SAE 0:40
3 NOE' Andrea ITA ALS 0:54
4 PELLIZOTTI Franco ITA ALS 1:36
5 SABALIAUSKAS Marius LTU SAE 1:38
6 TONKOV Pavel RUS CCC 1:47
7 POPOVYCH Yaroslav UKR LAN 1:53
8 RUMSAS Raimondas LTU LAM 2:04
9 TOTSCHNIG Georg AUT GST 2:26
10 MAZZOLENI Eddy ITA VIN 3:12
Note: Because the crash happened in the final kilometer, none of the GC men lost any time today.
We have word that the judges have disqualified Andris Nauduzs (CCC-Polsat) from the race. Alessandro Petacchi has been fined but allowed to continue.
In the meantime, thanks for joining us today for our live coverage. See you tomorrow for Stage 10!
Stage 9 Results
1 Cipollini Mario Ita Dve 3:41:58 /20" 43,249 Km/H
2 Mc Ewen Robbie Aus Lot s.t. /12"
3 Petacchi Alessandro Ita Fas s.t. /8"
4 Svorada Jan Cze Lam s.t.
5 Bennati Daniele Ita Dve s.t.
6 Lombardi Giovanni Ita Dve s.t.
7 Eisel Bernhard Aut Fdj s.t.
8 Pieri Dario Ita Sae s.t.
9 Riebenbauer Werner Aut Fak s.t.
10 Bak Lars Ytting Den Fak s.t.
11 Chmielewski Piotr Pol Ccc s.t.
12 Trampusch Gerhard Aut Gst s.t.
13 Furlan Angelo Ita Als s.t.
14 Garcia Quesada Carlos Esp Kel s.t.
15 D'amore Crescenzo Ita Ten s.t.
16 Popovych Yaroslav Ukr Lan s.t.
17 Pietropolli Daniele Ita Ten s.t.
18 Gonzalez Martinez Fredy Col Clm s.t.
19 Velo Marco Ita Fas s.t.
20 Tonkov Pavel Rus Ccc s.t.
21 Gasparre Graziano Ita Dnc s.t.
22 Noe' Andrea Ita Als s.t.
23 Gonzalez Jimenez Aitor Esp Fas s.t.
24 Casagrande Francesco Ita Lam s.t.
25 Caucchioli Pietro Ita Als s.t.
26 Aggiano Elio Ita Fpf s.t.
27 Cioni Dario David Ita Fas s.t.
28 Derepas David Fra Fdj s.t.
29 Frigo Dario Ita Fas s.t.
30 Mazzoleni Eddy Ita Vin s.t.
31 Hamburger Bo Den Fpf s.t.
32 Rumsas Raimondas Ltu Lam s.t.
33 Garzelli Stefano Ita Vin s.t.
34 Tiralongo Paolo Ita Pan s.t.
35 Zanetti Mauro Ita Ten s.t.
36 Garcia Quesada Adolfo Esp Kel s.t.
37 Sunderland Scott Aus Fak s.t.
38 Totschnig Georg Aut Gst s.t.
39 Baranowski Dariusz Pol Ccc s.t.
40 Sabaliauskas Marius Ltu Sae s.t.
41 Pellizotti Franco Ita Als s.t.
42 Weigold Steffen Ger Gst s.t.
43 Scarponi Michele Ita Dve s.t.
44 Mason Oscar Ita Vin s.t.
45 Pantani Marco Ita Mer s.t.
46 Codol Massimo Ita Mer s.t.
47 Bondariew Bogdan Ukr Ccc s.t.
48 Manzoni Mario Ita Mer s.t.
49 Castelblanco Joaquim Col Clm s.t.
50 Figueras Giuliano Ita Pan s.t.
51 Backstedt Magnus Swe Fak s.t.
52 Gasperoni Cristian Ita Mer s.t.
53 Szmyd Sylvester Pol Mer s.t.
54 Arvesen Kurt Asle Nor Fak s.t.
55 Baliani Fortunato Ita Fpf s.t.
56 Simoni Gilberto Ita Sae s.t.
57 Bernucci Lorenzo Ita Lan s.t.
58 Perez Cuapio Julio A. Mex Pan s.t.
59 Lelekin Sergei Rus Ten s.t.
60 Moerenhout Koos Ned Lot s.t.
61 Bertagnolli Leonardo Ita Sae s.t.
62 Adyeyev Sergiy Ukr Lan s.t.
63 Davis Scott Aus Pan s.t.
64 Belli Wladimir Ita Lam s.t.
65 Stremersch Tom Bel Lan s.t.
66 Faresin Gianni Ita Gst s.t. /2"
67 Casar Sandy Fra Fdj s.t. /6"
68 Galvez Lopez Isaac Esp Kel s.t.
69 Tosatto Matteo Ita Fas s.t.
70 Gutierrez Cataluna Ignacio Esp Kel s.t.
71 Nocentini Rinaldo Ita Fpf s.t.
72 Mazzanti Luca Ita Pan s.t.
73 Zaballa Gutierez Constan Esp Kel s.t.
74 Lanfranchi Paolo Ita Pan s.t.
75 Scirea Mario Ita Dve s.t.
76 Bruseghin Marzio Ita Fas s.t.
77 Conti Roberto Ita Mer s.t.
78 Laverde Jimenez Luis F. Col Fpf s.t.
79 Vila Errandonea Franc Esp Lam s.t.
80 Marin Ruber Alverio Col Clm s.t.
81 Winn Julian Gbr Fak s.t.
82 Dacruz Carlos Fra Fdj s.t.
83 Spezialetti Alessandro Ita Sae s.t.
84 Honchar Serhiy Ukr Dnc s.t.
85 Wegelius Charles Gbr Dnc s.t.
86 Sacchi Fabio Ita Sae s.t.
87 Clavero Daniel Esp Mer s.t.
88 Fritsch Nicolas Fra Fdj s.t.
89 Munoz Hernan D. Col Clm s.t.
90 Romanik Radoslaw Pol Ccc s.t.
91 Joergensen Rene' Den Fak s.t.
92 Petersen Jorgen Bo Den Fak s.t.
93 Kirchen Kim Lux Fas s.t.
94 Brozyna Thomas Pol Ccc s.t.
95 Fornaciari Paolo Ita Sae s.t.
96 Zanotti Leonardo Ita Dnc s.t.
97 Hardter Uwe Ger Gst s.t.
98 Fontanelli Fabiano Ita Mer s.t.
99 Moreni Cristian Ita Als s.t.
100 Usano Martinez Julian Esp Kel s.t.
101 Rodriguez Alexis Esp Kel s.t.
102 Conte Biagio Ita Fpf s.t.
103 Miholievic Vladimir Cro Als s.t. /4"
104 Kohut Seweryn Pol Ccc s.t.
105 Carrara Matteo Ita Dnc s.t.
106 Frattini Cristiano Ita Ten s.t.
107 Gates Nick Aus Lot s.t.
108 Nauduzs Andris Lat Ccc s.t.
109 Riera Valls Jordi Esp Kel s.t.
110 Bertoletti Simone Ita Lam s.t.
111 Muraglia Giuseppe Ita Fpf s.t.
112 Brown Graeme Allen Aus Pan s.t.
113 Mesa Mesa Hector O. Col Fpf s.t.
114 Tonti Andrea Ita Sae s.t.
115 Scholz Ronny Ger Gst s.t.
116 Gobbi Michele Ita Dnc s.t.
117 Strauss Marcel Sui Gst s.t.
118 Cunego Damiano Ita Sae s.t.
119 Khalilov Mykhaylo Ukr Clm s.t.
120 De Angeli Luca Ita Clm s.t.
121 Van Impe Kevin Bel Lot s.t.
122 Barbero Sergio Ita Lam s.t.
123 Illiano Raffaele Ita Clm s.t.
124 Quinziato Manuel Ita Lam s.t.
125 Garcia John Freddy Col Clm s.t.
126 Palumbo Giuseppe Ita Dnc s.t.
127 Giordani Leonardo Ita Dnc s.t.
128 Lunghi Denis Ita Als s.t.
129 Balducci Gabriele Ita Vin s.t.
130 Baldato Fabio Ita Als s.t.
131 Colombo Gabriele Ita Dve s.t.
132 Duma Vladimir Ukr Lan s.t.
133 Massi Rodolfo Ita Clm 0:58
134 Bileka Volodimir Ukr Lan 1:24
135 Gryschenko Ruslan Ukr Lan 1:24
136 Apollonio Massimo Ita Vin 1:29
137 Andriotto Dario Ita Vin 1:35
138 Trenti Guido Usa Fas 1:42
139 Ongarato Alberto Ita Dve 2:04
140 Di Biase Moreno Ita Fpf 2:21
141 Lancaster Brett Aus Pan 2:33
142 Verstrepen Johan Bel Lan s.t.
143 Pozzi Oscar Ita Ten s.t.
144 Secchiari Francesco Ita Dve s.t.
145 Zampieri Steve Sui Vin s.t.
146 Cheula Gian Paolo Ita Vin s.t.
147 Tonetti Gianluca Ita Ten s.t.
148 Scamardella Salvatore Ita Lan s.t.
149 Aug Andrus Est Dnc s.t.
150 Forster Robert Ger Gst s.t.
151 Hoj Frank Den Fak s.t.
152 Wiggins Bradley Gbr Fdj s.t.
153 Marini Mirko Ita Ten s.t.
154 Steegmans Gert Bel Lot s.t.
155 Gerosa Mauro Ita Vin s.t.
156 Ravaioli Ivan Ita Mer 3:25
157 Lhuillier Regis Fra Fdj s.t.
158 Verbrugghe Ief Bel Lot s.t.
159 Hvastija Martin Slo Ten s.t.
160 Guesdon Frederic Fra Fdj s.t.
161 Marichal Thierry Bel Lot s.t.
162 Mondini Gianpaolo Ita Dve s.t.
163 Contrini Daniele Ita Gst s.t.
164 Casper Jimmy Fra Fdj 5:27
General Classification after Stage 9 (Maglia Rosa)
1 Garzelli Stefano Ita Vin 40:51:08
2 Simoni Gilberto Ita Sae 0:31
3 Noe' Andrea Ita Als 0:54
4 Pellizotti Franco Ita Als 1:36
5 Sabaliauskas Marius Ltu Sae 1:38
6 Tonkov Pavel Rus Ccc 1:50
7 Popovych Yaroslav Ukr Lan 1:56
8 Rumsas Raimondas Ltu Lam 2:04
9 Totschnig Georg Aut Gst 2:26
10 Mazzoleni Eddy Ita Vin 3:12
11 Casagrande Francesco Ita Lam 3:21
12 Perez Cuapio Julio A. Mex Pan 3:32
13 Scarponi Michele Ita Dve 3:43
14 Velo Marco Ita Fas 3:59
15 Figueras Giuliano Ita Pan 4:10
16 Bertagnolli Leonardo Ita Sae 4:19
17 Belli Wladimir Ita Lam 4:21
18 Baranowski Dariusz Pol Ccc 4:46
19 Pantani Marco Ita Mer 4:51
20 Honchar Serhiy Ukr Dnc 4:59
21 Garcia Quesada Carlos Esp Kel 5:14
22 Caucchioli Pietro Ita Als 5:16
23 Codol Massimo Ita Mer 5:19
24 Wegelius Charles Gbr Dnc 5:33
25 Faresin Gianni Ita Gst 5:41
103 Cipollini Mario Ita Dve 31:35
Points Classification after Stage 9 (Maglia Ciclamino)
1 Petacchi Alessandro Ita Fas 138
2 Cipollini Mario Ita Dve 108
3 Mc Ewen Robbie Aus Lot 89
4 Garzelli Stefano Ita Vin 71
5 Svorada Jan Cze Lam 63
6 Eisel Bernhard Aut Fdj 60
7 Galvez Lopez Isaac Esp Kel 56
8 Backstedt Magnus Swe Fak 52
9 Simoni Gilberto Ita Sae 34
10 Gasparre Graziano Ita Dnc 34
11 Lombardi Giovanni Ita Dve 33
12 Casagrande Francesco Ita Lam 32
13 Colombo Gabriele Ita Dve 32
14 Furlan Angelo Ita Als 30
15 Baldato Fabio Ita Als 29
16 Nauduzs Andris Lat Ccc 28
17 Brown Graeme Allen Aus Pan 28
18 Di Biase Moreno Ita Fpf 27
19 Pellizotti Franco Ita Als 26
20 Duma Vladimir Ukr Lan 24
21 Pieri Dario Ita Sae 24
22 Noe' Andrea Ita Als 23
23 Casper Jimmy Fra Fdj 21
24 Figueras Giuliano Ita Pan 18
25 Moreni Cristian Ita Als 18
KOM after Stage 9 (Maglia Verde)
1 Gonzalez Martinez Fredy Col Clm 20
2 Zaballa Gutierez Constan Esp Kel 19
3 Garzelli Stefano Ita Vin 16
4 Simoni Gilberto Ita Sae 10
5 Noe' Andrea Ita Als 6
6 Gryschenko Ruslan Ukr Lan 6
7 Moreni Cristian Ita Als 5
8 Tonkov Pavel Rus Ccc 4
9 Garcia Quesada Carlos Esp Kel 3
10 Laverde Jimenez Luis F. Col Fpf 3
11 Mazzoleni Eddy Ita Vin 2
12 Casagrande Francesco Ita Lam 2
13 Pozzi Oscar Ita Ten 2
14 Strauss Marcel Sui Gst 2
15 Backstedt Magnus Swe Fak 2
16 Tiralongo Paolo Ita Pan 1
17 Nocentini Rinaldo Ita Fpf 1
18 Khalilov Mykhaylo Ukr Clm 1
19 Bileka Volodimir Ukr Lan 1
Intergiro after Stage 9 (Maglia Azzurra)
1 Di Biase Moreno Ita Fpf 26:19:10 0:00
2 Backstedt Magnus Swe Fak 26:19:28 0:18
3 Nauduzs Andris Lat Ccc 26:19:41 0:31
4 Aggiano Elio Ita Fpf 26:19:50 0:40
5 Casper Jimmy Fra Fdj 26:20:04 0:54
6 Palumbo Giuseppe Ita Dnc 26:20:11 1:01
7 Svorada Jan Cze Lam 26:20:15 1:05
8 Casar Sandy Fra Fdj 26:20:22 1:12
9 Cipollini Mario Ita Dve 26:20:22 1:12
10 Gonzalez Martinez Fredy Col Clm 26:20:22 1:12
11 Fontanelli Fabiano Ita Mer 26:20:25 1:15
12 Miholievic Vladimir Cro Als 26:20:28 1:18
13 Gutierrez Cataluna Ignacio Esp Kel 26:20:28 1:18
14 Zaballa Gutierez Constan Esp Kel 26:20:28 1:18
15 Usano Martinez Julian Esp Kel 26:20:28 1:18
16 Petacchi Alessandro Ita Fas 26:20:28 1:18
17 Forster Robert Ger Gst 26:20:30 1:20
18 Faresin Gianni Ita Gst 26:20:34 1:24
19 Moreni Cristian Ita Als 26:20:34 1:24
20 Hvastija Martin Slo Ten 26:20:34 1:24
21 Brozyna Thomas Pol Ccc 26:20:38 1:28
22 Conte Biagio Ita Fpf 26:20:38 1:28
23 Furlan Angelo Ita Als 26:20:38 1:28
24 Pozzi Oscar Ita Ten 26:20:38 1:28
25 Sacchi Fabio Ita Sae 26:20:43 1:33
Trade Team after Stage 9 (Fast Team)
1 Saeco - Macchine Per Caffe' 122:39:47
2 Alessio 0:43
3 Lampre 2:25
4 Vini Caldirola - Sidermec 3:55
5 Ccc Polsat 5:29
6 Ceramiche Panaria - Fiordo 7:11
7 Gerolsteiner 7:50
8 Mercatone Uno - Scanavino 9:06
9 De Nardi - Colpack 10:38
10 Fassa Bortolo 11:55
11 Kelme - Costa Blanca 14:59
12 Colombia - Selle Italia 15:11
13 Landbouwkrediet - Colnago 15:20
14 Formaggi Pinzolo Fiave' - Ciarrocchi 20:17
15 Team Fakta - Pata Chips 20:34
16 Fdjeux.Com 22:47
17 Tenax 38:37
18 Domina Vacanze - Elitron 41:28
19 Lotto - Domo 55:45 | {"url":"http://www.dailypeloton.com/displayarticle.asp?pk=3726","timestamp":"2014-04-20T05:44:56Z","content_type":null,"content_length":"32332","record_id":"<urn:uuid:0f1641ab-00ee-4f61-b8e3-f450b2c27b84>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
The History of Numerical Analysis and Scientific Computing
Gaston Gonnet
Oral History (pdf)
Interviewer: Thomas Haigh
Born in Uruguay, Gonnet was first exposed to computers while working for IBM in Montevideo as a young man. This led him to a position at the university computer center and in turn to an undergraduate
degree in computer science in 1973. In 1974, he left for graduate study at the University of Waterloo, earning an M.Sc. and a Ph.D. under the supervision of Alan George. After one year teaching in
Rio de Janeiro he returned to Waterloo, as a faculty member.
In 1980, Gonnet began work with a group including Morven Gentleman and Keith Geddes to produce an efficient interactive computer algebra system able to work well on smaller computers: Maple. Gonnet
discusses in great detail the goals and organization of the Maple project, its technical characteristics, the Maple language and kernel, the Maple library, sources of funding, the contributions of
the various team members, and the evolution of the system over time. He compares the resulting system to MACSYMA, Mathematica, Reduce, Scratchpad and other systems. Gonnet also examines the licensing
and distribution of Maple and the project’s relations to its users. Maple was initially used for teaching purposes within the university, but soon found users in other institutions. From 1984,
distribution was handled by Watcom, a company associated with the university, and in 1988, Gonnet and Geddes created a new company, Waterloo Maple Software, Inc. to further commercialize Maple. Maple
established itself as the leading commercial computer algebra system. However, during the mid-1990s the company ran into trouble and disagreements with his colleagues caused Gonnet to withdraw from
managerial involvement. Since then, he feels that Maple has lost its battle with Mathematica. Gonnet also discusses Maple’s relation to Matlab and its creator, Cleve Moler.
Gonnet continued to work in a number of areas of computer science, including analysis of algorithms. In 1990, Gonnet moved from Waterloo to ETH in Switzerland. Among his projects since then have been
Darwin, a bioinformatics system for the manipulation of genetic data, and leadership of the OpenMath project to produce a standard representation for mathematical objects.
Key words: mathematical software, symbolic computation, computer algebra, MAPLE, MACSYMA, Mathematica, Reduce, Scratchpad, MATLAB, Darwin, OpenMath project
Funding Agency:
Time frame: 1970's, 1980's, 1990's
People: Alan George, Frank Tompa, Ian Munro, Morven Gentleman, Keith Geddes, Cleve Moler, Gene Golub, Donald Knuth, William (Velvel) Kahan, Walter Gander
Location: University of Waterloo, IBM (Montevideo), ETH Switzerland
Citation: Gaston Gonnet Oral history interview by Thomas Haigh, 16 - 18 March, 2005, Zurich, Switzerland. Society for Industrial and Applied Mathematics, Philadelphia, PA
Statement of Use Policy: Copyright © by the Computer History Museum. Use of this material for research purposes is allowed. Any such use should cite the SIAM History of Numerical Analysis and
Scientific Computing Project (http://history.siam.org). Use of the oral history materials for commercial purposes requires the written permission of the Computer History Museum. Contact the Computer
History Museum, 1401 N Shoreline Boulevard, Mountain View, CA 94043-1311 USA for permissions. | {"url":"http://history.siam.org/oralhistories/gonnet.htm","timestamp":"2014-04-19T14:36:07Z","content_type":null,"content_length":"9071","record_id":"<urn:uuid:73b8c3e0-17a0-403b-bfa1-0607075ec513>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00081-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cattaraugus Creek Characteristics
Kevin Williams, Buffalo State College
Intended Audience: Undergraduate course in geomorphology; this activity relates to the fluvial geomorphology aspect.
I use this activity on Cattaraugus Creek, New York, although it can be used on any local stream.
This activity follows a field trip where we visit locations along Cattaraugus Creek, NY. In this assignment, students use topo maps to calculate gradient and they calculate and evaluate hydrographs.
Students start this exercise using topographic maps of an area recently visited on a field trip to calculate and consider stream gradient of a major river south of Buffalo, NY. The activity then
changes gears to have students work with discharge measurements from this stream. They use these measurements to plot and evaluate a few hydrographs, which show how discharge in this
stream can be used to gauge how much precipitation was received in a certain year. In this lab, students practice mathematically calculating geomorphic properties of a stream, plotting data, and
comparing topographic maps to what they observed on the recent field trip. Designed for a geomorphology course, this activity uses online and/or real-time data. The activity addresses student fear of
the quantitative aspects and/or inadequate quantitative skills.
Students will use measurements off of topographic maps to calculate river gradient. They will also calculate average discharges to plot hydrographs and will interpret those graphs in order to compare
the rainfall in different years. Students need to critically evaluate the hydrographs that they produce to determine whether certain years were "wet" or "dry" in terms of rainfall/snowmelt. Students
use a fair amount of math in this exercise, so any deficiencies can be identified and addressed.
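A minimal Python sketch of both calculations may help students check their hand work; every number below is an invented placeholder rather than actual Cattaraugus Creek data:

# Both calculations from this activity, with made-up placeholder
# values standing in for topo-map readings and gauge records.
import matplotlib.pyplot as plt

# Stream gradient: elevation drop between two contour crossings
# divided by the channel distance between them (same units).
elev_up_ft, elev_down_ft = 1300.0, 900.0
channel_distance_ft = 15.0 * 5280.0  # 15 river miles, in feet
gradient = (elev_up_ft - elev_down_ft) / channel_distance_ft
print(f"Gradient: {gradient:.5f} ft/ft = {gradient * 5280:.1f} ft/mi")

# Hydrograph: mean discharge plotted against time; a wetter year
# shifts the curve upward, the comparison students are asked to make.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
discharge_cfs = [850, 1200, 2400, 1900, 900, 500]  # hypothetical means
plt.plot(months, discharge_cfs, marker="o")
plt.ylabel("Mean discharge (cfs)")
plt.title("Illustrative hydrograph")
plt.show()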
Assessment and Evaluation:
Based on the answers to the lab, it is usually clear which areas students need help on. The one time I have run this lab, problems were usually with math or interpreting the hydrographs.
Materials and Handouts:
Activity Description/Assignment (Acrobat (PDF) 43kB May29 08) | {"url":"http://nagt.org/nagt/teaching_resources/field/fieldtrips/cattaraugus_creek_characteristics.html","timestamp":"2014-04-19T19:03:33Z","content_type":null,"content_length":"23137","record_id":"<urn:uuid:28038e43-45df-432e-a4e6-05bfedd8f9c1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
A recommended roadmap to Fermat's Last Theorem
up vote 9 down vote favorite
I was inspired to undertake math as a career after watching a documentary on the proof of Fermat's Last Theorem. As such it's been a small goal of mine to understand Wiles et al's proof.
In a similar vein to this question, I was hoping to get a roadmap as to the required topics, with either suggested books or papers to read, I would be required to learn undertake this task. I am, in
particular, looking for expository papers on Galois representations of elliptic curves and deformations of Galois representations.
As for my background I am currently a first year graduate student with the usual algebra, analysis, and topology prerequisites. I also have a course in algebraic number theory (up to the proof of the
finiteness of class numbers), modular forms, and algebraic curves (up to Riemann-Roch) under my belt. I am also currently working through Silverman's AEC.
Thank you in advance for any advice given.
modular-forms elliptic-curves nt.number-theory reference-request
4 This should be community wiki. – Igor Rivin May 24 '12 at 13:49
14 Let me make the following "philosophical" point. If you read these books like Cornell-Silverman-Stevens, then probably you'll be happy, but in some sense all you will have learnt is how to deduce
FLT from stuff that was regarded as standard in the early 90s. So, for example, you will have to take on trust the modularity of $E[3]$ (proved by Langlands and Tunnell using a lot of very
complicated analysis, e.g. analytic continuation of Eisenstein series, cyclic base change for GL(2), non-Galois cubic base change...). As another example... – Kevin Buzzard May 24 '12 at 14:50
11 ...you'll have to believe in the Neron model of the Jacobian of a curve over a $p$-adic field, the relationship between the reduction of the curve and the reduction of the model. You'll have to
believe in local-global for modular forms, a hard theorem of Carayol involving some very delicate vanishing cycles calculations. You'll have to believe in SGA7. You'll have to believe in the
reduction of Shimura curves at primes dividing the discriminant (to follow Ribet's work) and this is very technical... – Kevin Buzzard May 24 '12 at 14:54
10 ..., and you'll have to believe in Fontaine's work on $p$-divisible groups in order to follow Ramakrishna's thesis, which is crucial. Those are just a few things that spring to mind. In books
like Cornell-Silverman-Stevens a lot of these things are regarded as "standard" (because they were!) and references are given. On the other hand $R=T$ theorems are now regarded as "standard"! And
FLT follows "via a standard argument" from such theorems! So in some sense it's hard to see where to logically draw the line :-) – Kevin Buzzard May 24 '12 at 14:56
6 To continue on Kevin's riff, you'll also have to believe Faltings's proof of the Tate conjecture, the Hecke-Weil theorem relating modular forms and L-series with functional equations and a bunch
of other stuff. – Felipe Voloch May 24 '12 at 21:56
3 Answers
up vote 17 down vote accepted
What about
• Cornell-Silverman-Stevens, Modular Forms and Fermat's Last Theorem
• Darmon-Diamond-Taylor, Fermat's Last Theorem, http://modular.math.washington.edu/edu/2011/581g/misc/Darmon-Diamond-Taylor-Fermats_Last_Theorem.pdf
• Diamond-Shurman, A First Course in Modular Forms
• some of Milne's course notes http://jmilne.org/math/CourseNotes/index.html
• William Stein's course notes http://sage.math.washington.edu/edu/Fall2003/252/lectures/?
up vote 14 down vote
The book edited by Cornell, Silverman and Stevens is terrific (though you'll of course find some articles more readable than others), but a less demanding alternative is Alf van der Poorten's Notes on Fermat's Last Theorem, which is really great fun to read, or to dip into. I see that there's a second edition due out in September, so you might or might not want to wait.
Edited to add: Here is Andrew Granville's review.
up vote 2 down vote
mentions a book by Gary Cornell, among other resources. | {"url":"http://mathoverflow.net/questions/97820/a-recommended-roadmap-to-fermats-last-theorem?sort=votes","timestamp":"2014-04-18T13:29:14Z","content_type":null,"content_length":"65923","record_id":"<urn:uuid:e4b372ec-a0ba-4404-b2ac-53085f73a57b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Modeling antiviral resistance, post zero: A failed blog experiment
Tomorrow we begin a blog experiment, one we already judge has failed. In January Marc Lipsitch and his team at the Harvard School of Public Health published a splendid paper using a mathematical
model to investigate the spread of antiviral resistance in the control of pandemic influenza. When we read it our first thought was to write a substantial blog post about the results. The paper was
published in PLoS Medicine almost the same week as another mathematical model on spread through the air traffic system by Colizza et al. and the Colizza paper seemed to get most of the newswire
notice. But the Lipsitch paper is important in several respects. It has results of great interest to planners and anyone else concerned with pandemic influenza. And it is instructive.
The Harvard team are expert modelers and subject matter experts. This is modeling done the right way and PLoS Medicine is an Open Access journal, so the entire paper was available for free download.
Readers could follow the explanations with the original paper in hand. So we decided to go ahead and explain it in detail.
Why consider this an “experiment”? The experiment was to see if a paper that used a coupled system of non-linear ordinary differential equations as its main technical tool could be explained
sufficiently so a lay audience could understand what was involved and how the model worked. In that way they would have a better appreciation for the findings and some understanding of an important
tool, mathematical modeling. We took it as a personal challenge, and that part of the experiment succeeded, we think. We have been teaching a long time and it is our experience you can teach just
about any subject to non-technical audiences if you take the time and effort. Some of the posts might take more focus and attention than most readers can afford or desire to devote to them, but we
think most everyone who reads this site could make their way through the explanation if they invested the effort.
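To give a concrete sense of what a coupled system of non-linear ordinary differential equations looks like in this setting, here is a minimal sketch. It is emphatically not the Lipsitch model, just a generic two-strain SIR-type system with invented parameters, in which treatment reduces transmission of the drug-sensitive strain but not of the resistant one:

# NOT the Lipsitch et al. model: an illustrative two-strain SIR-type
# system with invented parameters. Treatment cuts transmission of the
# sensitive strain; the resistant strain pays a small fitness cost.
import numpy as np
from scipy.integrate import odeint

beta_s, beta_r = 0.35, 0.30  # transmission rates per unit time
gamma = 0.25                 # recovery rate
f, eff = 0.3, 0.8            # treated fraction, treatment effectiveness

def deriv(y, t):
    S, Is, Ir, R = y
    new_s = beta_s * (1 - f * eff) * S * Is  # sensitive-strain infections
    new_r = beta_r * S * Ir                  # resistant-strain infections
    return [-(new_s + new_r),
            new_s - gamma * Is,
            new_r - gamma * Ir,
            gamma * (Is + Ir)]

t = np.linspace(0, 300, 3001)
y0 = [0.999, 1e-3, 1e-6, 0.0]  # a trace of resistance seeded at the start
S, Is, Ir, R = odeint(deriv, y0, t).T
print(f"Overall attack rate: {1 - S[-1]:.2f}")
print(f"Resistant share of person-time infected: {Ir.sum() / (Is.sum() + Ir.sum()):.2f}")

Even this toy version displays the tension at the heart of the paper: treatment that slows the sensitive strain hands a relative advantage to whatever resistant strain emerges.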
That’s where, we judge, the experiment fails. The blog format is both flexible and constraining. Readers come for short, usually stand alone, posts about something they’re interested in or just to
see what’s being said in the blogosphere, a venue that has become surprisingly influential. They don’t come for a connected series of sixteen posts on a single specialized scientific paper.
Yes. Sixteen posts. This surprised even us. We read the original paper in an hour or two and it seemed straightforward. It wasn’t until we set out to explain it in detail — enough detail so a lay
reader could see what was going on in almost every paragraph, figure and equation — that we began to realize how many moving parts there were that we took for granted. Even the figures took a couple
of paragraphs each, sometimes a whole post for one figure. We've spent almost forty hours writing this series. We wouldn't do it again. We got a great deal out of it ourselves, for to teach
is to learn twice. But it is a cost-benefit question for both reader and writer.
However, we did do it, and it would be a waste to just delete the final drafts. We're not forcing anybody to read them and some of you, no doubt, will do so with interest, and we hope even with
pleasure. If we want most people who read us to keep coming here, we will have to keep posting in the usual blog style, though. We’ll also keep doing that to the best of our abilities.
We took a risk we don’t think paid off. Live and learn. First post, tomorrow.
Table of contents for posts in the series:
The Introduction. What’s the paper about?
Sidebar: thinking mathematically
The rule book in equation form
Effects of treatment and prophylaxis on resistance
Effects of Tamiflu use and non drug interventions
Effects of fitness costs of resistance
A few words about model assumptions
1. #1 traumatized March 20, 2007
2. #2 JJackson March 20, 2007
Good, go for it!
A while a go (old site) you wrote a series of posts confusingly called Turkish Mutations (I think) explaining how AI gains access to the cell a2,3 a2,6 etc. These were great and I doubt these
posts will fail in your objective. I, and I suspect others, are very willing to put in the time to try and improve our understanding and modelling is a great place to start. It is ubiquitous and
the conclusions are often quoted but without an explanation of the method or how chaotically the model behaves depending on the parameters chosen as inputs.
Richard Feynman was asked to try and explain Quantum Electrodynamics (QED) by a non-scientist friend but felt it was too difficult to do justice to for a lay audience. It was a matter of
considerable regret to him that Alix Mautner died before he figured out how to give this lecture but he did so in a memorial lecture and the book based on it is readable and extraordinary. He
felt a full understanding of physics could not be gained without an equal understanding of math but was equally convinced that a fair understanding was attainable by everyone and was worth having.
3. #3 Melanie March 20, 2007
Yes, the blog format is constraining. But so are symphonies, string quartets and poetry. That having been said, I join JJackson to encourage you to bring us what you found. You are a gifted
teacher who has taught me so much technical information in this space that I didn’t ever think I’d be able to understand. You’ve already invested a great deal in this series, so put it out and
let’s dicker, as needed in the comments threads. I’m speaking as a math phobic who is always trying to overcome my shortcomings.
4. #4 gilmore March 20, 2007
It is better to have learned and forgotten than to never have learned at all...
5. #5 andy March 20, 2007
I’m a theoretical physicist with an interest in mathematical epidemiology, and read the paper but with some difficulty. I hope your that your “lay explanation” will clarify some parts I found
confusing. I can hardly wait! Thank you so very much for your efforts — I assure you they have not been wasted.
6. #6 R Hayes March 20, 2007
Do you write for the many, or the few?
In this forum, for the many, obviously; but how many of the many does it take for the effort to be worthwhile?
It might be that those of your readers who read and benefit from the exposition are a crucial “market segment”; it depends on what your payoff is — and that might not be clear-cut or static.
7. #7 MFoley March 20, 2007
I agree with the above posts. As an economist with an interest in simulations I find a lot of good ideas in the math-epi field. I am probably able to get through these papers on my own, but
having your guidance will greatly increase the range of insights I expect to gain. Let’s see those posts!
8. #8 Chris March 20, 2007
I think your little experiment shows a great deal about the concise and precise writing style that is expected of scientists. I still remember my first big science write-up in undergrad. I
thought I put in some extra time and thought that I did a pretty good job of making it short and to the point. Imagine my surprise when my prof handed it back with a note that said “good, but it
should be about half as long”.
This condensed style we cultivate also leads to more jargon, and makes most of the science literature incomprehensible to those with no training or familiarity with the field. As a result, we’ll
always need people like Carl Zimmer and Carl Sagan who popularize science without distorting its claims. Thanks for joining the ranks of people making science more accessible.
9. #9 marquer March 20, 2007
With regard to that Colizza paper, it says, right in the front matter,
Like all mathematical models, this model for the global spread of an emerging pandemic influenza virus contains many assumptions (for example, about viral behavior) that might affect the accuracy
of its predictions.
Seems to me that the largest single assumption that is made by Colizza, et al, is that the neuraminidase-inhibitor antivirals are going to work to slow down the spread.
There are serious reasons to suspect that any emergent panflu may be largely or completely resistant to Tamiflu and Relenza. It’s purely Panglossian to expect otherwise. And I am enough of a
grouchy old engineering type that I instinctively dismiss any planning or modelling effort which does not include worst-case scenarios from the outset.
With regard to air travel, I carry a folded N95 mask whenever I fly, tucked in my cabin baggage. If there is an obviously sick person in proximity to me, it is a sensible prophylaxis. Imperfect,
but nevertheless much better than going entirely unprotected. Even if what the other person is infected with is not a lethal panflu, I prefer not to sit in foreign hotel rooms sick with even an
annoyance-level head cold. I’ve done that. Ugh.
I note en passant that the entire cohort of passengers and cabin crew could be provided with this level of protection, or better (N100) for a tiny cost. A set of fifty such masks costs, in bulk,
about fifty bucks, and masses about 1kg, and fits in a space the size of a shoe box. Fifteen such boxes stowed in otherwise unused corners of the aircraft (like the avionics bays) would protect
even a 747- or A380-sized population.
The fewer people who get infected on the plane, the slower the spread at the destination once they get off. Not rocket science.
I note as well that airliner environmental controls, just as in automobiles, have two settings: input of outside air, and recirculation of inside air. Airline operations staff invariably dictate
that the cabin air be set for max recirc — because it’s cheaper that way. Outside air has to be bled off the engines, warmed, and compressed to a breathable density, which cuts fuel economy.
So the passengers and cabin crew rebreathe each others’ exhalations for hours. The health statutes for decent treatment of prison inmates mandate a higher fresh air flow than is provided to an
airline passenger! (The cockpit crew get max fresh outside air at all times, which tells you certain things.)
If the panflu side of the public health apparat had a clue, their contingency plans for managing the early stages of a breakout would include mandating that the airlines use maximum outside
ventilation on those flights which were permitted to operate. Again, this isn’t a perfect guarantee against infection. What it would likely do would be to reduce the number of persons likely to
become infected. Which, given the epidemiological mathematics, is important.
10. #10 revere March 20, 2007
R. Hayes: The many or the few? It's not so much a question of that. It's a cost benefit calculation with respect to my time and effort and how much benefit would accrue. It literally took me
more than 40 hours to write and rewrite the posts (not that you’ll know it from reading them; they’re not that good). I just don’t have the time to do this and I judge most people will not have
the time to read all of them. I tried to make them semi-independent, but several of them are so specific to the paper that it wasn’t possible to do that entirely. They will go too fast for some,
agonizingly slow for others. It's the nature of the form I am writing in. I certainly take as much or usually much more time explaining things to far fewer people when I teach, so it's not that
kind of accounting.
marquer: IMO N100s would be a waste of money. N95s for cabin crews would not be properly fitted and so much of their protection would be wasted, too. They are probably better than nothing, but
I’d rather have Tamiflu. You can read the posts (or not) and make up your own mind. The models will do one thing: help you set out your assumptions, which I recommend for your mask statements.
11. #11 Steph March 20, 2007
For what it’s worth – those explaining-the-science posts are my favorite ones! Thanks for taking the trouble and time. I for one really appreciate it.
12. #12 JDBallenger March 20, 2007
Self-deprecating comments on style from the Reveres aside, this is an important effort and should be applauded. If nothing else, I expect the forthcoming posts will be entertaining.
The Revere efforts will probably be the only detailed lay explanation of mathematical epi modelling on the web – yes, from a cost / benefit perspective the short term return is poor, but over the
longer term we will all be better informed.
All I can say is thanks – it will probably inform my research and that of many others, so in that regard it is most certainly not a failed experiment.
13. #13 MFoley March 20, 2007
Since the cost for providing this service is so concentrated onto one individual while the benefits are very large but diffused across many readers, it is unlikely this can be sustained in its
present structure. This would be a shame since it is likely very beneficial, a perfect example of a “public good”. In grad school we formed “journal clubs” where we would each take it in turn to
present advanced-lay-reader presentations of recently published work to each other in our areas of special interest. This leveraged our resources wonderfully, and exposed us to many more articles
we would never otherwise have read. Is there some way we could form a kind of on-line version of this among ourselves?
14. #14 Tom DVM March 20, 2007
How exactly does one model something (nature) that is beyond our perception and capabilities?
15. #15 revere March 20, 2007
Tom: You mean like religion (modeled like an old man with a beard in the sky), quantum mechanics (probability functions), proteins (ball and stick models), atoms (solar systems), etc., etc. What
exactly did you mean by “exactly”?
16. #16 sharpstick March 20, 2007
I shut the office door and forward phone calls when you do the “explaining the science posts”. I’m looking forward to them.
17. #17 Rob March 20, 2007
Whoa. I hear you when you say it’s not easy. I did quite a bit of journalistic writing for scientific professionals back before my hair turned grey. You have to first find the floor, and then
explain your way up to the ceiling–all without letting your reader turn the page. I’m looking forward to this.
18. #18 M. Randolph Kruger March 20, 2007
Mar-Q. M-95's are designed to keep virus in and not out. Funk in the air would be blown about but the airplanes are now fitted with an M-95 filter on the return air. Not good enough though as
evidenced by the SARS outbreak. Yeah, they had them on there then too but it was for TB more than anything else. Your eyes, skin and the mask itself would be pretty good points for the stuff to
collect in and on. How do you take an M-95 off safely, pray tell?
Revere, is the modeling thing going to be modeled on that QuickTime movie that was produced about a year ago? The movie, for those who just joined us, had H5N1 or a pandemic flu running thru the US
and it changed a green map, to yellow, then red, then bright red and then slid back down the scale after four months (single wave event). There were white areas before the start of the pandemic
indicating few or no people. Markedly absent were the white areas showing where the dead would be stacked up as it progressed and of course when it had finished with us. If it maintains its
current yearly rate of 83% it would create a lot of "white" areas that weren't there before. So that was something that I noted. It also just made one single assumption that we wouldn't be doing
squat and just letting it happen. Now we have the antiviral thing that Revere is setting up here.
I know they didn't put it in because they didn't and don't know what the CFR's would be until it breaks. So does this antiviral blanket throwdown create fewer CFR's in your opinion or does it do
what it's doing now, which is save the mild cases and not do a thing for the major, Revere? I say that because I see that the virus is becoming resistant to Amantadines, Rimantadines in all of the
sideline news… did I miss any 'dines or are there others out there? That original model would likely still stand and be modified by all of the factors that they could likely be cranking into
it, such as lack of food, antivirals, masks (if they worked), etc.
So what's your best guess of what antivirals will really be able to do? I just haven't seen anything indicating that any of them work, but that doesn't mean that they won't.
19. #19 Tom DVM March 20, 2007
Faith-based science?
You can’t model what you can’t understand or control.
Therefore, you can’t model nature.
Models are ‘toys for boys’…an interesting parlour game for the sophisticated…in my opinion.
20. #20 Tom DVM March 20, 2007
…but a heck of a way to get research funding.
21. #21 revere March 20, 2007
tom: It’s not an easy way to get research funding. Getting research funding today is painfully difficult, more so for modelers than many others. I’ll discuss what models are tomorrow. Read it if
you wish.
22. #22 Tom DVM March 20, 2007
Revere. There is quite a difference between biochemistry molecular models and disease epidemiology models. I don’t know anything about quantum mechanics or God. /:0)
My personal opinion of this type of modelling is completely separate from the interest with which I will study your analysis.
Thanks again.
23. #23 Miso March 21, 2007
“There are serious reasons to suspect that any emergent panflu may be largely or completely resistant to Tamiflu and Relenza.”
could you supply more information with regard to resistance to Relenza? I am not aware of any discussion of a viable Relenza resistant mutation.
24. #24 marquer March 21, 2007
Miso, if one looks at the (now depressingly long) list of drugs which have lost their utility due to evolved resistance, it is evident that in many cases, drugs with a similar mechanism of action
frequently end up being rendered generically useless by adaptation to one drug in the class. Uncle Darwin’s scythe has a wide blade.
That has long been seen in antibiotics, and holds true for antivirals as well. Amantadine and rimantadine, with comparable biochemical mechanisms, used to work on H5N1. Now neither one does. The
culprit seems to have been primarily amantadine overuse.
Oseltamivir and zanamivir are both neuraminidase inhibitors. And oseltamivir is being used now in reckless, not well thought out or clinically overseen ways. There are already signs that response
to its administration during H5N1 progression is not as strong as would have been thought. Ready to bet on either or both drugs as the last line of defense? It seems a chancy wager.
It would be frankly a reassuring thing to have a third class of antiviral in the formulary right now, unrelated to either of these two groups. That does not (yet) exist.
Revere and Randy, on the mask issue: most of what I know about respirators comes from their use in the context of IDLH industrial chemical hazards. Pandemic flu might kill you in a few days if
you get a whiff of it. The chemicals I’m referring to are ones that will kill you in sixty seconds, guaranteed or your money back, if the respirator fails.
Contemplating panflu risks is almost calming by contrast. But, on the other hand, industrial chemicals don't replicate uncontrollably in biological hosts. They stay put.
Is an N95/N100 just the thing to protect against an aggressive airborne pathogen? No. Not at all. Totally enclosed hoods with their own supplied oxygen would be preferred. In an airliner context,
ones with provision to be fed off of outside air. And that did not dump exhaled air to the cabin.
But that is fantasy. Too expensive, too complex. And many laypersons are frightened to put on heavy head-covering respiratory kit — it makes them claustrophobic. It does for me, too, except that
I know the alternative is certainly worse. N95s exist right now, they are cheap, and can be proliferated widely as a means of slowing down (*not* preventing entirely) initial pandemic infections.
The innocuous little quarter-face masks don’t freak out civilians. You can even make them in cute pastel colors.
And while they don’t work ideally well, they nevertheless work.
I don’t think that the increment of protection from N95 to N100 is large. But the incremental cost of manufacturing to an N100 standard is tiny. Might as well.
With regard to fit, a badly designed mask will not fit *anyone*. I’ve suffered through wearing some of those. But a mask with multipoint attachments, a flexible nose clip and an exhale valve will
fit most people well enough to accomplish some protection. That’s all that is being recommended here: partial, temporary, imperfect protection. This is not a magic bullet. But in a situation
where nothing else is a magic bullet either, we have to stack up layers of improvised partial solutions.
Magic bullets are great — if they arrive in time. I refer interested parties to Arthur C. Clarke’s classic short story Superiority for a lesson in over-reliance on said timely arrival.
Note also that air leaks around the periphery of a lightweight mask can be almost totally ameliorated with a few lengths of impermeable medical tape. I know because I have had to have recourse to
that when the only protection to hand was a crappy mask with a bad fit, which had to be made to do the job via rude field expedients.
25. #25 marquer March 21, 2007
Oh, and I forgot to mention — no, Randy, not the M95 military mask! The N95 civilian aerosol mask is what I had referred to. Just one M95 probably occupies the size and mass spec which I had
listed for a box of fifty N95s. And it probably costs several times more for one M95 than for that lot of fifty as well.
It would be far better protection. But the capacity isn’t there to make zillions of those heavy complex costly masks as early first-pass response items. Not to mention that Grandma would never
put one on.
26. #26 cpg March 21, 2007
“There are serious reasons to suspect that any emergent panflu may be largely or completely resistant to Tamiflu and Relenza.”
‘Oseltamivir and zanamivir are both neuraminidase inhibitors’
You didn’t answer miso question about your statement.
Relenza is structurely different than Tamiflu even though they are both neuraminidase inhibitors.
Resistance to Relenza has not be been proven in ANY strain of H5N1 currently circulating. Quite the opposite it has been proven to work were Tamiflu and Amantadine have not.
So your comment is in fact false based on available evidence and I like you to retract it or provide facts to support it.
27. #27 JJackson March 21, 2007
I too am unaware of any in vivo Relenza resistance. I am more pessimistic than the modellers on how often Tamiflu resistance will occur; we have had human H5N1 in 12 countries and resistance has
emerged independently in 3 of them. Another problem is getting it used within the 24 to 48 hour time window after symptom onset as per the packet: Indonesia recently published the ‘good news’
that the average time to delivery had been reduced from 5.7 to 5.2 days. Late application, in the absence of any alternative, cannot help in the battle to reduce resistance. The ion channel
blockers readily produce resistant strains but are cheaper; might they have a role in combination therapy to slow the emergence of Tamiflu-resistant strains? Has anyone seen a model with
Amantadine in the mix?
28. #28 revere March 21, 2007
JJ et al.: Resistance is not all or none. A contribution of the paper is to consider various degrees of resistance. But the paper won’t settle the questions or doubts people have. I am going
through it both for its substance and its method, so you can see what is involved and have a better idea of what this kind of science is about. If you've already made up your mind about antivirals I
hope you will still get something out of learning about modeling.
29. #29 Miso March 22, 2007
Perhaps the modelling would have been more informative if the known differences between Tamiflu and Relenza were accounted for. Instead Tamiflu characteristics, particularly resistance, MAGICALLY
became neuraminidase inhibitor characteristics. Relenza doesn’t even get mentioned by name, and yet if Tamiflu and Relenza had been modelled individually, wouldn’t that have been more useful?
Call it a conspiracy theory, but I think allotting Relenza its own characteristics would have generated an embarrassing model, one that does not vindicate the world’s rush to stockpile Tamiflu.
30. #30 revere March 22, 2007
Miso: Nothing in the model structure depends on this being Tamiflu, or, for that matter, a neuraminidase inhibitor. The variables here are rate of emergence of resistance (and it could as well be
Relenza), fitness cost (which may be zero or something much greater) and transmission rates. Tamiflu is the only oral agent at the moment, so when it comes to dispensing tens of millions of doses, self-administered, it is the natural example, but the model doesn’t require it. Perhaps you should read the series and then see what you think. Relenza and Tamiflu are very similar and it is not
by any means obvious (to me, anyway) that you cannot get resistance to Relenza, too. Suppose that resistance is much less common, say 100 or 1000 times less common. Then you are in the domain of
this paper, which asks what the effect of just such rare emergences might be. So everywhere it says Tamiflu, feel free to substitute Relenza.
31. #31 Tom DVM March 22, 2007
Unfortunately, influenza has effectively harnessed its instability.
Its rapid replication rate and error rate in replication have allowed it to evade every technology, treatment (antivirals) and defense (vaccination) that we have developed against it.
It took decades to develop significant resistance to antibiotics while it takes weeks or days to develop equivalent resistance to antivirals, due to genetic instability and replication rate.
H5N1 has been with us for ten years. The signposts (significant antiviral resistance and vaccine failure) are there for all to see.
Whether we choose to put our faith in technology and fail to read them is a uniquely human failing.
I think it would improve our chances for survival, on many levels, if we admitted that current technologies and infrastructure shortcomings leave us more at risk than we were in 1918.
32. #32 Miso March 22, 2007
“Relenza and Tamiflu are very similar and it is not by any means obvious (to me, anyway) that you cannot get resistance to Relenza, too.”
“So everywhere it says Tamiflu, feel free to substitute Relenza.”
I didn’t say you can’t get resistance to Relenza. I said that Tamiflu resistance already exists, was predicted, and there are mutations that have not occurred yet but will also confer resistance
without reducing viability in the virus. Viable resistance to Relenza was not predicted, and although it has not (reportedly) been used in the field against bird flu, when used in the laboratory
it has been effective against Tamiflu and Amantadine resistant strains.
Why don’t you at least read:
Oseltamivir Resistance – Disabling Our Influenza Defenses by Anne Moscona, M.D. (http://content.nejm.org/cgi/content/full/353/25/2633)
and then tell me the NIs are interchangeable.
I didn’t tell the authors what to call their model, but having called it a model of NIs, they not only missed an opportunity to highlight the differences, they ignored them.
No need for Relenza, no need to develop i.v. Relenza, we’ve got Tamiflu in convenient capsules. Well I tell you what, tell me where I can get some zanamivir so I can make up my own i.v. Relenza,
and good luck to you.
33. #33 revere March 22, 2007
Miso: It’s not about Relenza or Tamiflu. It’s about modeling and what we can learn from it. If you want to learn something. And don’t have a one track and closed mind. And don’t perseverate. BTW,
there is at least one report of resistance in flu B to Relenza and we can expect more as more is used. But Relenza sounds great. I’d like to have some. I’m not against it. I’m not pro Tamiflu.
It’s just that these posts are about something else.
34. #34 Miso March 22, 2007
Well researched revere,
“An immunocompromised 18-month-old girl developed influenza B infection following bone marrow transplantation for juvenile chronic myelocytic leukemia. She was treated with 6 mg of ribavirin
every 12 h by continuous aerosolized delivery. When her clinical condition deteriorated, approval was obtained from the US Food and Drug Administration for individual use of a new influenza NA
inhibitor, zanamivir, and treatment was started 6 days after the diagnosis of infection. Zanamivir was administered by nebulizer at a dosage of 16–32 mg in 1–2 mL of sterile water every 6 h, the highest dosage tested in preliminary clinical trials. Treatment with zanamivir was discontinued when her clinical status worsened. During the 2 weeks she was treated with zanamivir, she shed
virus, which was detected by routine nasopharyngeal swabs for virus. The child died of respiratory failure 2 days after zanamivir treatment was discontinued.”
“There is no evidence of zanamivir resistance in viruses isolated from normal healthy patients after treatment with the drug. The only case of in vivo zanamivir resistance is that of an
18-month-old immunocompromised child, who acquired an influenza B virus infection and failed to respond to ribavirin treatment. The child was subsequently treated with zanamivir and after 12 days
of treatment a virus containing an R152K NA mutation was isolated. This virus also contained a mutation in the HA protein, T198I, which had appeared prior to the NA mutation. In contrast,
resistance to oseltamivir occurs in 1%-4% of adults and 4%-8% of the paediatric population.”
Sir Lancelot: Look, my liege!
[trumpets play a fanfare as the camera cuts briefly to the sight of a majestic castle]
King Arthur: [in awe] Camelot!
Sir Galahad: [in awe] Camelot!
Sir Lancelot: [in awe] Camelot!
Patsy: [derisively] It’s only a model!
35. #35 cpg March 26, 2007
I notice with pleasure the splitting of the model into two. One theoretical, the other practical.
I was going to post about seeing the wood from the trees re Modeling, but now we have a good compromise.
Can we now talk on this blog about the resistance of antivirals from a practical perspective, based on existing facts?
Can we then intertwine the theoretical with the practical when the right time presents itself?
I think this is the meaning of what you have just presented???
36. #36 greensmile April 6, 2007
congratulations. You got all the way through it. We will all pass around links to this for quite a while.
I should thank you twice. I have had half a mind to try blogging on math modelling…it was something I did and enjoyed at a number of points throughout my career. But I have to admit it is not
nearly as much fun to write about as it is to design, code and run…I never even wrote the first post.
You have spared me the trouble of actually demonstrating the difficulty by trying it myself.
37. #37 revere April 6, 2007
greensmile: Thanks. Much appreciated.
38. #38 Pete G Kinnon August 12, 2008
To return to a couple of points relating to generalities of the initial post.
“The experiment was to see if a paper that used a coupled system of non-linear ordinary differential equations as its main technical tool could be explained sufficiently so a lay audience could
understand what was involved and how the model worked. In that way they would have a better appreciation for the findings and some understanding of an important tool, mathematical modeling.”
It seems to be often overlooked that mathematics is simply a language. While, unlike the natural languages, it is largely synthetic, all of its expressions having direct application to the real
world must, in principle, be expressible in natural language.
In practice, of course, translation of a page of equations into natural language may be a formidable task requiring hundreds of pages of output.
Similarly, describing a rose using the language of mathematics, although perhaps not impossible, requires a vast output to achieve even a modest representation.
Such considerations form part of a book I am currently writing (with the general reader in mind) and I would be interested to hear opinions on this. | {"url":"http://scienceblogs.com/effectmeasure/2007/03/20/modeling-antiviral-resistance-1/","timestamp":"2014-04-18T10:50:51Z","content_type":null,"content_length":"127664","record_id":"<urn:uuid:a51f17f8-e309-46bb-aacf-0f7ae9aa9de0>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00360-ip-10-147-4-33.ec2.internal.warc.gz"} |
18. Write a paragraph proof of this theorem: In a plane, if two lines are perpendicular to the same line, then they are parallel to each other. Given: r perpendicular s, t perpendicular s. Prove: r || t
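A sketch of the standard argument (supplied here for illustration; it is not an answer from the original thread): since r is perpendicular to s, the lines r and s form a right angle; since t is perpendicular to s, the lines t and s also form a right angle. The line s is therefore a transversal cutting r and t in congruent corresponding angles, both measuring 90 degrees. By the converse of the Corresponding Angles Postulate, congruent corresponding angles imply that the lines cut by the transversal are parallel, so r || t.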
La Crescenta Math Tutor
...As a Certified Public Accountant I am also well qualified to tutor Accounting; my past students have gone from "what’s a debit?" to "I got an A in Accounting!" My day job is as an Audit
Supervisor where I train my staff in audit standards and tax law. I look forward to helping others enjoy the learning process as much as I do. I am a Certified Public Accountant.
5 Subjects: including algebra 2, algebra 1, prealgebra, accounting
...I advise students on their doctoral dissertations as a methodologist and also serve on their dissertation committees as chair. I have a PhD in International Economics and have taught classes in
micro and macroeconomics, monetary economics, and managerial economics. I use SPSS daily as part of my ...
7 Subjects: including econometrics, probability, statistics, SPSS
...Normally, I work with students at their own pace and am effective at getting the essence of the subject across to students. Rather than simply talking at students and expecting them to
understand, I try to ask pointed questions to get them to do the thinking and the problem solving. This mann...
15 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have stayed very adept in K-12 math and proficient at college algebra. I have continued my passion for English by assisting my wife and son with proofreading and necessary editing of any
written school assignments. I love reading fiction novels and have grown very fond of young adult literature as I read all books that my stepson reads to assist him with his reading comprehension.
14 Subjects: including prealgebra, SAT math, public speaking, algebra 1
...I started tutoring in high school and have continued to do so until now, whether it be with friends or my little brothers and sisters. I am easy to get along with, communicate well, and have a lot
of patience. I look forward to building long-lasting relationships with you.
21 Subjects: including linear algebra, elementary (k-6th), GED, reading | {"url":"http://www.purplemath.com/la_crescenta_math_tutors.php","timestamp":"2014-04-16T10:20:01Z","content_type":null,"content_length":"24018","record_id":"<urn:uuid:483be443-081a-498b-8525-84410eed1dd3>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00265-ip-10-147-4-33.ec2.internal.warc.gz"} |
This Week’s Finds in Mathematical Physics (Week 246)
Posted by John Baez
In week246 of This Week’s Finds, read about Peter Woit’s Not Even Wrong and Lee Smolin’s The Trouble With Physics:
Posted at February 25, 2007 11:26 PM UTC
Re: This Week’s Finds in Mathematical Physics (Week 246)
Layman question: What about another “string-inspired” application?
“Applications of the AdS/CFT correspondence to strongly coupled QCD as observed in relativistic heavy ion collisions” (citing backreaction.blogspot.com here)
Isn’t it the most promising “string-inspired” construction? It seems it could be (almost) experimentally verified.
Posted by: serg271 on February 26, 2007 6:39 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
First of all, the application of mathematical techniques from string theory to the problem of heavy ion collisions is completely separate from the issue of whether string theory is a correct theory
of fundamental particle physics. The stuff you’re talking about is just a clever new way of doing calculations in Standard Model physics. Even if it works, it doesn’t mean the universe is made of strings.
Do you know this? I ask because I just realized, to my horror, that it might not be obvious to layfolk! It’s obvious to physicists.
It’s as if Einstein figured out a way to use math from general relativity to solve problems in hydrodynamics. Suppose it turned out that these methods could correctly predict what happens when you
flush your toilet. This would not mean general relativity is correct! To test general relativity you need to look at bending starlight or black holes, not flush toilets.
Second of all, it’s not clear how well these AdS/CFT methods actually work. Since string theory is supersymmetric, these methods actually apply to something called $N = 4$ supersymmetric Yang–Mills
theory. This is similar to the ordinary Yang–Mills theory that we use to describe quarks and gluons… but it’s different. So, it only gives approximately correct answers for the real-world problems
involving quarks and gluons, and there’s a big debate over how well it works, and how much it’s been hyped.
Over at that blog you mentioned, Backreaction, you’ll see that Larry McLerran has given Brian Greene a “Pinocchio award” for overstating how well these AdS/CFT methods work for relativistic heavy ion
collisions! Here’s one of his slides:
Here ‘$N = 4$ SUSY Yang–Mills’ is the theory that string theory techniques can be applied to, while quantum chromodynamics (= ‘QCD’) is the theory that actually describes the strongly coupled
quark-gluon plasma (= ‘sQGP’) they’re seeing at the Relativistic Heavy Ion Collider (= ‘RHIC’). The slide is pointing out how these are quite different.
See the blog for more details.
Posted by: John Baez on February 26, 2007 7:10 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
John Baez wrote:
The stuff you’re talking about is just a clever new way of doing calculations in Standard Model physics. Even if it works, it doesn’t mean the universe is made of strings.
When my lawyer and artist friends ask me (“that physics guy”) about string theory, this is one of the points I try to get across, and it seems to transmit well. It’s just string theory going back to
its roots, after all. So, leaving aside the point raised by Polchinski, the idea appears to be pretty easily grokkable.
Polchinski wrote:
String-theory skeptics could take the point of view that it is just a mathematical spinoff. However, one of the repeated lessons of physics is unity — nature uses a small number of principles in
diverse ways. And so the quantum gravity that is manifesting itself in dual form at Brookhaven is likely to be the same one that operates everywhere else in the universe.
One could easily take this idea too far. In undergrad quantum mechanics, I was taught to solve the hydrogen atom with a method which is essentially the grandchild of superstrings, but that doesn’t
mean the universe is “stringy”!
My biggest concern with the AdS/CFT business is not that it shows string theory is correct, or anything like that. Instead, it worries me that a discussion which purports to concern the sociology of
science seems to sidestep the question of what a significant fraction of the scientists are actually doing. If a large group of people are not being seduced by the “Theory of Everything” grail-shaped
beacon, but instead choosing to work in a mathematically related field with direct ties to experiment, then doesn’t that have incredible importance for the psychological and sociological parts of the
argument? This holds true, I think, even if the QGP calculations never really bear fruit — say, if the whole thing doesn’t give much more precise answers than dimensional analysis.
To steal the Vegas analogy, it’s as if a group of gamblers had decided to play the odds a better way: they put computers in their shoes to predict where roulette balls will fall, or they make side
bets with people around the craps table who have superstitious ideas about lucky numbers. They’re not playing the same game as the tourists, but they’re seeing real money. Can any study of the
gambling world rightfully ignore them?
Conflict of Interest Disclaimer: my only stake in the String Wars is a small one. I took Barton Zwiebach’s String Theory for Undergraduates (8.251) and helped proofread the textbook, when it was only
a stack of LaTeX documents. I worked the exercises in the last few chapters to make sure that an actual undergraduate with no intellectual superpowers had a chance of solving them. In my mind, this
was a good thing to do even if the whole thing goes kaput and the M in M-theory turns out to stand for “mud pie”. After all, we’ll only be able to tell if the ideas are good or not if enough brains
can gather around them. CITOKATE — “criticism is the only known antidote to error”.
Posted by: Blake Stacey on February 26, 2007 3:41 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
I’m all in favor of string theorists using the technology they’ve developed to tackle real-world problems like the study of quark-gluon plasma. I just wanted to make sure Mr. Serg271 here understood
the difference between the supersymmetric quark-gluon plasmas these folks are studying, the actual quark-gluon plasmas folks are creating at Brookhaven, and string theory as a theory of fundamental
I’m not surprised that a bunch of string theorists, starved for contact with experiment, would enjoy working on this stuff. Grail-shaped beacons are all very well and good, but physicists only bring
home the bacon when they predict the results of experiment.
So, the big question is: how much reliable information can we obtain about real-world quark-gluon plasmas from studying their supersymmetric analogues? I’d like to know… but I guess this is very
controversial, since I’ve seen diametrically opposite claims.
I would also enjoy knowing how many string theorists are working on this stuff. Clifford Johnson claims it’s “a huge percentage”. Any idea what percentage that is, or how many people it amounts to?
I’m less interested in grinding some sociological axe than getting some data. I’m sick of the String Wars — I only wrote about these books because I felt some kind of duty to do so. What I really
want to talk about is Schur functors, Littlewood–Richardson rules, cohomology of Grassmannians, and groupoidification! But that’ll be next Week’s Finds.
Posted by: John Baez on February 26, 2007 9:24 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Maybe I should try throwing together a script which browses the arXiv and counts how many different authors have written papers relating to or citing a given publication (in this case, perhaps hep-ph
/0608177). Heck, I could probably get a journal article in social networks out of that.
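For what it is worth, a present-day sketch of such a script could look like the following. It is not Blake’s code, and it leans on the INSPIRE-HEP literature API; the query syntax (“refersto:recid:…”) and the JSON layout assumed below should be checked against the current API documentation.

# Rough sketch: count distinct authors on papers citing a given arXiv
# paper, via the INSPIRE-HEP REST API. The query syntax and response
# layout used here are assumptions; verify against the API docs.
import requests

API = "https://inspirehep.net/api/literature"

def recid(arxiv_id):
    """Look up the INSPIRE record number for an arXiv identifier."""
    r = requests.get(API, params={"q": "arxiv:" + arxiv_id,
                                  "fields": "control_number"})
    r.raise_for_status()
    hits = r.json()["hits"]["hits"]
    return hits[0]["metadata"]["control_number"] if hits else None

def citing_authors(arxiv_id):
    """Collect distinct author names on papers citing the given paper."""
    rid = recid(arxiv_id)
    authors, page = set(), 1
    while rid is not None:
        r = requests.get(API, params={"q": "refersto:recid:%s" % rid,
                                      "fields": "authors.full_name",
                                      "size": 250, "page": page})
        r.raise_for_status()
        hits = r.json()["hits"]["hits"]
        if not hits:
            break
        for h in hits:
            for a in h["metadata"].get("authors", []):
                authors.add(a["full_name"])
        page += 1
    return authors

print(len(citing_authors("hep-ph/0608177")), "distinct citing authors")

The resulting set of names is the raw material for the co-authorship network such a social-networks paper would study.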
Posted by: Blake Stacey on February 26, 2007 9:57 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
What I really want to talk about is Schur functors, Littlewood–Richardson rules, cohomology of Grassmannians, and groupoidification! But that’ll be next Week’s Finds.
Roll on next week!
Posted by: David Corfield on February 27, 2007 7:36 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Posted by: Blake Stacey on February 27, 2007 5:19 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Well that I definitely understand. I also got the idea that “stringy” SUSY QCD is not exactly the same as that of the Standard Model. But I had heard the opinion that the predictions of SUSY QCD seem an “unexpectedly good” fit to experiment. I guess we have to wait for more results from RHIC or LHC. But if this correspondence works, isn’t it an argument in favor of string theory? At least that would mean it’s not self-contradictory and not trivial (in the sense that it potentially can predict something).
Posted by: Serg271 on February 27, 2007 11:18 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
I imagine that the success of SUSY QCD depends on which quantity you are looking at. The susceptibility seems only to be off by a few percent, whereas the beta-function is off by a factor of infinity,
since N=4 SYM remains scale invariant after quantization.
The problem with AdS/CFT in this context is that it is an uncontrolled approximation, AFAIU. That it involves infinitely many colors is not a problem, since you typically can calculate n-color
corrections as a power series in 1/n, and 1/3 is close to 1/infinity.
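(For readers who have not seen it: the expansion in question is ’t Hooft’s. Schematically, connected gauge-invariant quantities of a U(N) gauge theory organize themselves as

$F(N, \lambda) = \sum_{g \geq 0} N^{2 - 2g} f_g(\lambda), \qquad \lambda = g_{\mathrm{YM}}^2 N \;\text{held fixed},$

where $g$ counts the genus of the ’t Hooft double-line diagrams. At $N = 3$ the leading corrections are suppressed by $1/N^2 \approx 1/9$, which is indeed not so far from $1/\infty$.)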
In contrast, a theory with SUSY, especially four SUSIES, is qualitatively different from a theory without SUSY. I have at least never heard of somebody considering N = 4−ε SYM, working out the corrections as a power series in ε, and setting ε = 4 in the end, which one would expect to do if one could turn N=4 SYM into a starting point for a controlled approximation.
Posted by: Thomas Larsson on February 27, 2007 1:34 PM | Permalink | Reply to this
Littlewood-Richardson rules! Re: This Week’s Finds in Mathematical Physics (Week 246)
Littlewood-Richardson rules – the 1934 results on Grassmannians, with the pretty proof, and the new applications such as tableaux and Knutson and Tao’s puzzles being found? Cool!
And wasn’t Grassman a Valley Crosser to the extent that he was consider a lunatic by some hillclimbers?
Posted by: Jonathan Vos Post on March 1, 2007 11:57 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
In TWF 246, John wrote:
Once I drove through Las Vegas, where there really is just one game in town: gambling. I stopped and took a look. I saw the big fancy casinos. I saw the glazed-eyed grannies feeding quarters into
slot machines, hoping to strike it rich someday. It was clear: the odds were stacked against me. But, I didn’t respond by saying “Oh well - it’s the only game in town” and starting to play.
Instead, I left that town.
Earlier on this blog page, he also wrote:
It’s as if Einstein figured out a way to use math from general relativity to solve problems in hydrodynamics. Suppose it turned out that these methods could correctly predict what happens when
you flush your toilet. This would not mean general relativity is correct! To test general relativity you need to look at bending starlight or black holes, not flush toilets.
I laughed out loud when I read these. Arguments about the “String Wars” aside, I feel that these two passages demonstrate John’s amazing ability for a turn of phrase! He gets the Oscar “Lifetime
Achievement Award for Explaining the Complicated in Simple Terms.”
Posted by: anon on February 26, 2007 11:07 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
It is true that N=4 and QCD are quite different. However, AdS/CFT can be deformed to cases which are non-SUSY and/or non-conformal, and many view this as an existence proof that a string theory dual
to QCD is possible. I really yearn for it because it will put to rest a lot of absurd criticisms of string theory. For example:
“Even if it works, it doesn’t mean the universe is made of strings.”
I imagine a conversation like this many years ago.
“Even if this so-called `wave-particle duality’ you propose works, it doesn’t mean the particles in our universe are actually waves. It’s just some new-fangled mathematical mumbo-jumbo.”
Gauge/gravity duality is no different. If you prefer to say the world is “made of gluons”, which in a certain limit behave like coherent states of gravitons, that’s fine. If I prefer to say the world
is “made of gravitons, which move around in some higher dimensional curved space”, but in certain limits behave like coherent states of gluons in 4-dimensional Minkowski space, that’s equally fine.
Each description is useful in different regimes, but none is more “true” than the other.
To put it another way: there are hundreds or thousands of people around the world who are working on QCD. In fact, they are working on string theory; they just don’t know it yet.
Posted by: A String Theorist on February 27, 2007 4:20 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
It is true that N=4 and QCD are quite different. However, AdS/CFT can be deformed to cases which are non-SUSY and or non-conformal, and many view this as an existence proof that a string theory
dual to QCD is possible.
Alas, for this to be useful the putative string dual must be tractable. If you can replace physical QCD with a much more complicated theory, you haven’t gained anything.
“Even if it works, it doesn’t mean the universe is made of strings.”
I imagine a conversation like this many years ago.
“Even if this so-called `wave-particle duality’ you propose works, it doesn’t mean the particles in our universe are actually waves. It’s just some new-fangled mathematical mumbo-jumbo.”
Another conversation from long ago:
“Even if this ether theory that you propose works, it does not mean that electromagnetic waves in our universe are really waves in the ether.”
Anyway, if quantum gravity combines background independence with locality, QJT is the only game in town. This is because only QJT supports the 4D diff anomalies which are necessary to have
correlators depend on separation. As is well known, even infinite conformal symmetry in the strict sense is incompatible with locality. Nontrivial correlators, i.e. a positive anomalous dimension,
require an anomaly, and diffeomorphisms work the same way as conformal transformations.
Posted by: Thomas Larsson on February 27, 2007 6:08 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
“Each description is useful in different regimes, but none is more “true” than the other”
The description of QCD in terms of gauge fields is certainly more “true” than any known description in terms of strings. QCD is a fully non-perturbative theory, unlike string theory. Not only has no
one yet found a version of string theory that accurately approximates QCD at weak (string) coupling, but there is not even a proposal for a non-perturbative string theory that would be fully
equivalent to QCD.
And, being wildly optimistic and assuming you find such a thing, the universe will still not be “made of strings”, since there is the rest of the standard model to take into account. You have to find
a non-perturbative string theory whose strong-coupling limit gives you electroweak gauge fields, spinor fields, the Higgs mechanism, etc. Or else you have to get this out of weakly coupled strings/
branes, an idea which has led to the “landscape” and pretty conclusive failure.
Comparing the current situation of string theory vs. QFT to wave-particle duality doesn’t seem to me to hold water. When people were talking about wave-particle duality they had a specific, testable
and validated model to point to.
Working on finding a string theory dual to QCD is certainly a valid project with some promise. But it doesn’t justify in any way claiming vindication for the project of unifying quantum gravity and
the Standard Model using 10d strings.
Posted by: Peter Woit on February 27, 2007 7:32 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
John quotes Peter Woit quoting Michael Atiyah remarking that:
If we end up with a coherent and consistent unified theory of the universe, involving extremely complicated mathematics, do we believe that this represents “reality”?
If all the rich mathematics springs from a simple principle then, yes, I would be inclined to do so.
Or, better, put the other way around: I would be surprised, then, if all that rich structure had no place in the reality we perceive.
The math used in string theory may be complicated and demanding. But so is that of the 3-body problem in Newtonian mechanics.
The principle from which all this math springs is, however, rather simple: pass from the functional $\gamma \mapsto e^{-m\int_\gamma d s}$ used in quantum field theory on maps $\gamma : [0,1] \to M$ from the interval into some pseudo-Riemannian space
(1)$(M, g)$
to the functional
(2)$\Sigma \mapsto e^{-\frac{T}{2}\int_\Sigma d^2s}$
on maps from 2-dimensional spaces into target space.
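(For orientation: up to conventions, the exponents appearing here are just the classical actions, mass times worldline length in the first case and, in the second, string tension times worldsheet area, the Nambu–Goto action:

$m \int_\gamma d s = m \cdot \mathrm{length}(\gamma), \qquad \frac{T}{2} \int_\Sigma d^2 s \propto T \cdot \mathrm{area}(\Sigma).$)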
Quantizing (and second quantizing) this gives all the rich structure that is called “string theory”.
And all the indeterminacy: what would be more irritating: if the formula (2) completely encoded the mass of the pion, or if it did not?
The remarkable thing is that we can choose target spaces (1) such that (2) knows about anything like pions at all.
Which doesn’t prove anything. But is remarkable.
Posted by: urs on February 26, 2007 10:05 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Perhaps Atiyah would object to his “extremely complicated mathematics” becoming “rich mathematics”. I would guess that he thought it wasn’t rich.
Posted by: David Corfield on February 26, 2007 10:25 AM | Permalink | Reply to this
This Week’s Finds in Mathematical Physics (Week 246)
Perhaps Atiyah would object to his “extremely complicated mathematics” becoming “rich mathematics”. I would guess that he thought it wasn’t rich.
A lot of the math that appeared in string theory was his math: K-theory, index theory.
If string theory turns out to have nothing to do with physics, it will remain a pool of rich mathematics.
The dynamics of string backgrounds is that of Ricci flow (or the other way around).
If you like to put it that way: points in that infamous “string landscape” are fixed points of a generalized Ricci flow.
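(Concretely, to lowest order in $\alpha'$ and ignoring the dilaton: conformal invariance of the worldsheet sigma model requires the vanishing of a beta functional whose leading term is the Ricci tensor of the target metric,

$\mu \frac{\partial}{\partial \mu} g_{i j} = \alpha' R_{i j} + O(\alpha'^2),$

so that, with the flow parameter identified with minus the logarithm of the renormalization scale, the running of the background metric is Hamilton’s Ricci flow $\partial_t g_{i j} = -2 R_{i j}$ up to normalization. Fixed points at this order are Ricci-flat metrics, i.e. consistent string backgrounds.)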
I’d call that “rich mathematics”. Though it is certainly complicated, too.
There are mathematicians working on a mathematical field called topological T-duality who don’t know the first thing about string theory. Their field originates in string theory and is being pursued
as a mathematical entity in its own right.
As you know, the latest in that direction is geom. Langlands. Rich and complicated.
Posted by: urs on February 26, 2007 10:58 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
So Atiyah’s comment is rather odd from your perspective? Necessarily, any simply principled ‘theory of everything’ will require complicated techniques to extract the way our messy universe is. Simple
consequences of simple principles would be too, well, simple?
Posted by: David Corfield on February 26, 2007 11:54 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
So Atiyah’s comment is rather odd from your perspective?
The part of the quote that I have seen, taken by itself – yes.
Necessarily, any simply principled ‘theory of everything’ will require complicated techniques to extract the way our messy universe is.
I would think so.
Simple consequences of simple principles would be too, well, simple?
And rather unlikely to describe a highly non-symmetric world.
Posted by: urs on February 26, 2007 12:14 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Urs and David,
I actually sent Atiyah a draft of the book, in particular to ask him about whether I was accurately reflecting his opinions in the final section where I discussed some of what he had said at the
recent conference in honor of Gelfand. He sent me back some comments and a draft of the writeup for his talk that I ended up quoting in the book.
Atiyah is definitely more of a fan of string/M-theory than I am, and he reminded me that he is a co-author with Witten of a paper on M-theory and has an extremely high opinion of Witten’s judgement
in these matters. I don’t want to put words in his mouth, but I think what he wrote for the Gelfand conference speaks for itself. I believe he sees string/M-theory as a very fruitful source of
mathematical ideas and something that has probably captured some aspect of physical reality, but he’s no fan of the complicated mess that 10/11 dimensions leads one into. The fact that this
complicated mess invokes algebraic geometry of 3-folds, K-theory, index theory, the Ricci flow and all sorts of other sophisticated mathematical technology is not necessarily something he would see
as a positive thing. Atiyah knows those subjects well enough to distinguish between a deep and a superficial use of them, and I think he’s fairly explicit in saying this is not a deep use of
I think his attitude is not fundamentally different than that of David Gross, who continually makes the point that we “don’t know what string theory is”, that the current understanding of string
theory both lacks any deep new symmetry principle and a non-perturbative formulation. Gross hopes that a deeper understanding of string theory will lead to a revision of our ideas about space and
time. This is pretty much the same as what I think Atiyah would like to see, a mathematically deeper and more geometrical insight into what is going on with string theory, one that would do away with
the rather complicated and ugly constructions it currently uses to (unsuccessfully) connect to reality.
And, by the way, that “unsuccessfully” is the point. If these constructions actually led to any accurate predictions of anything about the world, Gross or Atiyah wouldn’t be going on so much about
how important it is to look for something simpler.
Posted by: Peter Woit on February 26, 2007 7:15 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
So, then, it seems that when Atiyah refers to (in the above quote) “extremely complicated mathematics”, he is actually referring to the infamous algorithmic complexity (meaning: it takes us lots of
pages to define) of those backgrounds that are known to produce something close to the standard model.
If that’s what is meant, I would just remark that I would maybe hesitate to call this kind of complexity “complicated mathematics”. It is rather “complicated data”:
some fixed points of your Ricci flow will take you more effort (more parameters) to specify than others. The math governing them is the same in both cases.
Or: some solutions of Newton’s equations may require specifying more data – like for instance that of the solution describing the 9-body problem being our solar system – than others – like that
describing just the earth-moon system.
Once some people were concerned about this algorithmic complexity of the solar system. At least a great mind like Kepler was. Kepler proposed that the distance relationships between the six planets
known at that time could be understood in terms of the five Platonic solids. #.
“Oh how naive!”, we say today. We know that the precise distances and masses in the solar system are a result of various arbitrary coincidences in the details of the history of its formation, and
that there is no reason to expect that the algorithmic complexity of this particular solution of Newton’s equations is less than that given by specifying all these parameters one by one.
But somehow, we are in a similar situation now with respect to the standard model of particle physics as Kepler was back then with respect to the solar system.
We do not know: are the number and masses and couplings of the particles in the standard model a result of various arbitrary coincidences in the details of the history of its formation? Or should we
expect their algorithmic complexity to be much less?
Nobody really can know the answer to that today, for sure.
Posted by: urs on February 26, 2007 7:50 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Urs wrote in small part:
the 9-body problem being our solar system
That’s Sun and 8 planets, eh? Poor Pluto. –_^
Posted by: Toby Bartels on February 27, 2007 12:30 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
the 9-body problem being our solar system
That’s Sun and 8 planets, eh? Poor Pluto. –_^
Yeah. Pluto is out, as is, then, Charon, the entire Kuiper belt, the Oort cloud, etc.
The point in the landscape of solutions to Newton’s equations that we are at is so disgustingly complex that I’ll gladly make some approximations to get anywhere at all.
Posted by: urs on February 27, 2007 8:36 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
(…)it will remain a pool of rich mathematics.
Is mathematics endless? Is it limited by our own mind? Or is it external to our mind? (If these questions make sense)… What could the existence of a richer and richer (or more complicated, for what it’s worth) mathematics tell us about reality itself?
Posted by: Christine Dantas on February 26, 2007 2:36 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
I think the expression “pool of rich mathematics” can only be taken here as a (subjective) statement made by someone relying on his/her past experience, rather than a claim that “math is endless” as you ask (which I think it isn’t).
About your other questions, complicated mathematics may not have anything to do with reality, just like it’s possible to imagine what the life of a bunch of blue elephants who speak seven languages
would be like: pretty complicated but not real. Since maths is always purely imagined (in the sense of not exerting any influence on non-humans, like a cat or a rock), the same non-relevance argument applies.
Posted by: thomas1111 on February 26, 2007 10:44 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
complicated mathematics may not have anything to do with reality, just like it’s possible to imagine what the life of a bunch of blue elephants who speak seven languages would be like: pretty
complicated but not real
No. That is not the line of reasoning that I meant with my question. The fact that we can do complicated mathematics has implications on how our mind works. Why is our intellect mathematically driven
and what clues does this fact give about reality? If mathematics is in principle an ever evolving activity as far as we humans evolve, why does reality allow such a state of affairs? Why is it not
much more constrained? Or is it really constrained (but we still do not know how far)? Then, why?
Posted by: Christine Dantas on February 27, 2007 11:56 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Mathematics tells us about the world precisely because it tells us about the nature of our minds. I wrote about this in an essay in Sica’s book The Language of Science.
Posted by: Scott Carter on March 1, 2007 10:43 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Dear Scott Carter,
Very interesting essay throughout, but I especially appreciate one of the last paragraphs, starting with “mathematical studies are studies of self-realization”, etc.
And, “The truths come from introspection, but they remain objective”, this remains quite mysterious and intriguing to me…
Posted by: Christine Dantas on March 2, 2007 6:28 PM | Permalink | Reply to this
Feynman; Re: This Week’s Finds in Mathematical Physics (Week 246)
Is Math endless? Probably. Is math about the Universe endless? Possible, but less probable, in the following sense.
Richard Feynman, most of the time, exulted in how simple assumptions could lead to robustly complicated behaviors. He loved solving problems on the fly, and, in his famous course “Physics X”, even
liked to solve audience-supplied problems in public. Well, in a classroom of grad students, a few undergrads, and visitors.
Yet once in a while, he entertained an alternative view. He discussed with me more than once (1968 until a few months before his death) the possibility that there are actually an infinite number of
“natural laws”, each expressed by its own equations, with no more fundamental meta-equation. Perhaps, he speculated, some of these infinite number of natural laws only occur at very high energies, or
very weird combinations of parameters, or at different ages of the universe.
He always claimed that there were plenty of people, even at Caltech, who were better at Math than him. He gave some priority to his “physical intuition.” He even told me that he tolerated my relative
weakness in Math because I did have good physical intuition. He was skeptical of String Theory as perhaps pretty Math, but not established to have any connection to Physics as he saw it.
Posted by: Jonathan Vos Post on March 2, 2007 9:25 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
John wrote in TWF in part:
For example, some people have tried to refute the claim that string theory makes no testable predictions by arguing that it predicts the existence of gravity! This is better known as a retrodiction.
This is not even retrodiction! Retrodiction is (in the English Wikipedia’s current words) ‘the act of making a prediction about the past’. Logically, it is still a scientific prediction (that is, a
factual claim whose truth we do not know but which we believe and intend to test). Retrodiction may be used, for example, in archaeology, to predict what the contents of a site will be before it is
Rather, ‘predicting’ gravity is more akin to postdiction, which is (to edit the text of Wikipedia) ‘an effect of hindsight bias that explains claimed predictions of significant events, such as plane
crashes and natural disasters’. This term is used by those sceptical of paranormal phenomena, properly extended here to scepticism of string theory.
Indeed, in fundamental physics that seeks to describe the nature of time itself, the difference between prediction and retrodiction is ill defined, while postdiction is quite different. The
difference between (scientific) pre-/retro-diction and (pseudoscientific) postdiction is the position in the subjective timeline of the one making the -diction.
But, the “only game in town” argument is still flawed.
I certainly hope that nobody is seriously using this phrase as an argument! It was originally the punch line to a sarcastic joke, quoted here from an article about online casinos:
Reminds me of one of the more legendary gamblers of all time named Canada Bill. His gambling immortality does not rest on his gambling prowess, nor his formidable wins or losses. He is remembered
by a single line he once uttered on the Mississippi, a phrase recited by a myriad of gamblers since. Bill was losing his entire bankroll at Faro when a friend approached and said, “Bill, don’t
you know this game is crooked?” “Yes,” answered Canada Bill, “but it’s the only game in town.”
Your Las Vegas parable is just a less sarcastic way of pointing out precisely the same point. Anybody earnestly using this phrase is making a fool of themself (like an anti-immigration politician
citing Robert Frost).
Posted by: Toby Bartels on February 27, 2007 1:25 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Many historians of physics reckoned that general relativity’s accounting for the anomalous precession of Mercury’s perihelion was hugely influential in getting physicists to sign up to it. Now what
kind of ‘-diction’ was involved there? The anomaly had been known for several years.
Philosophers of science usually call it a retrodiction. Fierce debate broke out perhaps twenty years ago about the degree of support to a theory in such a case. Did it matter whether the data had
been used in the construction of the theory, etc. Indeed, Einstein had rejected an earlier theory because it was incompatible with this data.
Some argue that it’s not the temporal order of the devising of a theory and the observation of data that matters, but rather a question of the extent to which to explain the data you need to fix
certain parameters, i.e., that there is a timeless relation of support between evidence and theory.
When you look into the details of a case, things become mighty complicated.
Posted by: David Corfield on February 27, 2007 7:31 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Some argue that it’s not the temporal order of the devising of a theory and the observation of data that matters, but rather a question of the extent to which to explain the data you need to fix
certain parameters, i.e., that there is a timeless relation of support between evidence and theory.
Yes, exactly.
The Schrödinger equation (plus some extra data) still predicts the diameter of the hydrogen atom, doesn’t it?
We don’t teach students that Schrödinger’s equation “retrodicts” or “postdicts” the size of the hydrogen atom, just because the comparison with experiment had been done a long time ago.
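(For concreteness: the scale in question is the Bohr radius, which drops out of the ground-state solution,

$a_0 = \frac{\hbar^2}{m_e e^2} \approx 0.53 \times 10^{-10} \, \mathrm{m}$

in Gaussian units, giving a hydrogen atom roughly one ångström across, a number known experimentally well before 1926.)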
I would not think that the term “predictions of a theory” is usually used in the sense of “gives us a vision of the future” (though that may be a special case), but in the sense of “these facts are
derivable from the axioms of the theory plus given extra data”.
In math we say “axioms” and “implications/theorems/corollaries”. In physics we say “theory” and “predictions”.
The fight about whether string theory pre-, post- or retrodicts something is hence like many of those fights in the String Wars: it is not so much about the theory itself, but about its sociological consequences.
Technically I think it is quite right to say that string theory predicts gravity. But when the discussion is all about whether or not we, as a society, are taking great risks by spending so many
resources on this theory, we may tend to dislike stating this technically correct statement this way, because we may feel that it does not sufficiently amplify the point that this prediction is
rather useless, for our practical purposes.
Posted by: urs on February 27, 2007 11:11 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Of course, one advantage of a future prediction is that without knowing the details of the predicting theory, if the prediction is surprising and it turns out true, then you will be impressed.
On the other hand, in the case of retrodiction, without diving into the details of the theory, as far as you know the theorists might have just tweaked the parameter knobs of their theory to get the
right result.
Posted by: David Corfield on February 27, 2007 11:32 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
How many “knobs” are there which we can tweak to make gravity not emerge? Closed strings obeying special relativity yield graviton states upon quantization. I can change the string length, or I can
ramp up the dilaton expectation value to vary the coupling, but just how hard is it to make the basic form of gravitation go away?
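(Spelled out, for reference: the massless level of the closed bosonic string consists of the states $\alpha_{-1}^\mu \tilde{\alpha}_{-1}^\nu |0; k\rangle$. Their symmetric traceless part is a massless spin-2 field, the graviton $g_{\mu\nu}$; the antisymmetric part is the $B_{\mu\nu}$ field and the trace is the dilaton $\phi$. Tuning $\alpha'$, equivalently the tension $T = 1/(2\pi\alpha')$, rescales the gravitational coupling but cannot remove the massless spin-2 state from the spectrum.)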
Posted by: Blake Stacey on February 27, 2007 5:17 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
I recall a quote from Lisa Randall about this:
“Sure, string theory predicts gravity…. ten-dimensional gravity.”
Posted by: Peter Woit on February 27, 2007 6:34 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Urs wrote:
In math we say “axioms” and “implications/theorems/corollaries”. In physics we say “theory” and “predictions”.
Suppose someone comes up with a theory of physics. We can imagine three things happening:
1. The theory lets us to calculate some quantity whose value we didn’t previously know. We measure the quantity and the theory turns out to be right.
2. The theory lets us calculate some quantity whose value we previously knew, but could not calculate using previous theories.
3. The theory lets us calculate some quantity whose value we previously knew, and could calculate using previous theories.
Very roughly speaking (see the fine print below), we get really excited when a theory does 1. We get a bit excited when a theory does 2. And, we feel the theory isn’t obviously wrong when it does 3.
For example:
General relativity reduces to Newtonian gravity in a suitable limit — that’s an event of type 3. It told Einstein that general relativity isn’t obviously wrong.
General relativity predicted the rate at which Mercury’s orbit precesses — that’s an event of type 2. This got Einstein and other people a bit excited about general relativity.
But also, general relativity predicted how much starlight bends when it goes around the sun — that’s an event of type 1. This got Einstein and other people really excited about general relativity.
This is when Einstein made the front page of the New York Times, with a headline reading:
Lights All Askew In The Heavens
Men Of Science More Or Less Agog Over Results Of Eclipse Observations
Einstein Theory Triumphs
When I spoke of ‘predictions’ in This Week’s Finds, I was referring to events of type 1 and (to a lesser extent) type 2. The ‘prediction’ of gravity by string theory is an event of type 3.
The fight about whether string theory pre- posts- or retrodicts something is hence like many of those fights in the String Wars: it is not so much about the theory itself, but about its
sociological consequences.
I don’t think the difference between events of type 1, 2 and 3 is merely ‘sociological’. I really think a good scientist should — other things being equal — be more excited by events of type 1 than
by events of type 2, and more excited by events of type 2 than events of type 3. There are good reasons for this: the low-numbered events really do have more ‘confirmatory power’, all else being
equal. Consult your local philosopher of science (David) for further discussion of ‘confirmatory power’.
Of course, other things aren’t always equal. We know the masses of elementary particles already, so if someone invents a theory that lets us calculate them all, it will be an event of type 2 — but if
the theory is very nice, we’ll get really excited, because people have tried and failed to do this for so long!
Etcetera: one can imagine all sorts of scenarios where we’re bored stiff by an event of type 1, or fantastically thrilled by an event of type 3.
Nonetheless, I still think there’s something to my point.
If an event of type 1 occurs for string theory, there will be a headline on the front page of the New York Times about it, and an issue of This Week’s Finds specially devoted to it!
Posted by: John Baez on March 1, 2007 2:34 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Etcetera: one can imagine all sorts of scenarios where we’re bored stiff by an event of type 1, or fantastically thrilled by an event of type 3.
Case in point:
I seem to have lent my copy of Intellectual Impostures to a friend, so I’d have to look up the references from scratch, but I recall that Weinberg among others has pointed out that the “type 1”
prediction of GR, the deflection of light during a solar eclipse, was in fact a worse test than the perihelion of Mercury. Once all the error bars are figured in, etc., the eclipse measurement was
more likely to be wrong. So, it’s the type 2 prediction which gives us more reason to perk up our ears.
A true pedant might want to call the perihelion measurement a type 3 prediction, because it could be “predicted” (or at least “explained away”) by a dark matter hypothesis: an intra-Mercurial planet,
which the people of the time called Vulcan. (Insert your own Star Trek joke here.)
Of course, now we have type 1 predictions — gravitational redshifts, to begin with — making the whole shebang more than a little academic.
Posted by: Blake Stacey on March 1, 2007 3:22 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Blake Stacey wrote:
A true pedant might want to call the perihelion measurement a type 3 prediction, because it could be “predicted” (or at least “explained away”) by a dark matter hypothesis: an intra-Mercurial
planet, which the people of the time called Vulcan.
Le Verrier predicted the existence of Neptune in the 1840s, due to anomalies in the motion of Uranus. It was found in 1846. Attempting to repeat his success, in 1859 he predicted the existence of a
new planet to explain the precession of the orbit of Mercury. Between 1859 and 1878 there were some reported sightings of this planet, and it was dubbed ‘Vulcan’. But these sightings were never
reliably confirmed. I’ve heard that by the time Einstein came along, fans of this hypothesis were reduced to positing a gaseous Vulcan to explain the precession of Mercury.
Shades of dark matter indeed!
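(For the record, the number at stake is small but sharp: general relativity adds a perihelion advance of

$\Delta\phi = \frac{6 \pi G M_\odot}{c^2 a (1 - e^2)}$

per orbit, which for Mercury accumulates to about $43''$ per century, precisely the residue that Newtonian perturbations from the known planets could not account for, and that Vulcan was invented to explain.)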
Anyway, the true pedant could argue that almost every really interesting type 2 prediction is really a type 3 prediction, because someone, somewhere, has some nutty theory that already predicts this
number. For example, there are certainly plenty of crackpot numerologists who can ‘explain’ the masses of elementary particles!
So, perhaps type 2 should be defined as:
2. The theory lets us calculate some quantity whose value we previously knew, but could not calculate using previous accepted theories.
But I don’t have the patience to fill all the other loopholes one can dream up, so please don’t point out more!
Posted by: John Baez on March 1, 2007 8:10 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
We can imagine three things happening:
Yes! Certainly. I agree. I wrote essentially the same, necessarily in other words, a few comments above:
Technically I think it is quite right to say that string theory predicts gravity. […] we may tend to dislike stating this technically correct statement this way, because we may feel that it does
not sufficiently amplify the point that this prediction is rather useless, for our practical purposes. #
So we all agree that “predicts/explains gravity” does not imply “get too excited”.
What I was just trying to point out is that the reverse is also false: “don’t be excited about it” does not imply that “string theory does not predict/explain gravity”, which was the statement I was
responding to #.
For me, it is important that I understand what is true and not about the technical aspects of a given theory and know how to distinguish that from assessing what that implies for the relevance of the
theory for our endeavor of understanding the universe.
I think it is a true fact that string theory predicts/explains gravity. You and Peter Woit keep emphasizing of how little use this is, for our practical needs (that’s what I meant by the
“sociological” implication of this fact). And I agree with that!
Posted by: urs on March 1, 2007 10:34 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Having said what I said above, concerning how very non-exciting aspects of string theory are, I would want to add the following:
this is true from a phenomenological standpoint. Everybody who cares about observable physics but not about theoretical structures and the like, should ignore string theory.
On the other hand, personally, I am of that other kind: my main interest is maybe more the structural aspect of theoretical physics than cranking out numbers and comparing notes with the accelerator people.
From that point of view, I do find string theory very exciting indeed. It may be phenomenologically unviable, but I can hardly ignore it when I am interested in structural aspects of quantum theory,
gauge theory and gravity.
I am often puzzled by conversations like
A: “AdS/CFT is considered to provide a non-perturbative definition of quantum gravity on asymptotically $\mathrm{AdS}_5 \times S^5$ spaces.”
B: “Yawn, oh how very boring! That’s not the number of large dimensions we observe, nor the right sign of the cosmological constant. That’s so uninteresting. ”
Posted by: urs on March 1, 2007 10:54 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
From that point of view, I do find string theory very exciting indeed. It may be phenomenologically unviable, but I can hardly ignore it when I am interested in structural aspects of quantum
theory, gauge theory and gravity.
Urs, would you agree with the following two assertions?
1. GR is both background independent and local, in an appropriate sense. In contrast, no (known) formulation of string theory fulfills both desiderata: perturbative ST is not background independent,
and AdS/CFT is not local.
2. Infinite conformal symmetry in the strict sense is not compatible with locality either. To have local observables, i.e. correlators that depend on separation, you need an anomaly.
Being interested in structural aspects of quantum gravity, I find it very exciting to be able to combine background independence and locality.
Posted by: Thomas Larsson on March 1, 2007 3:37 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Urs wrote:
I am often puzzled by conversations like
A: “AdS/CFT is considered to provide a non-perturbative definition of quantum gravity on asymptotically $AdS_5 \times S^5$ spaces.”
B: “Yawn, oh how very boring! That’s not the number of large dimensions we observe, nor the right sign of the cosmological constant. That’s so uninteresting.”
To understand the reaction, you need to imagine not just one person saying statement A, but hundreds of people writing papers about it — and seeking tenured jobs on the basis of these papers. An
unproved conjecture can be fascinating but still become tiresome when enough people write about it. When jobs are at stake, the negative reactions can become even stronger.
As you know (but others reading may not), the unproved conjecture you cite appeared in a 1997 paper by Maldacena. In 1998, this was the second most highly cited paper on the High-Energy Physics
Literature Database — second only to the annual review of particle physics, a widely cited source of particle data. Maldacena’s paper was cited by 456 papers: “a number comparable to the total size
of the string theory community (including wannabees).”
This dramatic reaction to Maldacena’s exciting idea lasted for many more years — it goes on to this day. This naturally caused a strong counter-reaction from people who wondered why so many
physicists (rather than mathematicians) should be getting jobs for working on supersymmetric quantum gravity in a universe with the wrong number of large dimensions and with a cosmological constant
of the wrong sign.
Imagine, for example, that one year the second-best-cited paper in physical chemistry concerned a conjecture about a very beautiful theory in which carbon was a noble gas.
Posted by: John Baez on March 2, 2007 4:04 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
To understand the reaction […]
Yes, I know: it is the “sociological component” of this which is disturbing. (Maybe that word is not the best one: I just mean it is a problem with us, not within the platonic world of ideas that string theory lives in; I hope you see what I mean.)
Just for myself, though, I will keep finding fact X interesting, independent of the number of other people doing so.
And, as you know, for many of the facts that I find interesting, the problem is exactly the opposite as for the one we discussed here: there are annoyingly few other people appreciating them, rather
than annoyingly many. :-)
Posted by: urs on March 2, 2007 11:32 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
And, as you know, for many of the facts that I find interesting, the problem is exactly the opposite as for the one we discussed here: there are annoyingly few other people appreciating them,
rather than annoyingly many. :-)
Amen, brother.
And then the two can go together. Few things are more demoralizing than an underpopular great idea (say, extensions of knot invariants to tangles, cospan extensions…) paired with a vastly more
popular idea (say, Khovanov homology). The upshot is like the counterintuitive effect when two balloons are connected by a tube for air to flow: the smaller balloon contracts even further until none
of the molecules left trying to fill it can get a job anywhere.
Posted by: John Armstrong on March 2, 2007 2:30 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
an underpopular great idea (say, extensions of knot invariants to tangles, cospan extensions…) paired with a vastly more popular idea (say, Khovanov homology)
Do I understand correctly that your concern here is that Khovanov homology is just a very specific example of the more general problem of extensions of knot invariants?
If so, I could add to that example the curious interest for “integrable systems” in certain circles…
Posted by: urs on March 2, 2007 2:43 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Do I understand correctly that your concern here is that Khovanov homology is just a very specific example of the more general problem of extensions of knot invariants?
Partly yes and partly no. Khovanov-style homology theory is one form of categorification for knot invariants, and categorification is one (very important) part of extending knot invariants to
tangles, but I wouldn’t say it’s just a special case. It lies in the intersection of the categorification program and the tangle theory program, and so goes beyond both in certain ways.
There are also “lower” parts of the tangle theory program that can lend insight into the Khovanov program. For instance, there are many tangle extensions of the bracket, and Kh(T) categorifies only
one of them. And though it’s solved some old problems, I do agree with the old knot guard that cry out, “but where’s the topology?” The bracket extensions “on the ground” show all the possible
shadows for categorification, and one of them may show what the topological content of the bracket is.
As it is, though, I know just a handful of people who have expressed interest in extending knot invariants to tangles, and only one person actually working in earnest on the problem. Everyone already knows that Khovanov homology is interesting now that Khovanov crossed that valley, and many people are busy climbing the hill. Trying to convince people to cross another valley is difficult at best.
Posted by: John Armstrong on March 2, 2007 3:18 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
I don’t have the relevant books to hand, but I seem to remember an argument to the effect that scientists were more impressed by GR’s retrodiction of Mercury’s behaviour than they were by the bending of starlight, and that they were right in this, as Mercury tested GR more severely.
The argument was conducted in terms of the number of parameters tested. For the sake of argument, let’s simplify the situation to a theory whose constants are fixed by requiring that it line up with an old theory in a limit. Then we may imagine that, as we look at the expansion as a Taylor series about the classical solution, there is a string of predicted coefficients. Now to the point: it may happen that a piece of retrodiction puts more of these coefficients to the test than does a prediction, in which case the retrodiction was a more severe test and has greater confirmatory power.
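To put a toy version of the point in symbols (a gloss of mine, not from the books in question): suppose the new theory predicts an observable as

$O(\epsilon) = O_0 + c_1 \epsilon + c_2 \epsilon^2 + \cdots$

with $O_0$ fixed by agreement with the old theory in the limit $\epsilon \to 0$. An observation sensitive only to $c_1$ tests one predicted coefficient, while a retrodiction sensitive to both $c_1$ and $c_2$ tests two; on this parameter-counting view the latter is the more severe test.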
Of course, if you’re viewing all this from the outside and you see a new phenomenon - like light-bending - predicted and found, you’ll probably be more impressed than by an old phenomenon - like
Mercury - explained. Without knowing the details of parameter-fixing this is reasonable.
Internally things are more subtle than I’ve allowed so far. You’d want to have an idea of how the selection of a particular theory from a family is done. Was it a ‘natural’ family, whose chosen member is picked out simply, say by agreement in the limit with an old theory? How much support do the principles guiding the choice of that family have? The fear is that the family has been engineered so that what looks like a simple choice has been made to agree with the old theory.
A final note, those of you who have been reading my posts and/or cake talk on statistical learning theory will know that learning is not just about the number of parameters.
Posted by: David Corfield on March 1, 2007 1:58 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
John suggests 3 things that may increase our confidence in a scientific theory:
1. The theory lets us calculate some quantity whose value we didn’t previously know. We measure the quantity and the theory turns out to be right.
2. The theory lets us calculate some quantity whose value we previously knew, but could not calculate using previous theories.
3. The theory lets us calculate some quantity whose value we previously knew, and could calculate using previous theories.
Item 1. (and only item 1.) is what I understand by the word ‘prediction’. If the quantity predicted and measured had been in some way determined earlier (but unknown to the proponents of the theory, for whatever reason), then you might say ‘retrodiction’ instead, but this is not philosophically significant.
I would call item 2. ‘explanation’, rather than ‘prediction’. Item 3. may be an explanation as well, if there is some superiority of the new theory over the old: initially, if it is simpler (or
background independent, or otherwise preferable, possibly controversially so); later on, if we come to believe (probably through separate evidence of type 1.) that the old theory is simply wrong and
the new theory is (more) correct.
Well, this is how I understand the words ‘prediction’ and ‘explanation’, but I won’t fight for them. Now that John has made this list, it’s probably better just to use the numbers 1, 2, 3, since they
get at the heart of the matter, rather than arguing over the meaning of common words.
And here is one point at the heart of the matter: No matter how many successes a theory has of type 3., and even of type 2., in science we require confirmation of type 1. as well. If necessary, we
force events of type 1. to occur; that is, even if we have already accounted for all observations, we perform experiments to create new observable phenomena. Without experimentation (or some other
source of continually new observations), we do not have falsifiability (to use Popper’s term), at least not in practice. Then we risk degeneration into ‘Greek science’ (if I may be so boldly
insulting to Aristoteles and company), flying into the clouds of theory without the ground of observation. (And this problem may occur through no fault of the theory or its proponents, if we simply
lack the technological ability to perform experiments! Therein lies the tragedy of this endeavour.)
Posted by: Toby Bartels on March 2, 2007 10:38 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Well, this is how I understand the words ‘prediction’ and ‘explanation’, but I won’t fight for them. Now that John has made this list, it’s probably better just to use the numbers 1,2,3., since
they get at the heart of the matter, rather than arguing over the meaning of common words.
Agreed! What’s more, by speaking of 1-dictions, 2-dictions and 3-dictions, we can generalize to $n$-dictions, where the “worth” of the diction falls with increasing $n$. This gives a numerical gloss
to the qualitative graph shown on page 5 of this Alan Sokal paper.
Creationism, for example, is an $\omega$-diction.
Posted by: Blake Stacey on March 3, 2007 8:51 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
David Corfield wrote:
Many historians of physics reckoned that general relativity’s accounting for the anomalous precession of Mercury’s perihelion was hugely influential in getting physicists to sign up to it. Now
what kind of ‘-diction’ was involved there? The anomaly had been known for several years.
You already said yourself what it was: an explanation. That’s valuable, but that’s all; it’s intellectually dishonest to try to turn this into a prediction. (And GR had its own important prediction
at the time: the bending of light by massive objects).
Philosophers of science usually call it a retrodiction.
Do they? This doesn’t fit my own understanding of the word’s meaning (which is well represented by the definition that I quoted from Wikipedia). Perhaps people do not use the word consistently?
Did it matter whether the data had been used in the construction of the theory, etc. Indeed, Einstein had rejected an earlier theory because it was incompatible with this data.
In my opinion, this is exactly what is important. If Einstein hadn’t known about Mercury, then this should have counted as a prediction (or retrodiction, but as I said the distinction is vague and
unimportant) for him, increasing his own confidence in the theory. As it was, Einstein was still rightly confident, as only his theory provided an explanation, but not as confident as a confirmed
prediction should make him. In any case, astronomers at large would count this only as an explanation, not a prediction. But if they too had not known about Mercury’s anomaly (say, if they were just
beginning to get such precise measurements), then they too would count it as a prediction, and it would have been a more stunning success.
String theorists may argue that string theory explains gravity. (I don’t think so, since the explanation is more complicated and less clear than Einstein’s theory, which is to be explained. But at
least it is consistent with quantum physics, so I can certainly accept a difference of opinion here.) But unless they’re using words differently from the way that I understand them, it certainly does
not predict (or retrodict, whatever) gravity.
Science is more than merely a search for explanations of known facts. It relies also on tests against new facts. This is where prediction comes in.
Posted by: Toby Bartels on February 28, 2007 4:21 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
String theorists may argue that string theory explains gravity. (I don’t think so, since the explanation is more complicated and less clear than Einstein’s theory, which is to be explained.
At the classical level, which is what you seem to have in mind, Einstein says: extremize the Einstein-Hilbert action.
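For definiteness, that is the functional (in its standard textbook form, units with $c = 1$)

$S_{EH}[g] = \frac{1}{16 \pi G} \int d^4 x \, \sqrt{-g} \, R$

whose critical points are the solutions of the vacuum Einstein equations.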
String theory would offer an explanation for why this is the action functional to be extremized: because this is what makes spacetime a target for conformal 2-dimensional theories.
That, in turn, would happen to xxxdict or explain the presence of a perturbative quantization of the Einstein-Hilbert action.
It might be useful to look at other examples of this kind of explanation of one theory by another.
Alain Connes has another theory which offers an explanation for why Einstein found himself extremizing the Einstein-Hilbert functional, instead of some other functional: he postulates that the action functional in question should be a functional of target space together with a certain Dirac operator on it, satisfying a natural compatibility condition. He calls this principle the “spectral action principle”.
And it predicts/explains gravity: feed in a Dirac operator and out comes the Einstein-Hilbert action functional, together with couplings to other fields.
What Connes’s principle so far does not predict is the existence or nature of a quantum version of this.
Maybe he just has to realize his spectral triple as a limit of a suitable CFT, though…
Anyway, I very much agree with what you say about the usage of “predicts”, “explains”, etc. as far as the everyday understanding of these terms is concerned, and their implications for the gullibility of those who have to distribute the money. But for technical reasons I doubt that you would want to consistently search for alternatives to “predict” when you want to refer to the implications some physics theory has.
Well, I’d be content with consistently using any other reasonable term to express the technical idea “follows from the axioms”. “Explains” would be fine with me.
Posted by: urs on February 28, 2007 4:47 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
“Explains” would be fine with me.
My choice, in chapter 6 of my book, was “accounts for”. “Explains” comes with quite a lot of philosophical baggage from the heyday of logical empiricism.
Posted by: David Corfield on March 1, 2007 2:22 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
My choice, in chapter 6 of my book, was “accounts for”.
Okay, that’s another good possibility.
If all of physics had been made very precise, a physical theory would be a collection of definitions and axioms, and the only reasonable term for what the theory does to statements it explains,
predicts or accounts for would be: “implies”.
As Toby points out, many physics theories are much more vague than our standard mathematical axiom systems. But, on the other hand, hypotheses and implications of these certainly have their place
also outside of rigorous mathematics, where they may be subject to a certain degree of doubt, but still useful and relevant.
In fact, let me say: the assumption that a surface weighted by its proper volume has a consistent quantization implies that its target space is a solution to Einstein’s equations (yes, in 26 dimensions and with a bunch of extra fields coupled to it).
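Schematically, for readers who want the computation behind this statement (a standard textbook fact, not spelled out above): Weyl invariance of the quantized worldsheet theory requires the beta functionals of the target-space couplings to vanish, and at lowest order in $\alpha'$ the condition on the target metric reads

$\beta^G_{\mu\nu} = \alpha' R_{\mu\nu} + 2 \alpha' \nabla_\mu \nabla_\nu \Phi - \frac{\alpha'}{4} H_{\mu\lambda\kappa} H_\nu{}^{\lambda\kappa} + O(\alpha'^2) = 0$

which is Einstein’s equation with the dilaton $\Phi$ and the $B$-field strength $H$ as sources.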
Posted by: urs on March 1, 2007 2:45 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Urs wrote in part:
I’d be content with consistently using any other reasonable term to express the technical idea “follows from the axioms”. “Explains” would be fine with me.
Perhaps I’m just not properly aware of how words like ‘prediction’ are technically used in science (or rather, the philosophy thereof). But I certainly would prefer ‘explains’ to ‘predicts’ here.
Actually, I don’t intend anything so formal or mathematical by ‘explain’, so a theory can explain a fact without being precise enough that following from axioms is an applicable concept. On the other
hand, a complicated theory doesn’t really ‘explain’ a simple fact, even when the fact clearly does follow in as strict a sense as you may desire, so there is something akin to algorithmic complexity
(as you suggested) involved as well.
In any case, these are sociological or philosophical terms rather than strictly scientific ones.
Posted by: Toby Bartels on February 28, 2007 7:06 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
On the other hand, a complicated theory doesn’t really ‘explain’ a simple fact
I take it you are arguing that string theory does not even “explain” gravity (let alone predict it) due to it being more complicated than Einstein gravity. Is that what you have in mind?
This is curiously opposite to how I perceive the situation. String theory may be all wrong and everything, but it sure derives gravity (and a little more, maybe a little too much ;-) from a very
simple premise:
The action of the plain relativistic particle (in Nambu-Goto form) is the proper worldvolume of its worldline. Quantizing this gives the Klein-Gordon equation (the relativistic Schrödinger equation).
Second-quantizing this gives free field theory.
The single premise of perturbative string theory is to replace the 1-dimensional worldline of the Klein-Gordon particle by a 2-dimensional surface, while naturally keeping the action to be given by
the proper volume.
That’s it. Nothing more. Nothing more natural than this generalization.
Quantize this, and you run into a very rich structure, including gravity.
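In formulas, for readers following along (the standard Nambu-Goto actions, nothing beyond what is said above):

$S_{\text{particle}} = -m \int d\tau \, \sqrt{-\dot{X}^\mu \dot{X}_\mu} \;\;\longrightarrow\;\; S_{\text{string}} = -T \int d^2\sigma \, \sqrt{-\det\big(\partial_a X^\mu \, \partial_b X_\mu\big)}$

The first is the proper length of the worldline, the second the proper area of the worldsheet; the only thing that changes is the dimension of the parameter space.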
Posted by: urs on March 1, 2007 1:23 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
The single premise of perturbative string theory is to replace the 1-dimensional worldline of the Klein-Gordon particle by a 2-dimensional surface, while naturally keeping the action to be given
by the proper volume.
That’s it. Nothing more. Nothing more natural than this generalization.
Quantize this, and you run into a very rich structure, including gravity.
“Nothing more”?
If you just “quantize this” you have a problem on your hands, with a tricky symmetry to handle. When you work hard and figure out how to handle it, you have a 26-dimensional theory with a tachyon and no stable vacuum state.
Now if you figure out how to instead construct a superstring (which requires more new ideas than “nothing”), you’ll have a ten-dimensional theory. Sorry, 10d GR is not the gravity we know and love that Einstein figured out for us.
Getting successfully from 10d down to 4d is not exactly “nothing”.
Again, string theorists who claim that superstring theory predicts gravity should make clear when they say this that they are talking about 10d gravity.
Posted by: Peter Woit on March 1, 2007 2:03 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Urs wrote:
I take it you are arguing that string theory does not even “explain” gravity (let alone predict it) due to it being more complicated than Einstein gravity. Is that what you have in mind?
Yes, this is what I have in mind. But a couple of caveats:
First, my opinion that string theory doesn’t really explain gravity is not a strong opinion. (Remember, it was just a parenthetical comment originally, within a philosophical discussion of the
difference between explanation and prediction.) I think that Peter Woit makes good points (such as those in the comment immediately above this one), but really you should discuss it with him, since I
know much less about the matter than you two do.
Second, to return to the philosophical discussion, I think that it’s quite possible for a theory to predict something that it does not explain! Well, maybe not at the same time, but the relevant
chronologies are different. To take a somewhat artificial example, consider this word problem from the textbook for the Algebra course that I’m teaching these days:
Find the annual interest rate on a savings account that earns $110 in 1 year on a principal of $1000.
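(Spelling out the arithmetic, on the presumption that simple interest $I = P r t$ is what the textbook intends: with $I = 110$, $P = 1000$ and $t = 1$,

$r = \frac{I}{P t} = \frac{110}{1000} = 0.11 = 11\%$

as quoted below.)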
According to the theory embodied in the techniques taught in the textbook, the interest rate is 11%. Of course, the interest rate has already been set, but since we don’t know what it is before we
work the problem, this counts as a bona fide (and scientific) retrodiction, which is philosophically just as good as a prediction. But is it really fair to say that these techniques (even together with
the data in the problem) explain the interest rate? They do explain why 11% is the correct answer to the word problem, but if this problem describes a situation in the real world, then we should look
to banking policies and economic forces, rather than anything involving the specific number 110, to explain the interest rate. In other words, we eventually want a deeper understanding for the
explanation. Nevertheless, to test whether algebra works (or whether the data are correct), the prediction (if confirmed) stands as a valid success.
Posted by: Toby Bartels on March 2, 2007 8:01 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Toby said:
I think that it’s quite possible for a theory to predict something that it does not explain!
Of course. The theory ‘It always rains on a Thursday’ predicts that it will rain next Thursday, but it doesn’t explain it.
The reason I opted for ‘accounts for’ rather than ‘explains’ is that there is an enormous literature on scientific explanation, e.g., Wesley Salmon’s Four Decades of Scientific Explanation.
Here’s a good place to start you off. Which way does a helium-filled balloon tilt when it is held by a string in the hand of a passenger during takeoff?
You’ll all no doubt correctly say ‘forwards’. Now why?
1) Because of the pressure differential created while the plane accelerates.
2) As being accelerated is equivalent to experiencing a gravitational force, the horizontal situation in the plane is equivalent to the vertical situation while lying on your back in a room. In the
latter case, the balloon rises.
Are these both explanations?
Posted by: David Corfield on March 5, 2007 1:03 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
David asked at last:
Are these both explanations?
Well … being blissfully ignorant of the voluminous literature of explanation, I may end up being quite naive, but as long as you don’t mind that …
I’d say that (1) is an explanation to somebody who already understands about air pressure and so forth, while (2) is an explanation to somebody who already understands about the principle of
equivalence (and also understands, somehow or another, that helium balloons normally rise). This is because the explanation is supposed to make the description simpler, so we need to already
understand the background material.
It is (background material) + (application to explain this situation) that is simpler than (background material) + (merely observed or claimed phenomenon); if we had to explain all of the background
material as well, that would end up being more complicated than (observed or claimed phenomenon alone). On the other hand, (theoretical material) + $\sum_i$ (application of this theory to phenomenon $i$) is liable to be simpler than $\sum_i$ (independent observed or claimed phenomenon $i$), so you doubtless want to explain the general theory (the existence of air pressure, the principle of equivalence,
etc) eventually, even to somebody that doesn’t already know that stuff.
There’s also an interesting relationship between (1) and (2). Assuming that somebody already understands the air-pressure explanation for why helium balloons rise in ordinary life, we can now apply
the principle of equivalence in a very direct way to that explanation to produce the air-pressure explanation for the motion of helium balloons in accelerating cabins. That is, (principle of
equivalence) $\circ$ (air-pressure explanation why balloons rise) = (air-pressure explanation why balloons move forward in accelerating cabins), where $\circ$ is function application in some sense.
(I’ve thought about this idea —applying a general principle to a concrete explanation to produce another concrete explanation— before in the context of proof theory. It works pretty well in the proof
theory of constructive mathematics; this is the basis for the idea that a constructive proof yields a computational algorithm, an idea that has been implemented, for example, to turn proofs in Coq
into programs in Haskell. It should work here too, except that the concepts are all less precise.)
So do the explanation-theorists discuss these sorts of ideas?
Posted by: Toby Bartels on March 5, 2007 2:48 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
So do the explanation-theorists discuss these sorts of ideas?
Well, I’ve never heard of the idea you raise in the last two paragraphs. It’s an intriguing thought.
As for what they do talk about, the two major currents in explanation theory are
(1) Subsumption of facts under general laws.
(2) Derivation of facts in terms of causal mechanisms.
My example was given with these in mind. As for (1), while initially the hope was that only the syntactic form of a statement is relevant to its lawlikeness, this proved not to work. There’s
something more to a law than its being a true general statement. (I touch on this in my natural kinds paper.)
Some responded to this by trying to cash out explanation in terms of the derivation of disparate facts under a unified theory. There’s a question here of whether one has gone beyond what you might
call ‘descriptive economy’ of law to the detection of (real) ‘natural kinds’.
Those who follow (2) also want to go beyond the positivists by talking about causal mechanisms.
But what you write points to a third strand - the pragmatic dimension. That an explanation is an answer to a why question. That a why question presupposes a contrast, comparing a description of what
happened with what did not happen. And that the giving of a satisfactory explanation depends on the state of knowledge of the receiver.
Posted by: David Corfield on March 5, 2007 4:23 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Pet peeve of mine, but I wish people would not use the word “quantization” to describe a process which results in a classical (field) theory. The words “relativistic Schrödinger equation” have confused generations of graduate students as well…
Posted by: Moshe on March 1, 2007 7:41 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Not sure why you say so, since you seem to be referring exactly to the issue of first vs. second quantization.
We are quantizing the Nambu-Goto and/or Polyakov action (either in their 1-dimensional incarnation (relativistic point particle) or their 2-dimensional incarnation (bosonic string)) to find a quantum
theory on a 1- or 2-dimensional parameter space.
The “relativistic Schrödinger equation” here is the 1-dimensional counterpart of $L_n|\psi\rangle = 0$ for the string, and that’s quantization.
Of course, we may alternatively regard $\partial^2 |\psi\rangle = 0$ as a classical equation and quantize again. That leads to a field theory on the former target space.
Analogously, we may, alternatively, regard $L_n |\psi\rangle = 0$ as a classical equation and quantize again – that leads to string field theory on the former target space.
Posted by: urs on March 1, 2007 7:54 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Well, the Klein-Gordon equation, even if you write it as the Klein-Gordon operator acting on a ket, is still an equation for a classical field, not for a wave function. For a start, that field is real, and its square is not the probability of anything. The Planck constant is zero throughout.
The only similarity is that both the Klein-Gordon and Schrödinger equations are differential equations, but physically they have nothing at all in common. Similarly, first quantization of the string yields classical string theory; there is no place for $\hbar$ anywhere.
Physically, there is really only one “quantization”, where you introduce wave functions and probabilities; all this talk about first or second quantization and whatnot is just a good way of getting confused.
Posted by: Moshe on March 1, 2007 10:49 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Moshe wrote:
Well, the Klein Gordon equation, even if you write it as the Klein Gordon operator acting on a ket, is still an equation for a classical field, not for a wave function.
The space of normalizable real solutions of the Klein–Gordon equation can be made into a complex Hilbert space in an essentially unique Poincaré-invariant way.
In fact, this complex Hilbert space is an irreducible unitary representation of the Poincaré group!
Such representations were classified by Wigner, who showed they correspond to various sorts of particles. The space of real solutions of the Klein–Gordon equation is the Hilbert space of a massive
spin-0 particle.
One can then form the Fock space on this Hilbert space, which is the Hilbert space for arbitrary collections of identical massive spin-0 particles. Equivalently, it’s the Hilbert space for a massive
spin-0 free quantum field.
It may seem odd that the space of real solutions of some equation is naturally a complex Hilbert space. But, there are lots of ways to see this. One is to show that the space of normalizable real
solutions of the Klein–Gordon equation is isomorphic to $L^2(\mathbb{R}^3)$ — the space of complex wavefunctions on space. Under this isomorphism, time evolution is given by the Hamiltonian that’s
the natural relativistic generalization of the Hamiltonian for Schrödinger’s equation:
$H = \sqrt{-\nabla^2 + m^2}$
in units where $\hbar = c = 1$.
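As a quick numerical illustration (a sketch of my own, not part of the argument above; the grid size and packet parameters are arbitrary), one can evolve a wavepacket under this Hamiltonian by diagonalizing it in momentum space with an FFT:

    import numpy as np

    # Free relativistic spin-0 evolution in one dimension, hbar = c = 1.
    # H is diagonal in momentum space: E(k) = sqrt(k^2 + m^2).
    N, L, m, t = 1024, 100.0, 1.0, 5.0
    x = np.linspace(-L/2, L/2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)

    # Normalized Gaussian packet with mean momentum about 2.
    psi0 = np.exp(-x**2 + 2j * x)
    psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * (L / N))

    E = np.sqrt(k**2 + m**2)          # relativistic dispersion relation
    psi_t = np.fft.ifft(np.exp(-1j * E * t) * np.fft.fft(psi0))

    # Unitarity check: the norm should still be 1.
    print(np.sum(np.abs(psi_t)**2) * (L / N))

The point of the exercise: evolution by $e^{-i H t}$ with this square-root Hamiltonian is perfectly well defined and unitary, even though $H$ is a nonlocal pseudodifferential operator in position space.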
This stuff is ‘well-known’… but often neglected in textbooks! I explained it in more detail here.
Posted by: John Baez on March 2, 2007 5:47 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
John, thanks! I have not thought about it this way. The way I present this when talking about first “quantization” (and I do use the scare quotes) is through the heat kernel, or Schwinger proper-time, method of solving linear differential equations.
In this method your differential equation is replaced by some heat (diffusion) equation for the kernel, or, if you are not fussy about factors of $i\hbar$, by a Schrödinger equation. The time there is some auxiliary parameter, which you can visualize, if you want, as a parameter along the worldline of some particle (though personally I think it should inherently be thought of more as Euclidean time).
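Concretely, the identity behind the method (standard, added here only for readers who haven’t seen it): for a positive operator such as $A = -\partial^2 + m^2$ in Euclidean signature,

$\frac{1}{A} = \int_0^\infty ds \, e^{-s A}$

and the kernel of $e^{-s A}$ satisfies a heat equation in the proper time $s$, formally a Schrödinger equation in imaginary time.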
So, after that we have all the machinery of Hilbert spaces, operators, Hamiltonians and path integrals and all that good functional analysis stuff. Since we physicists only see that machinery in
quantum mechanics class, we conclude immediately (and erroneously) that we quantized something. I just find that this language leads to confusions regarding physical interpretation. Maybe it is better not to use “quantize” or “wavefunction” etc. when the objects we discuss are somewhat similar mathematically but have totally different interpretations.
(I said above it was a peeve, apologies…)
Posted by: Moshe on March 2, 2007 6:14 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
we conclude immediately (and erroneously) we quantized something
I’d conclude I quantized something when I did go through the quantization procedure.
We find $\partial^2\psi = 0$ by starting with the action of the free particle and either doing the path integral or, in fact, the canonical quantization.
similarly first quantization of the string yields classical string theory
I guess you’d get a negative reaction to this statement if you told that to somebody working on the conformal field theory on the string’s worldsheet.
Of course I know (well, at least I think so, you will please correct me if not) what you have in mind: the effective theory on the target space of the string (its string field theory) is still
classical if we only have the worldsheet theory quantized.
But that’s exactly what the step from “first” to “second” quantization is all about.
On the other hand, I’d perfectly agree that the terminology “second quantization” is not the best one.
Remarkably, this is all at the very heart of the entire program of perturbative string theory, which says:
define a quantum theory on target space by specifying its perturbative expansion as a sum over diagrams which are themselves weighted by numbers obtained from a(nother) (nonperturbatively defined) quantum theory in 2 dimensions.
Posted by: urs on March 2, 2007 11:53 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Yeah, OK, this is all about semantics anyhow. I am fine with referring to the familiar mechanical procedure as “quantization”, whether or not it involves actual quantizing, as long as one is careful
about the precise interpretation of what is going on. Perhaps just a warning that “quantization” does not always entail quantum mechanics (you know, probabilities, interference, Bell inequalities) is in order.
Posted by: Moshe on March 2, 2007 4:20 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Oh, one more thing. When arriving at a worldline through the heat kernel method, the Hamiltonian would be simply the Klein-Gordon operator. In fact one gets a gauge-fixed version of the generally covariant worldline action, with $e$ (the einbein) already set to 1. The Hamiltonian you mentioned is obtained from the same generally covariant action upon a different gauge choice, where the worldline time is chosen as the spacetime time. In other words the two choices are related by worldline reparametrization.
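For concreteness, the generally covariant action in question is the standard einbein form (up to sign conventions; not spelled out above):

$S = \frac{1}{2} \int d\tau \left( e^{-1} \dot{X}^\mu \dot{X}_\mu - e \, m^2 \right)$

Varying $e$ and substituting back recovers the Nambu-Goto form $-m \int d\tau \sqrt{-\dot{X}^2}$, while fixing $e = 1$ leaves the Klein-Gordon operator as the worldline Hamiltonian constraint.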
Not sure if this is too opaque, getting late here…
Posted by: Moshe on March 2, 2007 6:25 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
In your comments on the two books about the crisis in high energy theory, you consider the situation as if the community of theorists were independent of the outside world. I think it may be useful to take into account that the crisis may be part of a general trend. In this trend, the average mark scientists get from society has dropped (20 years ago it was easier to boast that you are a scientist).
It happened for reasons independent of the performance of science. So, a substantial fraction of theorists have been struggling to maintain the high standing of science in the public eye. And it turned out that the ground was too hard at that time: if we speak of successful computations for high-energy particles, not much has changed in the last 20 years. So, people were desperate to invent something flashy. String theory was flashy in 1983 and it is flashy now, while the success of the ‘modest’ standard model combined with perturbation theory remains the only undeniable achievement of particle theory. One can draw a moral from this story: science is not an easy living, and one taking it up as one’s occupation should probably not count on becoming a successful middle-class person.
Posted by: gbpivo on February 27, 2007 9:27 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Hi John,
a very interesting post. Your analogy with Vegas is in a certain regard quite ironic. I was at the String Pheno workshop at KITP last year, and of course there was a lot of discussion about the landscape etc., and the idea that our world might only be one in a huge multiverse. In one of the sessions the comment was made (sorry, can’t recall by whom) that if someone were to find the point in the landscape that reproduces the SM, then we could just work with it and forget the rest; 20 years ago nobody would have been bothered by ‘the rest’ if we only had a theory that worked.
Being pragmatic myself, I am kind of sympathetic to this opinion. If we had a working model with all the promised blessings of string theory, then let’s just use it, no?
The problem is, to come back to the Vegas analogy, that I can imagine generations of grad students playing the landscape lottery, trying to get lucky and find one that is ‘just right’! And that’s definitely not where I want theoretical physics to go.
Another thing that I’d like to add is that the argument with the valley climbers and mountain seers (or whatever) is a nice one, but one shouldn’t forget that, as with everything in life, it is a matter of
balance. Besides the seers we still need the craftsmen, and the valley crossers need the mountain climbers - especially in a global community that is as tightly connected as ours. It is totally
silly, and equally irresponsible, to keep on doing science in the 21st century without taking into account how much the world, and our community with it, has changed.
PS: The more insightful post about AdS/CFT on our blog is here.
Posted by: Bee on February 27, 2007 7:49 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Besides the seers we still need the craftsmen, and the valley crossers need the mountain climbers
And, evidently, all of them need a (suitable) job. Is there room for them all?
Posted by: Christine Dantas on February 28, 2007 1:57 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Hi Christine,
There is plenty of room for all of them. If there’s only one point in which I agree with Lee 110% then it’s that theoretical physicists aren’t expensive. Compare that to the money that goes into
experimental physics, or worse - military applications. Sometimes I want to scream at the funding agencies to just add .1% to the total and give it to me, gee, how many people could be hired with
that?! It’s so totally stupid that positions are so rare, because it’s totally unnecessary. The problem is (I think) that most positions are still at universities and tied to teaching, for which there is limited demand. What we really need are more pure-research positions. But these are currently mostly provided by private institutions. Best,
Posted by: Bee on February 28, 2007 2:34 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Hi Bee,
Yes, of course. In my comment, “suitable” means exactly what you say – “pure-research positions”. And when I ask whether there would be “room for them all”, it is more of an ironic question, given that people want to put the money elsewhere (even though it’s far from being an “astronomically” high investment). Indeed, it wouldn’t cost too much to change the situation considerably.
Best wishes,
Posted by: Christine Dantas on March 1, 2007 11:11 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Hi, I remember John Baez from old arguments on the sci.physics newsgroups when he defended physics against all sorts of crackpot theorists, some quite nasty as I recall. His was, and I’m sure still
is, a very welcome voice of both reason, knowledge and authority. (sic) In that order! So it’s nice to see you have a blog, John and I’m glad I found it.
It’s a long time since I posted anything on physics (I’m not a physicist), but now I’m reading Lee Smolin’s book and can’t help myself. The first part is great, one of the clearest explanations of certain fundamental issues I’ve ever read. But then, when he starts talking about alternatives to the string theories he’s so neatly skewered, he starts sounding like part of the problem he’s trying to solve.
It seems to me that the sort of fundamental issues Smolin is addressing have definitely reached a crisis point – and it is certainly a crisis for physics, because it seems to me that physicists are no longer really prepared to address such issues. It’s as though the current faith in math as the cure-all for everything scientific is starting to remind us of the old faith in, say, perfect
circles as the ideal path for planetary orbits to take.
For example, take the question Smolin raises concerning certain basic constants, such as the particle masses. At a certain point I kept hearing myself say, “evolution, evolution, evolution” and then
lo and behold HE starts talking evolution – but in a really weird way. Suddenly instead of simply evolution itself, which strikes me as being by far the most likely answer, he’s talking about some
“multiverse” that as far as I can see is every bit as dubious and unfalsifiable as any string theory. If I were a physicist I suppose it would look natural to me, but I’m not, so it doesn’t; it looks just very weird. As weird as all those strings.
But if you want an explanation of all those masses, then why not consider just plain old evolution without all the fancy stuff about multiverses? Does EVERYTHING in physics have to be part of some
really neat, “beautiful” scheme where it all fits into neat little channels like Kepler’s orbits based on the Platonic solids?
I think those masses are the way they are because the universe evolved (THE universe, not some wacky multiverse) – and just as in biological evolution certain very unlikely things just happened and
then for one reason or another propagated themselves. I read recently that the only reason animals have mouths is because some ONE organism a few billion years ago experienced a mutation that
produced something that eventually became a mouth. And if that organism had not experienced that particular mutation and its progeny had not survived, then we would not have mouths today, but
something else. I think those mysterious constants are most likely derived from exactly that sort of random event. I have a feeling that’s what Smolin thinks also, but for some reason he felt he
needed to bring other stuff into it and the most interesting and significant and in fact revolutionary aspect got lost in the mathematical fog.
Waddya think, John?
Posted by: Victor Grauer on February 28, 2007 5:02 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
I think those masses are the way they are because the universe evolved (THE universe, not some wacky multiverse) – and just as in biological evolution certain very unlikely things just happened
and then for one reason or another propagated themselves.
I very much agree with this:
We are obliged to carefully analyze the collections of constants that go into the standard model of particle physics in order not to miss a hidden fundamental structure it may have, which may reduce
its algorithmic complexity and point us to a theory going beyond it.
We may even hope that this is the case.
But we have no good reason to be surprised if no such structure exists.
Not any more than Kepler had good reason to be surprised to find out that the constants of the solar system do not follow the laws of Platonic solids.
In general, in physics: expect your laws to be beautiful but the data to be messy.
But, of course, be prepared for surprises…
Posted by: urs on February 28, 2007 10:58 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
The idea of evolution is a great distillation of the concept which Urs mentions: beautiful laws and messy data. As Dawkins put it, life develops through the non-random survival of randomly varying
replicators. The complexity of a living thing is the reflection of the environments in which its ancestors lived, the messiness of the past environments borne out in the intricate genome.
So, if we consider the Universe as an evolved thing, what are the “replicators” and what forces affect their “survival”? What is the analog of genetic information, and how is it expressed? Are there
counterparts to the “selfish gene” model, group selection, viruses?
I had a wonderful thought about Smolin’s reproduction-by-black-hole idea combined with the Hsu–Zee CMB message, but this blog comment window is too narrow to contain it.
Posted by: Blake Stacey on February 28, 2007 1:31 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Privately, over a beer, I would enjoy speculating about ideas like the universe replicating itself through black holes, or some other far-fetched thing like that, but in the present context I feel a little reluctant to do so, given that already the plain facts lead to the kind of unfruitful discussion we see across the blogosphere.
Insofar as I supported the idea of the data of the standard model being the result of an “evolution process”, I had in mind just ordinary “time evolution”, expressing the mere fact that things do follow their laws of motion over time.
While Darwinian evolution, like any mechanism with strong feedback channels, is a nice example of how a simple law can lead to highly involved output, I would not think that it is a good analogy for
the topic we are discussing.
The shape of the Alps, say, is certainly a result of a natural process, and of “time evolution”, but certainly not of a feedback process involving anything like mutation and selection. Still, nobody
would want any theory to predict, postdict or retrodict the shape of the Matterhorn, characteristic as it may be.
And, please note well: I am not saying that it is clear that the masses and couplings of the standard model are not more like the shape of a crystal than that of a mountain. I am just saying that, at
the moment, we have no good reason to expect to see more crystals than rocks.
Posted by: urs on February 28, 2007 4:16 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
So, it would be a better use of time to look at brane gas cosmology or something like that instead of a kooky model of black holes and baby universes?
Of course, this particular comment thread is not a good place to do either, and the Blagnet as a whole may be a bad place to study the latter.
Posted by: Blake Stacey on February 28, 2007 5:00 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
the Blagnet as a whole may be a bad place to study the latter.
The Blagoblag can be a great place to do such things – if used suitably.
Posted by: urs on February 28, 2007 5:33 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
I agree, the “feedback” aspect of Darwinian evolution, aka “natural selection”, doesn’t necessarily apply to cosmic evolution. But the latter may well be analogous to a very different aspect of
evolutionary thought currently being applied in population genetics: the study of neutral markers.
Mitochondrial DNA, for example, is important not only because it doesn’t recombine, but also because it is presumed not to participate in the selection process, making it a much better index of population history/lineage.
What apparently determines one’s mtDNA is simply random mutations and the inheritance of same, unchanged, over many generations. What apparently determines the mtDNA statistics of a population are things like founder effects, produced by population bottlenecks caused by very specific, often catastrophic, events. There would seem to be no equivalent in cosmic evolution to mutations, but there
certainly would be the possibility for founder effects/bottlenecks produced by catastrophic events during the very early stages of the Big Bang.
Posted by: Victor Grauer on March 4, 2007 3:05 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Having made some anti-string comments above, I want to balance those here. We have no right to expect any simpler explanation of the world than a random point in a space of possibilities. It will be
extremely cool if string theory (only!) gives us a framework for understanding the evolution of the universe; it would be interesting even if we couldn’t test it, but absolutely fantastic if we could
and it was successful. People should keep thinking about the Landscape and the other ideas of string theory; they just shouldn’t say that anything has been solved. (And of course, they should think
about other things too, but I believe that all of us here agree on that point.)
Posted by: Toby Bartels on February 28, 2007 4:52 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Hi Victor,
I’m not John, but have a comment about your sentence
I think those masses are the way they are because the universe evolved (THE universe, not some wacky multiverse) – and just as in biological evolution certain very unlikely things just happened and
then for one reason or another propagated themselves.
What exactly do you mean by ‘universe’, and are you sure you use the word with the same meaning as Lee does? See, we know that in our observable part of the universe the parameters of the standard model can’t have varied very much. So where did the ‘unlikely things’ happen, and where did they ‘propagate themselves’?
PS: I have a feeling that’s what Smolin thinks also, but for some reason he felt he needed to bring other stuff into it and the most interesting and significant and in fact revolutionary aspect got
lost in the mathematical fog.
In case you haven’t, you should read the first book.
Posted by: Bee on February 28, 2007 5:49 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Hi B,
What exactly do you mean by ‘universe’, and are you sure you use the word with the same meaning as Lee does?
No, I’m not. And you’re right I should read his book about that, in fact I’m really excited to learn of its existence and will definitely check it out.
See, we know that in our observable part of the universe the parameters of the standard model can’t have varied very much. So where did the ‘unlikely things’ happen, and where did they ‘propagate themselves’?
I was referring to the Big Bang and what happened right after that. I guess the picture I have in my mind is of some set of contingencies very early on that could have determined those parameters in
ways we have no hope of reconstructing, at this late date. And once they were set, then they could have just remained fixed as the particles (or their precursors) exploded out beyond the point that
any similar types of contingency could affect them. Does this resemble Smolin’s (or anyone else’s) idea? I hope so, because it seems very logical to me.
What’s been on my mind is a question stemming from the “Out of Africa” theory of human evolution. I.e., if we are all descended from the same “Ur” Homo sapiens, or small group of Homo sapiens, then
what is the source of all our genotypic and phenotypic differences? In very crude terms, why do Chinese people look Chinese and Europeans look European, etc.?
In his book “The Real Eve,” Stephen Oppenheimer offers an amazing answer in the form of a contingent event that we actually do happen to know a great deal about: the explosion of Mt. Toba, circa
72,000 years ago. This event, or something like it, happening in the early stages of the Out of Africa migration, could have led to a whole set of different population bottlenecks, and consequent
“founder effects” that might possibly have been the source of such differences.
It’s an interesting idea for physicists to contemplate because there may be a parallel between “Out of Africa” and the Big Bang, from an evolutionary standpoint.
So, soon after the Big Bang, maybe something drastic happened, like Mt. Toba, that could have led to something like a particle or protoparticle “bottleneck,” with subsequent “founder effects”
producing the particle zoo we contemplate today.
Geneticists can actually use the DNA of currently existing humans to make inferences about such bottlenecks.
So I guess the question here is whether physicists have some analogous historical tool(s) that would enable them to look back and make inferences about what happened just after the Big Bang. And I
guess they do, because they have in fact made such inferences. But as with genetics, it may only be possible to take the research so far and no farther.
Posted by: Victor Grauer on February 28, 2007 9:39 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Victor Grauer wrote:
Waddya think, John?
I think I don’t want to talk about this. But, I’ll say one thing.
Suddenly instead of simply evolution itself, which strikes me as being by far the most likely answer, he’s talking about some “multiverse” that as far as I can see is every bit as dubious and
unfalsifiable as any string theory. If I were a physicist I suppose it would look natural to me but I’m not so it doesn’t, it looks just very weird. As weird as all those strings.
Whether a theory ‘looks weird’ is not the main point. What matters more is that it delivers testable predictions, which turn out to be right.
I think string theory doesn’t yet deliver testable predictions. More precisely, I see no great chance of string theory delivering an event of type 1 or type 2 anytime soon. The best it can do is give
us type 3 events. Others may disagree, but to understand Smolin’s book you have to know that he agrees with this.
On the other hand, Smolin claims that his Darwinian cosmology has already delivered an event of type 2! He explains this here:
To summarize, he claims that his theory gives correct estimates on the masses of the proton, neutron, electron and neutrino, together with the weak, strong and electromagnetic coupling constants.
String theory does nothing like this. So, from his viewpoint, he’s ahead of string theory.
Others will disagree heatedly, of course.
I don’t feel like getting into this argument again, so I’ll quit here. I just want you to see where Smolin is coming from.
Posted by: John Baez on March 1, 2007 2:58 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Thanks a lot for that link, John. For me this is probably the “missing link” in my understanding of cosmic evolution. I’m printing it out right now.
Posted by: Victor Grauer on March 1, 2007 4:51 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
OK, I had a quick look at Smolin’s paper and found it very intriguing – though most of it was way over my head, natch. Actually the part about black holes giving rise to new “universes” interested me
VERY much, because it resonates with an old crackpot theory of my own – but I don’t want to get into any of that quite yet. :-)
The main problem I have with his version of cosmological evolution is that it seems to still, despite himself, depend too much on the overall bias of the anthropic principle. In other words, it seems
to me that both the anthropics and Smolin may be looking through the wrong end of the telescope.
Darwinian selection, i.e., “fitness,” is NOT about fitting something to the way things are at present, but about how things come to fit the environment in which they existed at the time the
adaptation occurred. The anthropic principle, it seems to me, is an example of what could be called the “destiny” fallacy, the notion that all things evolved according to the rules of some preordained
system. So when Smolin looks at cosmic evolution from the viewpoint of how many black holes it would be likely to have produced and whether or not that jibes with the number likely to now exist, that
too strikes me as the wrong way to think about evolution. In fact, it seems as though the whole notion of all these many (infinite?) universes is forced by the need to avoid the notion of
predestination that’s already built in to the anthropic principle. To get around the very tendentious idea that the whole purpose of evolution was to produce US, it’s necessary to hypothesize
“billions and billions” of universes which evolved differently. That, to me, is a measure of the inherent weakness of the anthropic principle – including Smolin’s attempt to get around it.
NOT that I’m against extrapolating from present knowledge to a theory about what might have happened in the past, that seems perfectly legitimate. But using our present situation as a guide to the
meaning of evolution itself, THAT strikes me as a potentially serious fallacy.
Posted by: Victor Grauer on March 2, 2007 3:16 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Smolin claims that his Darwinian cosmology has already delivered an event of type 2!
Or another example: The theory called MOND produces lots of events of type 2 and 3.
Still, as it says in TWF 206,
MOND should instantly make any decent physicist cringe.
Apparently, just making lots of predictions is also not quite what we demand of a theory.
Posted by: urs on March 1, 2007 5:14 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
I don’t think anybody will recognize the right theory very quickly since good math and predictions apparently aren’t enough by themselves. There would probably have to be very precise predictions to
get recognized quickly and there’s a very good chance that precise predictions would be computationally difficult even for the right theory. That the Standard Model has good tree level calculations
to me means that string theory should have a GUT and all universes in a multiverse should be Standard Model ones. String theory or something else would be handling the corrections to tree level
“above” the GUT. A Gross-like emergent spacetime also sounds perhaps like something that should be down in the GUT of a good string theory. Maybe supersymmetry has not been seen cause there’s none in
the GUT and maybe if the GUT has its own spacetime then maybe the problems of the 26-dim bosonic string aren’t problems.
Posted by: John G on March 2, 2007 12:11 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
What was it Thoreau said? Something like, “There are a thousand hacking at the branches of evil to one who is striking at the root.” Doesn’t everyone agree with Smolin that the root of this problem
goes back to the beginning of science, to the mystery of how nature is both discrete and continuous?
Thousands are hacking at the branches of the unification problem, while precious few are taking aim at the root. It seems to me that one looking to the root is Abraham Fraenkel and company (see
Foundations of Set Theory, 1973). They write:
Bridging the gap between the domains of discreteness and of continuity, or between arithmetic and geometry, is a central, presumably even the central problem of the foundations of mathematics.
If it’s the central problem of mathematics, then no wonder it’s the central problem of physics. First, numbers were thought to be “all,” but then the surds came along. Then, as Fredkin points out,
matter was thought to be continuous, before discrete atoms were discovered. Electricity was thought to be a continuous fluid, until discrete electrons were found. Light was thought to be continuous
waves, but then discrete photons popped out. Finally, angular momentum had to be quantized in units of quantum spin.
Now the search is on for discrete gravitons, but if gravity is a consequence of spacetime, and quantum gravity means quantum spacetime, then shouldn’t somebody be looking at number theory first,
where the whole continuous/discrete thing starts?
Maybe, the Cantor-Dedekind postulate that the geometric linear continuum corresponds to the arithmetized real continuum, is incorrect. Maybe, the way the physical continuum is modeled with abstract
discrete entities is incorrect. If so, it follows that what physicists need is a better number theory, not an ad hoc invention like vibrating strings. Volovich has some compelling ideas along this
line (see V.S. Vladimirov, I.V. Volovich, E.I. Zelenov, p-Adic Analysis and Mathematical Physics (World Scientific, 1994)).
Posted by: Doug on February 28, 2007 8:56 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
OK, great, thank you, Doug, because this is another very important aspect of the issue raised by Smolin that I wanted to address. But probably not in the manner you were anticipating – since I see no
hope whatsoever for reconciling continuity and (radical) discontinuity. In fact I am going to say something that I really hope doesn’t offend everyone here, because that is NOT my intention: the
fundamental issues pointed to by you – and by Smolin as well – cannot be addressed by physics and, in all likelihood, physicists are NOT prepared to deal with such issues. Not because physicists are
inherently incapable of that, but because it is something that is simply not part of either their training or their experience.
What is really at stake here is NOT the question of what space and time really really are in the most fundamental sense, but the even more fundamental question of how it is possible to represent
them. In other words, we are dealing with not only epistemology but semiotics, the “science” of representation.
The only physicist I know of who really deeply understood this was Bohr, whose notion of “complementarity” is, at base, not a part of physics or even science, but is an essentially semiotic principle
through and through.
That’s all I have time for now, but if anyone wants to pursue this line of thought, let me know and I’ll continue.
Posted by: Victor Grauer on February 28, 2007 9:56 PM | Permalink | Reply to this
Re: string theory; This Week’s Finds in Mathematical Physics (Week 246)
The appealing thing for string theory for me is the way the story is told for the very very general audience:
1. Replace point particles by more general geometric objects,
2. These geometric objects are strings, open or closed and later even more complicated gadgets.
3. To make the theory work without trouble, a new form of symmetry, supersymmetry, is required.
4. For the theory to work, six additional dimensions are required. Our universe has 10 (or even 11) dimensions.
5. Various remarkable symmetries and connections between different string theories are gradually discovered.
String theorists have the feeling of a big puzzle being slowly solved, with the partially unfolded picture being beautiful: their excitement and devotion are moving. It appears that string theorists agree that the puzzle is bigger than expected and the progress is slower than expected.
Those who do not think string theory will prevail may still think that some pieces will be useful for physics or at least for mathematics. For example, particles which are not point particles; extra dimensions.
Two questions I am puzzled about. The first is: Can strings be fractals? In all the blogs/popular accounts the pictures describe strings as smooth and nice. Is this part of the theory or only part of the pictures?
The second is: Is there such a nice short description of loop quantum gravity that can appeal to a very, very large audience? OK, maybe it is a mistake to worry at all about the very general audience and not the very savvy experts; maybe the popular accounts of string theory are misleading; and maybe the true final physics theory cannot be explained at all to ordinary people. But it is still a nice feature of string theory that it tells a very, very nice story.
Posted by: Gina on March 2, 2007 7:38 AM | Permalink | Reply to this
Re:This Week’s Finds in Mathematical Physics (Week 246)
1. Yes, strings can be fractals: in fact, smooth strings are the exception. A bit more technically: in the path integral over string worldsheets, smooth worldsheets form a set of measure zero.
2. A good popularization of loop quantum gravity can be found (along with other things) in Lee Smolin’s previous book, Three Roads to Quantum Gravity. There is certainly a ‘nice story’ behind loop
quantum gravity, if that’s what one wants — and Smolin tells it pretty well.
Posted by: John Baez on March 2, 2007 5:05 PM | Permalink | Reply to this
Re:This Week’s Finds in Mathematical Physics (Week 246)
Posted by: Blake Stacey on March 2, 2007 9:15 PM | Permalink | Reply to this
Re:This Week’s Finds in Mathematical Physics (Week 246)
can we have a D2.7-brane?
Tell me what a 3.7-category is and I’ll hand you a 2.7-brane.
More seriously, somebody should remark that John’s comment # just expressed (as Blake certainly knows, but others might not) that we consider physical fields, like the embedding field on the string,
usually as elements in some $L^2$-space, rather than in some space of continuous or smooth functions. In particular, fields are often realized in terms of arbitrary Fourier series.
Posted by: urs on March 2, 2007 9:39 PM | Permalink | Reply to this
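(To make the statement about function spaces concrete, here is one standard presentation, with conventions elided: a closed-string configuration in a flat background is a map from the circle written as a Fourier series $X^\mu(\sigma) = x^\mu_0 + \sum_{k \geq 1}(a^\mu_k \cos k\sigma + b^\mu_k \sin k\sigma)$, and membership in $L^2$ asks only that $\sum_k (|a^\mu_k|^2 + |b^\mu_k|^2) < \infty$. Nothing in that condition forces the series to converge to a smooth, or even continuous, function of $\sigma$.)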
Re:This Week’s Finds in Mathematical Physics (Week 246)
Is it really fair to refer to elements of some L^2 space as ‘fractals’? Even if they’re not smooth, there is a specific integer that went into the specification of the space.
Posted by: Toby Bartels on March 2, 2007 10:03 PM | Permalink | Reply to this
Re:This Week’s Finds in Mathematical Physics (Week 246)
I don’t know for sure about these fractals. I am just saying that a typical configuration of a string in a background that looks like $\mathbb{R}^n$ can be thought of as an $n$-tuple of real Fourier
series on the circle.
This is not supposed to be something special to strings. Similar comments apply to any old QFT, say $\phi^4$-theory, or whatever.
Posted by: urs on March 2, 2007 10:29 PM | Permalink | Reply to this
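A quick numerical illustration of the point (a toy ensemble, not the actual worldsheet measure: Gaussian Fourier coefficients with variance $1/k^2$, which are square-summable almost surely): the $L^2$ norm of the sampled configuration converges as modes are added, while the $L^2$ norm of its term-by-term derivative diverges, so the typical configuration is not even $C^1$.

import numpy as np

rng = np.random.default_rng(0)

for n_modes in [10, 100, 1_000, 10_000]:
    k = np.arange(1, n_modes + 1)
    a = rng.normal(0.0, 1.0 / k)   # cosine coefficients, Var(a_k) = 1/k^2
    b = rng.normal(0.0, 1.0 / k)   # sine coefficients
    l2_norm = np.sqrt(np.sum(a**2 + b**2))            # converges as n_modes grows
    h1_norm = np.sqrt(np.sum(k**2 * (a**2 + b**2)))   # diverges: the path is not C^1
    print(f"modes={n_modes:6d}  ||X||_L2 = {l2_norm:.3f}  ||X'||_L2 = {h1_norm:.3f}")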
Re:This Week’s Finds in Mathematical Physics (Week 246)
Thanks, guys! Imagine that physicists had the idea to replace point-particles with Cantor-like sets rather than with strings. Maybe we would have seen a little less geometry and algebra and a bit
more analysis in the HEP dish.
Posted by: gina on March 3, 2007 9:01 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Victor writes:
What is really at stake here is NOT the question of what space and time really really are in the most fundamental sense, but the even more fundamental question of how it is possible to represent
them. In other words, we are dealing with not only epistemology but semiotics, the “science” of representation.
Man, I hope this is not considered off-topic. I don’t want to get deleted again. I just want to stress that while Smolin’s thesis, the trouble with physics, is ultimately our inability to unify the
discrete and continuous theories, he asserts that there is more than this reflected in the string theory controversy. It’s as if the string controversy is the centerpiece of the table, focusing our
attention on the state of physics as a whole, not just the latest innovation, which might, or might not, have outlived its usefulness.
If string theory is justified on mathematical grounds, it’s not just because it is “beautiful mathematics,” but because it’s beautiful mathematics that, to some extent, unifies the discrete and
continuous theories. The fact that it has no contact with experiment and can’t predict anything right now, is overridden, in the minds of many, by the apparent achievement of a consistent unification
of the discrete and continuous, in a very compelling manner.
The details of how the development of string theory has led to the current prospect of “the end of a science” and consideration of the serious question “What comes next?” are not so important at this
point. What is important is gaining a clear understanding of the method of thinking that led us to this point, and without a doubt that thinking is best characterized as the history of developments
in the science of mathematics.
String theory must live in a minimum of ten dimensions, but what bothers Glashow, as quoted by Smolin, should bother all of us:
…Worst of all, superstring theory does not follow as a logical consequence of some appealing set of hypotheses about nature. Why, you may ask, do the string theorists insist that space is
nine-dimensional? Simply because string theory doesn’t make sense in any other kind of space.
In other words, string theory is not inductive science, it’s inventive science, and the comments of Einstein, regarding the significance of epistemology in science, rise to the top of our thoughts,
like the cream in unhomogenized milk.
However, and this question must be asked, if string theory is basically an exercise in mathematics, and it is inventive science, then doesn’t that imply that Glashow’s criticism applies to the
science of mathematics represented by string theory?
I believe the conclusion that it does is just inescapable. Writing about this in a historical context, Hestenes sees the development of mathematics as the centuries-long effort to unify discrete
numbers with continuous physical magnitudes, which Euclid deliberately kept separate, proving theorems first with line segments and then with numbers.
Clearly, though, this history is a history of inventive science, not inductive science. String theory mathematics is simply a continuation of the effort to unify discrete numbers with continuous magnitudes.
Briefly, the three properties of physical magnitudes versus natural numbers are:
1) Continuous vs. discrete quantity
2) Two directions vs. one direction
3) Limited vs. unlimited dimensions
In the development of our inventive mathematical science over centuries, the ad hoc invention of the real numbers addresses number 1; the ad hoc invention of the imaginary numbers addresses number 2;
and the ad hoc invention of “compactified dimensions” addresses number 3.
Nevertheless, in the spirit of Glashow’s complaint, wouldn’t we rather have something that “follows as a logical consequence of some appealing set of hypotheses about nature?” But is this even
considered possible in mathematics anymore, or is formalism the only “game in town?”
Posted by: Doug on March 2, 2007 5:55 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
IMO, the real problem with the anthropic principle is that it is in itself a black hole, leading inexorably to the singularity known as “solipsism.” Once you set foot on that slippery slope you are
destined to fall hopelessly into that desolate place from which all possibility of communication with the outside world is lost.
Posted by: Victor Grauer on March 4, 2007 6:06 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Victor Grauer described solipsism as:
that desolate place from which all possibility of communication with the outside world is lost.
Why do you say this? Have you ever tried to communicate with a solipsist?
Posted by: Toby Bartels on March 5, 2007 2:52 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
I tried once, but all I heard was an echo.
Frankly, solipsism is such a convenient perspective that I’m surprised more people aren’t.
Posted by: John Armstrong on March 5, 2007 3:34 AM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Have I ever tried to communicate with a solipsist? Heh.
Well, for one thing, a true solipsist would have no reason to communicate with me – or anyone else. That would be pointless, no? Since from that perspective no one else actually exists, only the self.
And yes, it IS a “convenient perspective,” in fact it may well be irrefutable. But there is no way to attempt to communicate such a conviction without immediately contradicting oneself. So while it may be a perfectly logical position, it is also indefensible.
Now I have a question for all you physicists out there. If, in the context of the measurement problem, the wave function collapses when a measurement takes place, and a measurement can be meaningful only
when it registers in someone’s brain, and the question then is “whose brain,” is it possible to see this situation also as a black hole leading to the same singularity?
Posted by: Victor Grauer on March 5, 2007 3:34 PM | Permalink | Reply to this
Re: This Week’s Finds in Mathematical Physics (Week 246)
Victor Grauer wrote in part:
a true solipsist would have no reason to communicate with me
This is like saying that a true mathematical formalist (or other sort of mathematical fictionalist or anti-Platonist) would have no reason to think about mathematics. I communicate with you because
it is interesting; what other reason should there be?
But there is no way to attempt to communicate such a conviction without immediately contradicting oneself.
I don’t understand why you say this. Have you never mused about philosophy within your own mind? Why shouldn’t a solipsist do the same?
Now I have a question for all you physicists out there.
A very good question! It was thinking about this very question that led me on the path to becoming a solipsist.
Posted by: Toby Bartels on March 5, 2007 10:26 PM | Permalink | Reply to this | {"url":"http://golem.ph.utexas.edu/category/2007/02/this_weeks_finds_in_mathematic_7.html","timestamp":"2014-04-19T09:25:43Z","content_type":null,"content_length":"234022","record_id":"<urn:uuid:9ba89bf7-166e-4e35-8555-2d9657f564c5>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00580-ip-10-147-4-33.ec2.internal.warc.gz"} |
Coalgebraic modal logic: Soundness, completeness and decidability of local consequence
Results 1 - 10 of 44
- IN LICS’06 , 2006
"... For lack of general algorithmic methods that apply to wide classes of logics, establishing a complexity bound for a given modal logic is often a laborious task. The present work is a step
towards a general theory of the complexity of modal logics. Our main result is that all rank-1 logics enjoy a sh ..."
Cited by 26 (15 self)
Add to MetaCart
For lack of general algorithmic methods that apply to wide classes of logics, establishing a complexity bound for a given modal logic is often a laborious task. The present work is a step towards a
general theory of the complexity of modal logics. Our main result is that all rank-1 logics enjoy a shallow model property and thus are, under mild assumptions on the format of their axiomatisation,
in PSPACE. This leads to a unified derivation of tight PSPACE-bounds for a number of logics including K, KD, coalition logic, graded modal logic, majority logic, and probabilistic modal logic. Our
generic algorithm moreover finds tableau proofs that witness pleasant proof-theoretic properties including a weak subformula property. This generality is made possible by a coalgebraic semantics,
which conveniently abstracts from the details of a given model class and thus allows covering a broad range of logics in a uniform way.
"... In recent years, a tight connection has emerged between modal logic on the one hand and coalgebras, understood as generic transition systems, on the other hand. Here, we prove that (finitary)
coalgebraic modal logic has the finite model property. This fact not only reproves known completeness result ..."
Cited by 24 (16 self)
Add to MetaCart
In recent years, a tight connection has emerged between modal logic on the one hand and coalgebras, understood as generic transition systems, on the other hand. Here, we prove that (finitary)
coalgebraic modal logic has the finite model property. This fact not only reproves known completeness results for coalgebraic modal logic, which we push further by establishing that every coalgebraic
modal logic admits a complete axiomatization of rank 1; it also enables us to establish a generic decidability result and a first complexity bound. Examples covered by these general results include,
besides standard Hennessy-Milner logic, graded modal logic and probabilistic modal logic.
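For readers meeting the "coalgebras as generic transition systems" slogan for the first time, a concrete toy may help (an illustrative sketch, not code from any of these papers): a coalgebra for the powerset functor is a map sending each state to its set of successors, i.e. exactly a Kripke frame, and the semantics of the box modality reads straight off it. Replacing sets of successors by multisets or probability distributions gives graded or probabilistic modal logic the same way.

# A coalgebra for the powerset functor P: a map c : X -> P(X).
# That is exactly a Kripke frame, and box semantics reads off it.
frame = {
    "s0": {"s1", "s2"},
    "s1": {"s1"},
    "s2": set(),
}
valuation = {"p": {"s1"}}          # states where the atom p holds

def holds(state, formula):
    """Evaluate a formula given as nested tuples, e.g. ("box", ("atom", "p"))."""
    op = formula[0]
    if op == "atom":
        return state in valuation[formula[1]]
    if op == "not":
        return not holds(state, formula[1])
    if op == "and":
        return holds(state, formula[1]) and holds(state, formula[2])
    if op == "box":                 # true iff the subformula holds at every successor
        return all(holds(t, formula[1]) for t in frame[state])
    raise ValueError(op)

print(holds("s0", ("box", ("atom", "p"))))   # False: successor s2 fails p
print(holds("s2", ("box", ("atom", "p"))))   # True: vacuously, s2 has no successors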
- Concurrency Theory, CONCUR 04, volume 3170 of Lect. Notes Comput. Sci , 2004
"... Abstract. We present a modular approach to defining logics for a wide variety of state-based systems. We use coalgebras to model the behaviour of systems, and modal logics to specify behavioural
properties of systems. We show that the syntax, semantics and proof systems associated to such logics can ..."
Cited by 22 (7 self)
Add to MetaCart
Abstract. We present a modular approach to defining logics for a wide variety of state-based systems. We use coalgebras to model the behaviour of systems, and modal logics to specify behavioural
properties of systems. We show that the syntax, semantics and proof systems associated to such logics can all be derived in a modular way. Moreover, we show that the logics thus obtained inherit
soundness, completeness and expressiveness properties from their building blocks. We apply these techniques to derive sound, complete and expressive logics for a wide variety of probabilistic
systems. 1
, 2002
"... This paper studies coalgebras from the perspective of finite observations. We introduce the notion of finite step equivalence and a corresponding category with finite step equivalence-preserving
morphisms. This category always has a final object, which generalises the canonical model construction fr ..."
Cited by 14 (8 self)
Add to MetaCart
This paper studies coalgebras from the perspective of finite observations. We introduce the notion of finite step equivalence and a corresponding category with finite step equivalence-preserving
morphisms. This category always has a final object, which generalises the canonical model construction from Kripke models to coalgebras. We then turn to logics whose formulae are invariant under
finite step equivalence, which we call logics of rank . For these logics, we use topological methods and give a characterisation of compact logics and definable classes of models.
- IN STACS 2007, 24TH ANNUAL SYMPOSIUM ON THEORETICAL ASPECTS OF COMPUTER SCIENCE, PROCEEDINGS , 2007
"... Coalgebras provide a unifying semantic framework for a wide variety of modal logics. It has previously been shown that the class of coalgebras for an endofunctor can always be axiomatised in
rank 1. Here we establish the converse, i.e. every rank 1 modal logic has a sound and strongly complete coal ..."
Cited by 14 (11 self)
Add to MetaCart
Coalgebras provide a unifying semantic framework for a wide variety of modal logics. It has previously been shown that the class of coalgebras for an endofunctor can always be axiomatised in rank 1.
Here we establish the converse, i.e. every rank 1 modal logic has a sound and strongly complete coalgebraic semantics. As a consequence, recent results on coalgebraic modal logic, in particular
generic decision procedures and upper complexity bounds, become applicable to arbitrary rank 1 modal logics, without regard to their semantic status; we thus obtain purely syntactic versions of these
results. As an extended example, we apply our framework to recently defined deontic logics.
- In Algebra and Coalgebra in Computer Science, volume 3629 of LNCS , 2005
"... Abstract. This paper studies finitary modal logics as specification languages for Set-coalgebras (coalgebras on the category of sets) using Stone duality. It is wellknown that Set-coalgebras are
not semantically adequate for finitary modal logics in the sense that bisimilarity does not in general co ..."
Cited by 13 (5 self)
Add to MetaCart
Abstract. This paper studies finitary modal logics as specification languages for Set-coalgebras (coalgebras on the category of sets) using Stone duality. It is wellknown that Set-coalgebras are not
semantically adequate for finitary modal logics in the sense that bisimilarity does not in general coincide with logical equivalence.
, 2003
"... Monotonic modal logics form a generalization of normal modal logics... ..."
- CMCS , 2008
"... We study sequent calculi for propositional modal logics, interpreted over coalgebras, with admissibility of cut being the main result. As applications we present a new proof of the (already
known) interpolation property for coalition logic and establish the interpolation property for the conditional ..."
Cited by 8 (7 self)
Add to MetaCart
We study sequent calculi for propositional modal logics, interpreted over coalgebras, with admissibility of cut being the main result. As applications we present a new proof of the (already known)
interpolation property for coalition logic and establish the interpolation property for the conditional logics CK and CK Id.
- IN FOUNDATIONS OF SOFTWARE SCIENCE AND COMPUTATION STRUCTURES, FOSSACS 09, VOLUME 5504 OF LNCS , 2009
"... We introduce a generic framework for hybrid logics, i.e. modal logics additionally featuring nominals and satisfaction operators, thus providing the necessary facilities for reasoning about
individual states in a model. This framework, coalgebraic hybrid logic, works at the same level of generality ..."
Cited by 8 (6 self)
Add to MetaCart
We introduce a generic framework for hybrid logics, i.e. modal logics additionally featuring nominals and satisfaction operators, thus providing the necessary facilities for reasoning about
individual states in a model. This framework, coalgebraic hybrid logic, works at the same level of generality as coalgebraic modal logic, and in particular subsumes, besides normal hybrid logics such
as hybrid K, a wide variety of logics with non-normal modal operators such as probabilistic, graded, or coalitional modalities and non-monotonic conditionals. We prove a generic finite model property
and an ensuing weak completeness result, and we give a semantic criterion for decidability in PSPACE. Moreover, we present a fully internalised PSPACE tableau calculus. These generic results are
easily instantiated to particular hybrid logics and thus yield a wide range of new results, including e.g. decidability in PSPACE of probabilistic and graded hybrid logics.
, 2008
"... Coalgebras provide a uniform framework for the semantics of a large class of (mostly non-normal) modal logics, including e.g. monotone modal logic, probabilistic and graded modal logic, and
coalition logic, as well as the usual Kripke semantics of modal logic. In earlier work, the finite model prop ..."
Cited by 7 (5 self)
Add to MetaCart
Coalgebras provide a uniform framework for the semantics of a large class of (mostly non-normal) modal logics, including e.g. monotone modal logic, probabilistic and graded modal logic, and coalition
logic, as well as the usual Kripke semantics of modal logic. In earlier work, the finite model property for coalgebraic logics has been established w.r.t. the class of all structures appropriate for
a given logic at hand; the corresponding modal logics are characterised by being axiomatised in rank 1, i.e. without nested modalities. Here, we extend the range of coalgebraic techniques to cover
logics that impose global properties on their models, formulated as frame conditions with possibly nested modalities on the logical side (in generalisation of frame conditions such as symmetry or
transitivity in the context of Kripke frames). We show that the finite model property for such logics follows from the finite algebra property of the associated class of complex algebras, and then
investigate sufficient conditions for the finite algebra property to hold. Example applications include extensions of coalition logic and logics of uncertainty and knowledge. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=252230","timestamp":"2014-04-20T17:55:49Z","content_type":null,"content_length":"36390","record_id":"<urn:uuid:a02a305f-2ac1-4eed-b4c6-e875baf7f3d2>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00615-ip-10-147-4-33.ec2.internal.warc.gz"} |
Are there Maass forms where the expected Galois representation is $\ell$-adic?
Recall that by theorems of Deligne and Deligne--Serre, there is the following dichotomy:
1. Modular forms on the upper half plane of level $N$ and weight $k\geq 2$ correspond to representations $\rho:\operatorname{Gal}(\bar{\mathbb Q}/\mathbb Q)\to\operatorname{GL}(2,F_\lambda)$ for
some number field $F$ and prime $\lambda$ over $\ell$.
2. Modular forms on the upper half plane of level $N$ and weight $k=1$ correspond to representations $\rho:\operatorname{Gal}(\bar{\mathbb Q}/\mathbb Q)\to\operatorname{GL}(2,\mathbb C)$ satisfying
$\det\rho(\sigma)=-1$ ($\sigma$ is complex conjugation).
Now I hear that Maass forms of eigenvalue $\frac 14$ are conjectured to correspond with representations $\rho:\operatorname{Gal}(\bar{\mathbb Q}/\mathbb Q)\to\operatorname{GL}(2,\mathbb C)$
satisfying $\det\rho(\sigma)=1$ ($\sigma$ is complex conjugation). Is this still true (that is, conjectured) for Maass forms of higher weight? Or do they "turn $\ell$-adic" in higher weight?
3 Answers
Here's some piece of the bigger picture. Maass forms and holomorphic modular forms are both automorphic representations for $GL(2)$ over the rationals. An automorphic representation is a
typically huge representation $\pi$ of an adele group (in this case $GL(2,\mathbf{A})$, with $\mathbf{A}$ the adeles of $\mathbf{Q}$). Because the adeles is the product of the finite
adeles and the infinite adeles, this representation $\pi$ is a product of a finite part $\pi_f$ and an infinite part $\pi_\infty$. The infinite part is a representation of $GL(2,\mathbf
{R})$ (loosely speaking -- there are technicalities but they would only cloud the water here).
The representation theory of $GL(2,\mathbf{R})$, in this context, is completely understood. The representations basically fall into four categories, which I'll name (up to twist):
1) finite-dimensional representations (these never show up in the representations attached to cusp forms).
2) Discrete series representations $D_k$, $k\geq2$ (these are the modular forms of weight 2 or more).
3) The limit of discrete series representation $D_1$ (these are the weight 1 forms).
4) The principal series representations (these are the Maass forms).
Now what does Langlands conjecture? He makes a conjecture which does not care which case you're in! He conjectures the existence of a "Galois representation" attached to $\pi$, and this
is a "Galois representation" in a very loose sense: it is a continuous 2-dimensional complex representation of the conjectural "Langlands group", attached to $\pi$. Note that there should
be a map from the Langlands group to the Galois group, and in the case of Maass forms and weight 1 forms Langlands' representation should factor through the Galois group. For modular
forms of weight 2 or more Langlands' conjecture has not been proved and in some sense it is almost not meaningful to try to prove it because no-one can define the group. In particular
Deligne did not prove Langlands' conjecture, he proved something else.
So Clozel came along in 1990 and tried to put Deligne's work into some context and he came up with the following: he formulated the notion of what it meant for $\pi_\infty$ to be
algebraic (in fact there are two notions of algebraic, which differ by a twist in this context, so let me write "$L$-algebraic" to make it clear which one I'm talking about) and
conjectured that if $\pi$ were $L$-algebraic then there should be an $\ell$-adic Galois representation $\rho_\pi$ attached to $\pi$. Maass forms with eigenvalue $1/4$, and holomorphic
eigenforms, are $L$-algebraic, and the $\ell$-adic Galois representation attached to the Maass forms/weight 1 forms is just the one you obtain by fixing an isomorphism $\mathbf{C}=\overline{\mathbf{Q}}_\ell$. I should say that Clozel worked with $GL(n)$ not $GL(2)$ and also worked over an arbitrary number field base.
Whether or not the image of $\rho_\pi$ is finite is something which is conjecturally determined by $\pi_\infty$: you can read it off from the infinitesimal character of $\pi_\infty$ and
also from the local Weil group representation attached to $\pi_\infty$ by the local Langlands conjectures, which are all theorems (of Langlands) for real reductive groups.
Put within this context your question becomes purely local: one has to figure out what Clozel's recipe gives in each case to get a handle on what your question is asking. You're asking
about principal series representations. If you work out Clozel's recipe in these cases you find that if $\lambda\not=1/4$ then $\pi_\infty$ is not $L$-algebraic (and so we don't even
expect a representation of the Galois group, we just expect a representation of the conjectural Langlands group), and if $\lambda=1/4$ then, up to twist, we expect the image to be always
finite, because, well, that's what the calculation gives us.
I learnt this by just doing all these calculations myself. I wrote them up in brief notes at http://www2.imperial.ac.uk/~buzzard/maths/research/notes/automorphic_forms_for_gl2_over_Q.pdf
and http://www2.imperial.ac.uk/~buzzard/maths/research/notes/local_langlands_for_gl2R.pdf (both available from http://www2.imperial.ac.uk/~buzzard/maths/research/notes/index.html ).
So why is there this asymmetry? Well actually this asymmetry is not surprising because it is predicted on the Galois side as well. If you look at an irreducible mod $p$ ordinary Galois
representation which is odd then its universal ordinary deformation is often known to be isomorphic to a Hecke algebra of the type defined by Hida (so in particular we get lots of
interesting $\ell$-adic Galois representations with infinite image). In particular its Krull dimension should be 2 (and this was already known to Mazur in the 80s). But the calculations
for these Krull dimensions involve local terms, and the local term at infinity depends on whether the representation is odd or even. If you consider deformations of an even Galois
representation then the calculations come out differently and the Krull dimension comes out one smaller. In particular one only expects to see finite image lifts, plus twists of such
lifts by powers of the cyclotomic character.
So in summary you see differences on both sides -- the automorphic side and the Galois side -- and they match up perfectly! You don't expect $\ell$-adic representations to show up in the
Maass form story and yet things are completely consistent anyway.
Toby Gee and I recently tried to figure out the complete conjectural picture about how automorphic representations and Galois representations were related. Our conclusions are at http://
www2.imperial.ac.uk/~buzzard/maths/research/papers/bgagsvn.pdf . But for $GL(n)$ this was all due to Clozel over 20 years ago (who would have known all those calculations that I linked to
earler; these are all standard).
At work today I realised that in some sense I could also offer a more low-level answer: here it is. The basic yoga for functional equations is well-understood. So here's the experiment
you might want to do. Think of the functional equation for a Maass form. If you understand what's going on then I guess it's clear that the factor at infinity is not going to be the
factor at infinity for the functional equation for an $\ell$-adic Galois representation, unless the eigenvalue is $1/4$, and in this case it's going to be the functional equation for a
Galois representations whose H-T weights are... – Kevin Buzzard Jun 6 '12 at 17:57
...the same! In particular there's no room for some "rich" theory like in the holomorphic case. Oh -- by "functional equation for a Maass form" I mean "functional equation for the
$L$-function of a Maass form" and similarly for Galois reps. – Kevin Buzzard Jun 6 '12 at 17:58
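For anyone who wants to carry out the experiment Kevin describes, here is its skeleton (standard formulas, quoted without proof; normalisations vary by a shift). Write $\Gamma_{\mathbb R}(s)=\pi^{-s/2}\Gamma(s/2)$. An even Maass cusp form with Laplace eigenvalue $\lambda = 1/4 + r^2$ has completed $L$-function
$$\Lambda(s)=\Gamma_{\mathbb R}(s+ir)\,\Gamma_{\mathbb R}(s-ir)\,L(s),$$
while the completed $L$-function of a 2-dimensional Galois representation has archimedean factors of the form $\Gamma_{\mathbb R}(s+n_1)\Gamma_{\mathbb R}(s+n_2)$ with the $n_i$ integers read off from the Hodge–Tate weights. Matching the two forces $ir$ to be an integer, so for real $r$ we need $r=0$, i.e. $\lambda = 1/4$, and the two shifts then coincide up to twist: exactly the "no room for a rich theory" conclusion above.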
@Kevin: Is your "Algebraic automorphic representation" a generalization of Weil's "characters of type $(A_0)$" ? If so, could you explain the above picture ($\mathbb C$-adic vs
$l$-adic) in this $GL_1$ case ? Thank you – user4245 Jun 16 '12 at 8:09
In the abelian case the two notions of algebraic (C and L) that Toby Gee and I introduce coincide. It's only in the non-abelian case that one sees two "normalisations" occurring in the
literature, which Gee and I explicitly disentangle. – Kevin Buzzard Jun 18 '12 at 22:06
To point 3): I am not sure whether one should distinguish limit of discrete series from principal series here, since they are not a subquotient in the case of GL(2), only for SL(2). – plusepsilon.de Feb 19 '13 at 9:39
@Kevin: The explanation of the asymmetry from the Galois side does not seem to work, since by recent work of Calegari, "Even Galois Representations and the Fontaine–Mazur Conjecture", he has shown that there exist universal deformations of even Galois representations, with universal deformation rings of large dimension, such that none of the corresponding Galois representations are geometric.
You must be mistaken. What result of Calegari are you exactly referring to, and how does it contradict Kevin's argument? – Joël Apr 4 '13 at 17:33
I am referring to Corollary 5.2 of the work by Calegari, "Even Galois Representations and the Fontaine–Mazur Conjecture II". As you say, I am probably mistaken and also not understanding Kevin's argument properly. He says that for Maass wave forms with eigenvalue 1/4 the image of the corresponding Galois representation should, up to a twist, be finite, and if the eigenvalue is not 1/4 then the correspondence should be with a representation of the conjectural Langlands group instead of an even $\ell$-adic Galois representation of infinite image, while modular forms always correspond to $\ell$-adic Galois representations. This asymmetry between odd and even, he says, can be seen on the Galois side in that deforming odd Galois representations one obtains interesting $\ell$-adic representations of infinite image, while deforming even Galois representations one obtains representations with finite image up to a twist. But isn't Calegari's Corollary 5.2 saying that, deforming even Galois representations, one can also obtain even $\ell$-adic representations which are not twist-finite?
Kevin is (at least implicitly) only discussing geometric deformations. – user1125 Apr 5 '13 at 15:31
Dear Marcelo, I think you have misread Calegari's cor. 5.2. Firstly, he works in the context of a totally real field, not just $\mathbb Q$. Secondly, he looks at deformations of a $\overline{\rho}$ which is odd at some places and even at other places (but even at least one place). The dimension of the deformation ring is bounded below by (and presumably equals) $1 + 2r$ (plus
the Leopoldt defect, which conjecturally equals $0$), where $r$ is the number of odd places. In particular, if $r = 0$ (which is necessarily the case when we are working with $G_{\mathbb
Q}$, since $\mathbb Q$ has ... – Emerton Apr 6 '13 at 2:27
... only one real place), then the dimension we get is one, which is just given by twisting. So indeed the def. ring of an even $\overline{\rho}: G_{\mathbb Q} \to GL_2(\overline{\mathbb
F}_p)$ is expected to equal $1$, and typically contains no geometric points at all (e.g. because maybe $\overline{\rho}$ has image too big to lift to a finite subgroup of $GL_2(\mathbb
C)$). The only way to get big dimension is to take higher degree totally real fields, and then to take $r$ larger. But again, the dimension Calegari computes is completely consistent with
Kevin's discussion above; ... – Emerton Apr 6 '13 at 2:29
... it is precisely the dimension given by the global Euler char. formula (just as in Mazur's original article on Galois defs.). Regards, – Emerton Apr 6 '13 at 2:30
P.S. In the above, $r$ is the number of real places where $\overline{\rho}$ is odd. – Emerton Apr 6 '13 at 2:31
| {"url":"http://mathoverflow.net/questions/98915/are-there-maass-forms-where-the-expected-galois-representation-is-ell-adic/98930","timestamp":"2014-04-17T15:37:22Z","content_type":null,"content_length":"78937","record_id":"<urn:uuid:fd2d2dea-6021-4070-81ac-08c8756e3c2e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
CS: MMDS papers, a CS page, A2I Converters, CS in Space.
I just stumbled upon the MMDS 2008 Workshop on Algorithms for Modern Massive Data Sets that took place at Stanford University on June 25–28, 2008. All the abstracts are located here. All the presentations are here. Among the ones that caught my interest:
Piotr Indyk
has a presentation entitled
Sparse recovery using sparse random matrices/ Or: Fast and Effective Linear Compression
where he presents a joint work with:
Radu Berinde
Anna Gilbert
Piotr Indyk
Howard Karloff
Milan Ruzic
Martin Strauss
that was partially shown in
Anna Gilbert
at the
L1 meeting at Texas A&M
). The interesting new part of this presentation is the recent result with
Milan Ruzic
where it is shown that if a measurement matrix follows RIP-1, then OMP converges. The previous results was on LP-decoding only working. The abstract reads:
Over the recent years, a new *linear* method for compressing high-dimensional data (e.g., images) has been discovered. For any high-dimensional vector x, its *sketch* is equal to Ax, where A is
an m x n matrix (possibly chosen at random). Although typically the sketch length m is much smaller than the number of dimensions n, the sketch contains enough information to recover an
*approximation* to x. At the same time, the linearity of the sketching method is very convenient for many applications, such as data stream computing and compressed sensing.
The major sketching approaches can be classified as either combinatorial (using sparse sketching matrices) or geometric (using dense sketching matrices). They achieve different trade-offs,
notably between the compression rate and the running time. Thus, it is desirable to understand the connections between them, with the ultimate goal of obtaining the "best of both worlds"
In this talk we show that, in a sense, the combinatorial and geometric approaches are based on different manifestations of the same phenomenon. This connection will enable us to obtain several
novel algorithms and constructions, which inherit advantages of sparse matrices, such as lower sketching and recovery times.
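A small end-to-end experiment gives the flavor of the sparse-matrix approach (an illustration of ours, not the authors' code; the matrix below has a few ones per column, in the combinatorial style, and recovery is plain orthogonal matching pursuit):

import numpy as np

rng = np.random.default_rng(1)
n, m, d, k = 400, 100, 8, 5          # signal dim, sketch dim, ones per column, sparsity

# Sparse binary sketching matrix: d ones per column (combinatorial style),
# so every column has the same norm sqrt(d) and correlations are comparable.
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ x

# Orthogonal matching pursuit: greedily pick columns, re-fit by least squares.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("max recovery error:", np.max(np.abs(x_hat - x)))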
Anna Gilbert had a presentation entitled Combinatorial Group Testing in Signal Recovery. The abstract reads:
Traditionally, group testing is a design problem. The goal is to construct an optimally efficient set of tests of items such that the test results contains enough information to determine a small
subset of items of interest. It has its roots in the statistics community and was originally designed for the Selective Service to find and to remove men with syphilis from the draft. It appears
in many forms, including coin-weighing problems, experimental designs, and public health. We are interested in both the design of tests and the design of an efficient algorithm that works with
the tests to determine the group of interest. Many of the same techniques that are useful for designing tests are also used to solve algorithmic problems in analyzing and in recovering
statistical quantities from streaming data. I will discuss some of these methods and briefly discuss several recent applications in high throughput drug screening.
Joint work with Radu Berinde, Piotr Indyk, Howard Karloff, Martin Strauss, Raghu Kainkaryam, and Peter Woolf.
By the way, it's not a rat, it's a Chef.
Tong Zhang, An Adaptive Forward/Backward Greedy Algorithm for Learning Sparse Representations (the technical report is here: Forward-Backward Greedy Algorithm for Learning Sparse Representations). The abstract reads:
Consider linear least squares regression where the target function is a sparse linear combination of a set of basis functions. We are interested in the problem of identifying those basis
functions with non-zero coefficients and reconstructing the target function from noisy observations. This problem is NP-hard. Two heuristics that are widely used in practice are forward and
backward greedy algorithms. First, we show that neither idea is adequate. Second, we propose a novel combination that is based on the forward greedy algorithm but takes backward steps adaptively
whenever necessary. We prove strong theoretical results showing that this procedure is effective in learning sparse representations. Experimental results support our theory.
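The skeleton of the forward/backward idea is short (a paraphrase of the algorithm, with the paper's careful thresholds and stopping rules simplified): greedily add the feature that most reduces the squared error, then adaptively delete any selected feature whose removal costs less than half of the last forward gain.

import numpy as np

def fit_err(X, y, S):
    """Squared error of the least-squares fit restricted to feature set S."""
    if not S:
        return float(y @ y)
    w, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
    r = y - X[:, S] @ w
    return float(r @ r)

def foba(X, y, eps=1e-3, max_steps=50):
    S = []
    for _ in range(max_steps):
        # Forward step: add the feature that most reduces the error.
        best_j = min((j for j in range(X.shape[1]) if j not in S),
                     key=lambda j: fit_err(X, y, S + [j]))
        gain = fit_err(X, y, S) - fit_err(X, y, S + [best_j])
        if gain < eps:
            break
        S.append(best_j)
        # Backward steps: drop features whose removal costs < gain / 2.
        while len(S) > 1:
            worst = min(S, key=lambda j: fit_err(X, y, [i for i in S if i != j]))
            if fit_err(X, y, [i for i in S if i != worst]) - fit_err(X, y, S) < gain / 2:
                S.remove(worst)
            else:
                break
    return sorted(S)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X[:, [2, 7, 11]] @ np.array([1.5, -2.0, 1.0]) + 0.01 * rng.normal(size=100)
print(foba(X, y))   # typically recovers [2, 7, 11]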
Nir Ailon, Efficient Dimension Reduction. The abstract reads:
The Johnson-Lindenstrauss dimension reduction idea using random projections was discovered in the early 80's. Since then many "computer science friendly" versions were published, offering
constant factor but no big-Oh improvements in the runtime. Two years ago Ailon and Chazelle showed a nontrivial algorithm with the first asymptotic improvement, and suggested the question: What
is the exact complexity of J-L computation from d dimensions to k dimensions? An O(d log d) upper bound is implied by A-C for k up to d^{1/3} (in the L2 to L2) case. In this talk I will show how
to achieve this bound for k up to d^{1/2} combining techniques from the theories of error correcting codes and probability in Banach spaces. This is based on joint work with Edo Liberty.
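A simplified sketch in this spirit (a subsampled randomized Hadamard transform; the actual Ailon–Chazelle and Ailon–Liberty constructions differ in the final projection step):

import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
d, k = 1024, 64                      # d must be a power of two for hadamard()

H = hadamard(d) / np.sqrt(d)         # orthonormal Walsh-Hadamard matrix
signs = rng.choice([-1.0, 1.0], d)   # random diagonal of signs D
rows = rng.choice(d, size=k, replace=False)

def project(x):
    """Map R^d -> R^k: subsample the mixed vector H @ (D @ x), rescaled.
    NB: H @ v here costs O(d^2); a real implementation uses an O(d log d) FWHT."""
    return np.sqrt(d / k) * (H @ (signs * x))[rows]

x, y = rng.normal(size=d), rng.normal(size=d)
print("true distance     :", np.linalg.norm(x - y))
print("projected distance:", np.linalg.norm(project(x) - project(y)))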
Yoram Singer, Efficient Projection Algorithms for Learning Sparse Representations from High Dimensional Data. The abstract reads:
Many machine learning tasks can be cast as constrained optimization problems. The talk focuses on efficient algorithms for learning tasks which are cast as optimization problems subject to L1 and
hyper-box constraints. The end results are typically sparse and accurate models. We start with an overview of existing projection algorithms onto the simplex. We then describe a linear time
projection for dense input spaces. Last, we describe a new efficient projection algorithm for very high dimensional spaces. We demonstrate the merits of the algorithm in experiments with
large scale image and text classification.
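The workhorse primitive here, Euclidean projection onto the probability simplex, fits in a few lines (the well-known sort-and-threshold routine; the linear-time and very-high-dimensional variants mentioned in the abstract refine this). Projection onto an L1 ball reduces to the same routine applied to the absolute values.

import numpy as np

def project_to_simplex(v, z=1.0):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = z}."""
    u = np.sort(v)[::-1]                       # sort descending
    css = np.cumsum(u)
    # largest index rho with u[rho] - (css[rho] - z) / (rho + 1) > 0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - z)[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

w = project_to_simplex(np.array([0.8, 1.2, -0.3, 0.5]))
print(w, w.sum())   # nonnegative entries summing to 1: [0.3, 0.7, 0, 0]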
Kenneth Clarkson, Tighter Bounds for Random Projections of Manifolds (this is the report). The abstract reads:
The Johnson-Lindenstrauss random projection lemma gives a simple way to reduce the dimensionality of a set of points while approximately preserving their pairwise Euclidean distances. The most
direct application of the lemma applies to a finite set of points, but recent work has extended the technique to affine subspaces, smooth manifolds, and sets of bounded doubling dimension; in
each case, a projection to a sufficiently large dimension k implies that all pairwise distances are approximately preserved with high probability. Here the case of random projection of a smooth
manifold (submanifold of R^m) is considered, and a previous analysis is sharpened, giving an upper bound for k that depends on the surface area, total absolute curvature, and a few other
quantities associated with the manifold, and in particular does not depend on the ambient dimension m or the reach of the manifold.
Sanjoy Dasgupta, Random Projection Trees and Low Dimensional Manifolds. The abstract reads:
The curse of dimensionality has traditionally been the bane of nonparametric statistics (histograms, kernel density estimation, nearest neighbor search, and so on), as reflected in running times
and convergence rates that are exponentially bad in the dimension. This problem is all the more pressing as data sets get increasingly high dimensional. Recently the field has been rejuvenated
substantially, in part by the realization that a lot of real-world data which appears high-dimensional in fact has low "intrinsic" dimension in the sense of lying close to a low-dimensional
manifold. In the past few years, there has been a huge interest in learning such manifolds from data, and then using the learned structure to transform the data into a lower dimensional space
where standard statistical methods generically work better.
I'll exhibit a way to benefit from intrinsic low dimensionality without having to go to the trouble of explicitly learning its fine structure. Specifically, I'll present a simple variant of the
ubiquitous k-d tree (a spatial data structure widely used in machine learning and statistics) that is provably adaptive to low dimensional structure. We call this a "random projection tree" (RP tree).
Along the way, I'll discuss different notions of intrinsic dimension -- motivated by manifolds, by local statistics, and by analysis on metric spaces -- and relate them. I'll then prove that RP
trees require resources that depend only on these dimensions rather than the dimension of the space in which the data happens to be situated. This is work with Yoav Freund (UC San Diego).
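The splitting rule itself is only a few lines (a bare-bones sketch; the actual RP tree rule also randomizes the split point around the median and uses a second, distance-based split, both omitted here): project the points in a cell onto a random direction and split at the median.

import numpy as np

rng = np.random.default_rng(0)

def build_rp_tree(points, min_size=10):
    """Recursively split by the median along a random direction (simplified RP tree)."""
    if len(points) <= min_size:
        return {"leaf": points}
    u = rng.normal(size=points.shape[1])
    u /= np.linalg.norm(u)                 # random unit direction
    proj = points @ u
    t = np.median(proj)
    left, right = points[proj <= t], points[proj > t]
    if len(left) == 0 or len(right) == 0:  # degenerate split; stop
        return {"leaf": points}
    return {"dir": u, "thresh": t,
            "left": build_rp_tree(left, min_size),
            "right": build_rp_tree(right, min_size)}

# Data near a low-dimensional manifold embedded in R^50: a noisy 1-d curve.
t = rng.uniform(0, 1, size=500)
X = np.outer(t, rng.normal(size=50)) + 0.01 * rng.normal(size=(500, 50))
tree = build_rp_tree(X)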
Lars Kai Hansen, Generalization in High-Dimensional Matrix Factorization. The abstract reads:
While the generalization performance of high-dimensional principal component analysis is quite well understood, matrix factorizations like independent component analysis, non-negative matrix
factorization, and clustering are less investigated for generalizability. I will review theoretical results for PCA and heuristics used to improve PCA test performance, and discuss extensions to
high-dimensional ICA, NMF, and clustering.
Holly Jin (at LinkedIn!), Exploring Sparse NonNegative Matrix Factorization. The abstract reads:
We explore the use of basis pursuit denoising (BPDN) for sparse nonnegative matrix factorization (sparse NMF). A matrix A is approximated by low-rank factors UDV', where U and V are sparse with
unit-norm columns, and D is diagonal. We use an active-set BPDN solver with warm starts to compute the rows of U and V in turn. (Friedlander and Hatz have developed a similar formulation for both
matrices and tensors.) We present computational results and discuss the benefits of sparse NMF for some real matrix applications. This is joint work with Michael Saunders.
Compressed Counting and Stable Random Projections by Ping Li. The abstract reads:
The method of stable random projections has become a popular tool for dimension reduction, in particular, for efficiently computing pairwise distances in massive high-dimensional data (including
dynamic streaming data) matrices, with many applications in data mining and machine learning such as clustering, nearest neighbors, kernel methods etc.. Closely related to stable random
projections, Compressed Counting (CC) is recently developed to efficiently compute Lp frequency moments of a single dynamic data stream. CC exhibits a dramatic improvement over stable random
projections when p is about 1. Applications of CC include estimating entropy moments of data streams and statistical parameter estimations in dynamic data using low memory.
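The underlying mechanism is easy to demo for p = 2, where the 2-stable distribution is the Gaussian (a toy version; general p uses p-stable samples with a median-type estimator, and Compressed Counting's gains near p = 1 come from a further refinement not shown here):

import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 200        # universe size, number of projections (counters)

# Linear sketch y = R f, with R Gaussian: each stream update (i, delta)
# touches only the k counters, never the full frequency vector f.
R = rng.normal(size=(k, n))     # in practice generated pseudo-randomly on the fly
y = np.zeros(k)
f = np.zeros(n)                 # kept here only to check the answer

for _ in range(50_000):         # a random insertion/deletion stream
    i, delta = rng.integers(n), rng.choice([-1, 1, 2])
    f[i] += delta
    y += delta * R[:, i]

est_F2 = np.mean(y**2)          # E[(R f)_j^2] = sum_i f_i^2 for Gaussian entries
print("true F2:", f @ f, " estimate:", est_F2)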
Ronald Coifman, Diffusion Geometries and Harmonic Analysis on Data Sets. The abstract reads:
We discuss the emergence of self organization of data, either through eigenvectors of affinity operators, or equivalently through a multiscale ontology. We illustrate these ideas on images and
audio files as well as molecular dynamics.
Thomas Blumensath and Mike Davies have set up a larger web page on Compressed Sensing and their attendant research in that field.
This one is a little old but it just came out on my radar screen and it is not on the Rice page yet. It features an A2I paper by Sami Kirolos, Tamer Ragheb, Jason Laska, Marco F. Duarte, Yehia Massoud, and Richard Baraniuk, Practical Issues in Implementing Analog-to-Information Converters. The abstract reads:
The stability and programmability of digital signal processing systems have motivated engineers to move the analog-to-digital conversion (ADC) process closer and closer to the front end of many
signal processing systems in order to perform as much processing as possible in the digital domain. Unfortunately, many important applications, including radar and communication systems, involve
wideband signals that seriously stress modern ADCs; sampling these signals above the Nyquist rate is in some cases challenging and in others impossible. While wideband signals by definition have
a large bandwidth, often the amount of information they carry per second is much lower; that is, they are compressible in some sense. The first contribution of this paper is a new framework for
wideband signal acquisition purpose-built for compressible signals that enables sub-Nyquist data acquisition via an analog-to-information converter (AIC). The framework is based on the recently
developed theory of compressive sensing, in which a small number of non-adaptive, randomized measurements are sufficient to reconstruct compressible signals. The second contribution of this paper
is an AIC implementation design and study of the tradeoffs and nonidealities introduced by real hardware. The goal is to identify and optimize the parameters that dominate the overall system performance.
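A discrete-time cartoon of one such AIC architecture, the random demodulator, may help fix ideas (a simplification that ignores exactly the circuit-level nonidealities the paper studies): multiply the incoming signal by a pseudorandom ±1 chipping sequence, integrate over short windows, and sample at the low rate; the result is a linear sketch y = Φx to which compressive-sensing reconstruction applies.

import numpy as np

rng = np.random.default_rng(0)
N, M = 1024, 128                       # Nyquist-rate samples, low-rate measurements

# Sparse multitone signal: a few active frequencies out of N.
freqs = rng.choice(N, size=5, replace=False)
t = np.arange(N)
x = sum(np.cos(2 * np.pi * f * t / N) for f in freqs)

chips = rng.choice([-1.0, 1.0], N)     # pseudorandom demodulating sequence
demod = chips * x
y = demod.reshape(M, N // M).sum(axis=1)   # integrate-and-dump at rate M

# Equivalent measurement matrix acting on x (useful for CS reconstruction):
Phi = np.zeros((M, N))
for m in range(M):
    Phi[m, m * (N // M):(m + 1) * (N // M)] = chips[m * (N // M):(m + 1) * (N // M)]
print(np.allclose(Phi @ x, y))         # True: y is a linear sketch of x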
Finally, CS in Space
The eighth annual NASA Earth Science Technology Conference (ESTC) was held June 24-26, 2008, at the University of Maryland University College and showcased a wide array of technology research and development related to NASA's Earth science endeavors. The papers are here. Of particular interest is this presentation:
Novel Distributed Wavelet Transforms and Routing Algorithms for Efficient Data Gathering in Sensor Webs, by Antonio Ortega, G. Shen, S. Lee, S.W. Lee, S. Pattem, A. Tu, B. Krishnamachari, M. Cheng, S. Dolinar, A. Kiely, M. Klimesh, and H. Xie. On page 13, there is: Compressed Sensing for Sensor Networks.
The GRETSI conference occurred a year ago. This is one of the papers (in French with an English abstract), entitled Quelques Applications du Compressed Sensing en Astronomie, by David Mary and Olivier Michel. The abstract reads:
We investigate in this communication how recently introduced “ Compressed Sensing” methods can be applied to some important problems in observational Astronomy. The mathematical background is
first outlined. Some examples are then described in stellar variability and in image reconstruction for space-based observations. We finally illustrate the interest of such techniques for direct
imaging through stellar interferometry.
In this communication we propose to evaluate the potential of the recently introduced “Compressed Sensing” methods through their application to a few important problems in observational astrophysics. After briefly describing the mathematical foundations of these approaches, examples are developed in the context of stellar variability, of satellite image reconstruction, and finally in the more prospective setting of the future possibilities offered by the large multi-baseline interferometer projects for direct imaging in the plane of
No comments: | {"url":"http://nuit-blanche.blogspot.com/2008/07/cs-mmds-papers-cs-page-a2i-converters.html","timestamp":"2014-04-17T01:46:05Z","content_type":null,"content_length":"361107","record_id":"<urn:uuid:740d623b-f4e8-496a-8a47-feefc54d9750>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00075-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding Tangent between Point in space and Parabolic Curve
July 8th 2010, 11:38 AM #1
Jul 2010
Finding Tangent between Point in space and Parabolic Curve
First off I am not 100% sure I am in the correct forum, so I apologize in advance if I am not...
The problem I have is in reference to the psychrometric chart used in HVAC applications. On the psychrometric chart, the saturation curve (100% humidity) is a parabolic-shaped curve, so I would
think it can be defined by an equation. I also know a point in space which is not at 100% humidity, so this point sits off the parabolic saturation curve; let’s call this point "A".
What I want to find out is from point "A", how can I find the tangent where point "A" and the tangent point would form a straight line tangent to the parabolic curve?
I am sure I am not explaining this well and I can provide more detail as/if questions come up but I am not sure what more is needed...
Thanks in advance for any help!
Suppose the curve is the graph of y = x^2 and A = (x1, y1). Suppose the tangent point is P = (x0, y0); then the equation of the tangent line is y0 + y = 2*x0*x. Since A is on this line, y0 + y1 = 2*x0*x1. Together with the equation y0 = x0^2, you can solve these and get the value of (x0, y0).
xxp9...thanks for the response...
I am not sure I am following your example... would there not still be two unknowns if I try to solve as you suggest?
Would it be possible to show me an example of what you mean?
Let A = (0.5, 0); we need to find P = (x0, y0) with the following two equations:
y0 = x0
y0 = x0^2
we get (0,0) and (1,1) as the result.
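Here is a small script (my sketch, not from the thread) that carries out the same computation for a general external point A = (x1, y1):

```python
# Tangent points on y = x^2 from an external point A = (x1, y1),
# using the relations derived above:
#   y0 = x0**2  and  y0 + y1 = 2*x0*x1  =>  x0**2 - 2*x1*x0 + y1 = 0
import math

def tangent_points(x1, y1):
    disc = 4 * x1 * x1 - 4 * y1
    if disc < 0:
        return []            # A lies "above" the parabola: no real tangents
    roots = {(2 * x1 + s * math.sqrt(disc)) / 2 for s in (1, -1)}
    return [(x0, x0 * x0) for x0 in sorted(roots)]

print(tangent_points(0.5, 0))   # [(0.0, 0.0), (1.0, 1.0)], as in the example
```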
| {"url":"http://mathhelpforum.com/calculus/150408-finding-tangent-between-point-space-parabolic-curve.html","timestamp":"2014-04-20T03:36:25Z","content_type":null,"content_length":"37910","record_id":"<urn:uuid:bd85aca9-264c-47aa-a29c-02f06b78bd4a>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00520-ip-10-147-4-33.ec2.internal.warc.gz"} |
Triple Integral Help
Given the function f(x, y, z) = y and the region bounded by x + y + z = 2, x^2 + z^2 = 1, and y = 0, integrate the given function over the indicated region.
Let's re-arrange, so in effect what we have is a plane and a cylinder of radius 1: $z=2-x-y$ and $z= \sqrt{1-x^2}$. To find our domain in the xy-plane, set these equal to each other: $2-x-y= \sqrt{1-x^2}$, so $y= 2-x- \sqrt{1-x^2}$. Now, what we are looking for is $\iiint y\,dV$. We know our z bounds (between the 2 equations we formatted). But what about x and y? Well, we might want to evaluate y last because we have the function y as our density. So let us find y when x = 0: $y= 2- \sqrt{1}$. So now we know y runs from 0 to the above. Find your x bounds, and decide which z is bigger (either the cylinder or the plane), and put in your bounds accordingly. We are done!
From that, I got that the limits of integration are: $z \in [-\sqrt{1-x^2},\ \sqrt{1-x^2}]$, $y \in [0,\ 2-x-\sqrt{1-x^2}]$, $x \in [-1, 1]$. The limits on z are from the cylinder equation, bounded on the y-axis by the intersection of the cylinder and plane. Does this sound right? Then the limits on x are just from "end to end" of the cylinder.
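One way to sanity-check these proposed limits numerically (my sketch, not from the thread; it simply evaluates the iterated integral exactly as written above):

```python
# Numerically evaluate the proposed iterated integral with SciPy:
# x in [-1, 1], y in [0, 2 - x - sqrt(1 - x^2)],
# z in [-sqrt(1 - x^2), sqrt(1 - x^2)], integrand f = y.
from math import sqrt
from scipy.integrate import tplquad

# tplquad expects f(z, y, x): x is the outermost variable, z the innermost.
val, err = tplquad(
    lambda z, y, x: y,
    -1.0, 1.0,                              # x limits
    lambda x: 0.0,                          # y lower limit
    lambda x: 2.0 - x - sqrt(1.0 - x * x),  # y upper limit
    lambda x, y: -sqrt(1.0 - x * x),        # z lower limit
    lambda x, y: sqrt(1.0 - x * x),         # z upper limit
)
print(val, err)
```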
Typically, when we have an integral such as $\iiint y\,dV$, it is easier to evaluate y in terms of constant bounds (i.e. integrate y last), but you don't have to. In this case it might be easier to find the bounds by integrating x last. So let us do that. $y= 2-x- \sqrt{1-x^2}$. Set y = 0 and find your bounds for x. Then y of course will run over $0 \le y \le 2-x- \sqrt{1-x^2}$. In the z direction you are bounded by both of the equations we derived. So in fact, our bounds would look something like $\int_{\sqrt{1-x^2}}^{2-x-y} dz$. Of course, since I haven't actually figured out the bounds for x, I am only guessing that $\sqrt{1-x^2} \le 2-x-y$ for region R. Edit - it is 3am and this is the most I can muster for tonight; I will actually compute this tomorrow if somebody else hasn't done so already.
Thanks much for your help, I think I may be able to run with it now! | {"url":"http://mathhelpforum.com/calculus/139099-triple-integral-help.html","timestamp":"2014-04-20T23:47:09Z","content_type":null,"content_length":"43337","record_id":"<urn:uuid:2e3f09dc-ba77-4536-81fa-8f6493145a55>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00466-ip-10-147-4-33.ec2.internal.warc.gz"} |
2D Rotation
Example of a 2D rotation through an angle φ where the coordinates x, y go into x', y'. Note that φ is positive for a counterclockwise rotation and that the rotation is about the origin (0, 0).
Derive the formula for rotation
(old coordinates are (x, y) and the new coordinates are (x', y'))
θ = initial angle, φ = angle of rotation.
x = r cos θ
y = r sin θ
x' = r cos ( θ + φ ) = r cos θ cos φ - r sin θ sin φ
y' = r sin ( θ + φ ) = r sin θ cos φ + r cos θ sin φ
x' = x cos φ - y sin φ
y' = y cos φ + x sin φ
What if we want to rotate about another point rather than the origin, e.g., the center of an object? Then we have the same problem as with scaling. A solution to this problem is to perform several transformations rather than just one: translate the object so that the rotation point is at the origin, rotate about the origin, and then translate back.
Now we could apply the 3 transformations to the object one at a time. But this is inefficient, especially when the object has many points, as we will see later. It would be nice to be able to compose
the transformations into one and then apply this total transformation to the object.
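A minimal sketch in code (my addition, not from the original page) of rotating about an arbitrary point by composing the three transformations:

```python
# Rotate (x, y) by angle phi about a pivot (cx, cy): translate, rotate,
# translate back, using x' = x cos(phi) - y sin(phi), y' = x sin(phi) + y cos(phi).
import math

def rotate_about(x, y, phi, cx=0.0, cy=0.0):
    tx, ty = x - cx, y - cy                       # 1. pivot to origin
    rx = tx * math.cos(phi) - ty * math.sin(phi)  # 2. rotate about origin
    ry = tx * math.sin(phi) + ty * math.cos(phi)
    return rx + cx, ry + cy                       # 3. translate back

print(rotate_about(1.0, 0.0, math.pi / 2))             # ~(0.0, 1.0) about origin
print(rotate_about(2.0, 1.0, math.pi, cx=1.0, cy=1.0)) # ~(0.0, 1.0) about (1, 1)
```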
Last changed June 02, 1999, G. Scott Owen, owen@siggraph.org | {"url":"http://www.siggraph.org/education/materials/HyperGraph/modeling/mod_tran/2drota.htm","timestamp":"2014-04-17T06:46:49Z","content_type":null,"content_length":"3086","record_id":"<urn:uuid:7237ef50-f2f1-4c4d-95a3-d141367fc5e7>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00583-ip-10-147-4-33.ec2.internal.warc.gz"} |
Born-Oppenheimer approximation
Born-Oppenheimer approximation
In quantum mechanics, the Born–Oppenheimer approximation is a method for approximating the energy of a quantum mechanical system by identifying "light" and "heavy" degrees of freedom and then solving for the "light" dynamics as if it depended only on the static configuration of the "heavy" degrees of freedom.
Historically this is motivated by, and still heavily used in practice for, the computation of energy spectra of molecules, where the atomic nuclei are much heavier than the electrons, so that their dynamics can be split off to a good degree of approximation.
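Schematically (an illustration added here, not part of the nLab entry), the standard ansatz factorizes the molecular wavefunction into a nuclear and an electronic part, with the electronic problem solved at each fixed nuclear configuration R:

```latex
% Born-Oppenheimer ansatz (schematic):
\Psi(r, R) \;\approx\; \chi(R)\,\phi_R(r),
\qquad
\hat H_{\mathrm{el}}(r; R)\,\phi_R(r) \;=\; E_{\mathrm{el}}(R)\,\phi_R(r),
% the electronic energy then acts as an effective potential
% for the heavy (nuclear) degrees of freedom:
\Big[ -\sum_A \tfrac{\hbar^2}{2 M_A}\,\nabla_A^2 \;+\; E_{\mathrm{el}}(R) \Big]\chi(R) \;=\; E\,\chi(R).
```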
The original reference is
• Max Born, Robert Oppenheimer, Zur Quantentheorie der Molekeln, Annalen der Physik 389, Nr. 20 (1927), pp. 457–484, doi:10.1002/andp.19273892002.
An early textbook-like account is
• J.C. Slater, Quantum Theory of Molecules and Solids, Vol. 1: Electronic Structure of Molecules, American Journal of Physics 32 (1964), p. 65, doi:10.1119/1.1970097.
A review is, for instance, in section 2.2, The Born-Oppenheimer approximation, of
• Peter David Haynes, Linear-scaling methods in ab initio quantum-mechanical calculations PhD thesis (1998) (web)
Revised on June 3, 2012 05:48:21 by
Toby Bartels | {"url":"http://www.ncatlab.org/nlab/show/Born-Oppenheimer+approximation","timestamp":"2014-04-17T18:36:47Z","content_type":null,"content_length":"30620","record_id":"<urn:uuid:a4925706-bd1c-4bc6-aa5c-23fd291f347e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00620-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Tutors
Roanoke, TX 76262
Physics, Chemistry, and Math Tutor
...have had a career in astronomy which included Hubble Space Telescope operations, where I became an expert in Excel and SQL, and teaching college-level astronomy. This also involved teaching and using geometry, algebra, trigonometry, and calculus. Recently...
Offering 10+ subjects including physics | {"url":"http://www.wyzant.com/Grapevine_TX_physics_tutors.aspx?g=3OHW","timestamp":"2014-04-19T07:43:04Z","content_type":null,"content_length":"60788","record_id":"<urn:uuid:d6a57f55-4d81-40e5-b02f-786d80da5256>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Courses
NW 262-PH. Concepts of Physical Science
A one-semester study of selected topics in physics and the mathematical analysis of physical problems. The student should be already competent with algebra, and a few additional mathematical tools
will be introduced as needed. Four class periods and two hours of laboratory per week. Fee (U) (5)
PH 107/108. Elementary Physics
A two-semester course based on algebra and elementary trigonometry. This course is suitable preparation to meet the entrance requirements of most dental, medical and pharmacy schools. Three class
periods and two hours of laboratory per week. PH 108 must be preceded by PH 107. Fee (U) (4,4)
PH 152. Preparatory Analytical Physics
A course in physical-problem analysis and solution using calculus and other mathematical tools required for PH 201. Recommended for science and mathematics majors who need/wish to study PH 201, but
whose mathematical and physical-problem solving experience is limited.
Pre- or Co-requisite: MA 106. Fee (U) (4)
PH 200. Physics for the Health Sciences
A survey of topics in physics applied to the human body and to medical diagnostic and treatment devices. Fee (U) (3)
PH 201/202. Introduction to Analytical Physics I and II
An introduction to Newtonian mechanics, thermal physics, waves, electromagnetism and optics using calculus. Familiarity with algebra, trigonometry and calculus is assumed. Four lectures and two hours
of laboratory per week, plus one hour of recitation per week. Prerequisite: MA 106 (may be concurrent) or permission of instructor. Fee (U) (5, 5)
PH 301. Modern Physics
The special theory of relativity is developed along with the introduction of basic ideas and equations of quantum physics. Topics include Lorentz transformations, relativistic mechanics, collisions
and conservation of energy-momentum, electromagnetism and relativity, blackbody radiation, photoelectric effect, Compton effect, and the Schrödinger equation. Prerequisites: MA 107 and PH 202 or
permission of the instructor. Fee (U) (3)
PH 303. Electromagnetic Waves and Optics
A study of geometric and wave optics, interference, diffraction and polarization of electromagnetic waves. Two lectures and two hours of laboratory per week. Prerequisites: PH 202 and MA 208 or
permission of instructor. Fee (U) (3)
PH 311. Experimental Modern Physics
The student performs a number of experiments to explore and verify experimental implications of relativity and quantum mechanics. Experiments include determining Planck's constant, speed of light,
charge-to-mass ratio of electron, Franck-Hertz experiment, Bragg scattering, Rutherford scattering, and radioactive decay processes. Prerequisite: PH 301 or permission of instructor. Fee (U) (3)
PH 315/316. Mathematical Methods for Physics
Mathematical methods for physics: differential equations; coordinate systems and differential geometry; special functions; linear operators, groups and representation theory; complex analysis;
Fourier series and integral transforms. Applications to problems in electromagnetic theory, classical mechanics and quantum mechanics will be presented. Four lectures per week. Prerequisite MA 208
and PH 201/202. Fee (U) (4,4)
PH 321. Intermediate Classical Mechanics
A study of the classical dynamics of oscillators, gravitational systems, calculus of variations and the lagrangian and hamiltonian formalisms. Three lectures per week. Prerequisites: PH 202 and MA
208 or permission of instructor. Fee (U) (4)
PH 325. Thermodynamics and Statistical Physics
A study of the theory and applications of the first and second laws of thermodynamics, thermodynamic potentials, kinetic theory, classical and quantum statistical mechanics and ensemble theory to
thermodynamic systems. Four lecture hours per week. Prerequisites: PH 202 and MA 107 or permission of instructor. Fee (U-G) (4)
PH 331. Electromagnetic Theory I
The theory of classical electric and magnetic fields is developed covering such topics as electrostatics, magnetostatics, scalar and vector potentials, fields in matter, electrodynamics and Maxwell's
equations, conservation laws and radiation. Prerequisites: MA 208 and PH 301 or permission of the instructor. Fee (U) (4)
PH 351. Analog Electronics I
Survey of electronic devices. Measurement of continuously varying quantities in time and frequency domains. Rectifiers, amplifiers, feedback, with emphasis on operational amplifiers and their uses.
Three lectures and three hours of laboratory per week. Prerequisite: PH 201 or permission of instructor. Fee (U) (4)
PH 352. Analog Electronics II
Continuation of PH 351. Use of computer-aided design programs. Complex frequency plane, resonance, scaling, and coupled circuits. Laplace transform methods. Fourier Series and Fourier transforms.
Two-port network. Fee (U) (3)
PH 411/412. Theoretical Physics
A study of mathematical methods of physics, including boundary-value problems, special functions, linear operators and group theory, with applications to problems in electromagnetic theory, classical
and quantum mechanics. Three lectures per week. Prerequisites: PH 331 and MA 334 or permission of instructor. Fee (U-G) (3, 3)
PH 421. Quantum Theory I
The mathematical foundations of quantum mechanics are presented with treatment of simple systems such as barriers, square wells, harmonic oscillator, and central potentials with the development of
approximation methods and the theory of angular momentum for single particles. Prerequisites: MA 208 and PH 301 or permission of the instructor. Fee (U) (4)
PH 422. Quantum Theory II
Applications of quantum mechanics to multi-particle systems. Time dependent perturbation theory, angular momentum coupling, atomic spectra, quantum statistics, radiation and scattering theory, and
introduction to relativistic quantum theory. Prerequisite: PH 421 or permission of the instructor. Fee (U) (4)
PH 427/428. General Relativity and Gravity
Tensor analysis in classical field theory, Einstein's field equations, the Schwarzschild solution, linearized field equations, experimental gravitation, cosmological models and gravitational
collapse. Prerequisites: PH 322 and PH 332 or permission of instructor. Fee (U-G) (3, 3)
PH 461. Computational Physics I
An introduction to numerical methods frequently used in physics for solving problems which cannot be solved analytically in a closed mathematical form. Topics include numerical solution of problems
dealing with oscillatory motion, gravitation, electrical fields, fluid dynamics, heat conduction, Schrödinger equation, and elastic wave motion. Prerequisites are PH 321 and PH 331. Fee (U) (3)
PH 480. Special Topics
By arrangement with appropriate staff. Fee (U-G) (3)
PH 491, 492, 493. Undergraduate Tutorial and Research Fee (U) (3,6,9)
PH 495. Senior Seminar
This seminar, for junior and senior physics majors, features student presentations on special research projects and selected readings in current literature. Fee (U) (1)
PH 499. Honors Thesis: Fee (U) (3)
Astronomy Courses
AS 100. The Astronomical Universe
A descriptive study of basic astronomy including the planets and the apparent motions of celestial objects, the seasons, constellations, comets and meteors, stars, galaxies and large-scale structure
of the universe, plus current events in space exploration. There will be planetarium demonstrations and telescope observations. Some hands-on lab experiences are provided. (U) (3)
NW 263-AS. Modern Astronomy with Laboratory (same as AS 102)
A one-semester survey of astronomy including ancient Greek astronomy, the motions of the night sky, the solar system, other solar systems, the lives of stars including the Sun, and the origin and
fate of the universe. This will be a four lecture hour/two hour lab course. (U) (5)
AS 301. Modern Astronomical Techniques
Introduction to techniques and equipment used in modern astronomy with emphasis on detection and analysis of electromagnetic radiation and the fundamental properties of telescopes and detectors.
Lectures and laboratory. Laboratories focus on observational techniques and data reduction. Prerequisites: AS 102 and PH 202. (U) (3)
AS 311. Stellar Astrophysics
The first semester of an introductory course on stellar astrophysics using nearly every branch of physics. Emphasis is on the underlying physical principles; including the nature of stars, stellar
energy generation, stellar structure and evolution, astrophysical neutrinos, binary stars, white dwarfs, neutron stars and pulsars, and novae and supernovae. Prerequisites: AS 102 and PH 202. (U) (3)
AS 312. Galaxies and Cosmology
A continuation of AS 311. The course covers the application of physical principles to the inter-stellar medium, the kinematics and dynamics of stars and stellar systems, galactic structure, formation
and evolution of galaxies, relativity, Big Bang and inflationary models of the origin of the universe, and the large-scale structure and ultimate fate of the universe. Prerequisite: AS 311. (U) (3)
AS 461. Computational Astrophysics
An introduction to numerical methods frequently used in astrophysics for solving problems which cannot be solved analytically in a closed mathematical form. Prerequisites are PH 321 and PH 331. (U)
AS 480. Special Topics
By arrangement with appropriate staff. (U-G) (3)
AS 491, 492, 493. Undergraduate Tutorial and Research:
(U) (3,6,9)
AS 495. Senior Seminar
This seminar, for junior and senior physics majors, features student presentations on special research projects and selected readings in current literature. (U) (1)
AS 499. Honors Thesis (U) (3) | {"url":"http://www.butler.edu/physics/courses/","timestamp":"2014-04-17T06:43:36Z","content_type":null,"content_length":"24969","record_id":"<urn:uuid:b50e9f4c-bbed-4a0c-962a-cbc9fedccd07>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00030-ip-10-147-4-33.ec2.internal.warc.gz"} |
RF Power Amplifier Design Software
Class A, AB & C Operation of Single-Ended Triode RF Power Amplifiers
Author: R.J.Edwards G4FGQ © 27th March 2002
This program assists with the design of, and analyzes the performance of, triode RF power amplifiers. Cathodes are grounded. The output circuit is either a tuned tank circuit with a link coupling to
a 50-ohm load, or a Pi-matching network. A Cga neutralising circuit is omitted but can be included by centre-tapping the grid tuning coil with negligible effect on other amplifier performance
aspects. It is necessary to enter in this program data from tube manufacturers' sheets or characteristic curves. If users are not familiar with triode tube characteristics this program will be of
educational value.
Basic Equation
Peak cathode milliamps, Ipeak = Perveance*[Vac/Mu + Vgc]^alpha
Perveance is related to conductance. It depends on cathode surface area, on its temperature and on the electron emitting properties of the cathode materials. Doubling perveance is equivalent to
placing two identical tubes in parallel.
Alpha is an exponent between 1 and 2. It sets the curvature of a tube's characteristics. If alpha = 1 the characteristics are linear; if alpha = 2, a parabola. Theoretically alpha = 1.5, but it is typically between 1.0 and 1.6. If unknown, guess 1.25.
Mu is the tube's amplification factor. It is the number of times grid voltage affects cathode current more than anode voltage. Depending on tube geometry, Mu can lie between 3 and 150. Low Mu tubes
have a lower internal impedance and for the same power levels operate at higher currents and lower voltages. If a tube manufacturer does not explicitly give a value for Mu then it should be obtained
by inspecting the Anode-Current vs Grid-Voltage curves. Mu is the ratio of the anode voltage to the -ve grid voltage needed to just cut off the anode current. It varies a little between extremes of
DC anode voltage. Use the average value.
Vac is the lowest instantaneous +ve anode voltage relative to cathode. It is the anode DC supply volts less the peak value of the anode's RF signal voltage.
Vgc is the greatest instantaneous +ve grid voltage relative to cathode. It is the peak RF signal amplitude at the grid minus the value of the -ve grid bias.
Vac min and Vgc max occur simultaneously. Grid and anode sinewave voltages are anti-phase. Vgc must not exceed Vac or the grid will draw excessive current. The program prevents this. A large safety
factor is normal, e.g., Vgc = Vac/2.
D is the fraction of peak cathode current which is intercepted by the control grid. It depends on diameter and spacing of grid wires. D tends to be smaller for small values of Mu. D affects grid RF
drive power and grid dissipation.
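As a minimal sketch of the basic equation in code (my illustration, not part of the program; the sample values are invented):

```python
# Ipeak = Perveance * (Vac/Mu + Vgc)^alpha, in milliamps (sketch only)
def peak_cathode_ma(perveance, vac, vgc, mu, alpha):
    drive = vac / mu + vgc
    if drive <= 0:
        return 0.0   # grid biased beyond anode-current cut-off
    return perveance * drive ** alpha

print(peak_cathode_ma(perveance=2.5, vac=300.0, vgc=150.0, mu=30.0, alpha=1.25))
```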
Anode Current Operating Angle
A complete sinewave cycle extends over a time-span of 360 degrees. The operating angle divided by 360 is the fraction of time during which anode current flows. At other instants grid voltage is more
negative than the anode current cut-off value, Va/Mu. A 360 degree operating angle gives Class-A conditions. 180 to 200 degrees gives Class-AB linear conditions. Anything less than 180 degrees is
Class-C. Operating angles for high efficiency Class-C operation vary from 80 to 150 degrees, typically 120. Small angles offer highest efficiency but available power output falls because cathode
emission is not sufficient to provide the short but very heavy pulses of current.
Grid Current Operating Angle
Grid current operating angle is computed - not an entered value. Grid op-angle is considerably smaller than anode op-angle but is directly related to it.
Entering Data
To match the characteristics of a particular tube it is not possible to enter a suitable set of data in one operation. An initial set will include values such as Mu, alpha, DC supply volts, op-ang,
Qi, Qo, which the user may wish to remain fixed for a time. The most important input data are Vac(min) and Vgc(max) which set up the grid and anode signal amplitudes. IT IS IMPORTANT TO ADJUST
PERVEANCE SUCH THAT COMPUTED PEAK ANODE AND GRID CURRENTS ACCORD WITH THE MNFR's DATA AT THE VALUES OF Vac AND Vgc PREVIOUSLY ENTERED IN THE PROGRAM. The ratio D, grid current/cathode current, can
also be found from these values.
Modeling Tube Characteristics
Modeling tube characteristics involves inserting in the program values of Mu, Fraction D, alpha and perveance which are proper to the tube itself and which remain fixed while setting up operating
conditions and evaluating performance. Examine the tube's anode current versus grid or anode voltage curves and decide whether the graphs are fairly straight or are markedly curved. Note the anode
current which flows when the anode voltage Vac is, say, 500 volts and the grid voltage Vgc is zero. Also note the smaller current which flows when the anode voltage is reduced to, say, 200 volts
(also at zero grid volts). Note that the basic equation is simplified by putting Vgc=0. Alpha is obtained more easily.
Now return to the program. Insert some sensible data including Mu. Set the grid voltage to zero and make a sensible guess for alpha. Set the anode voltage to 500 volts and vary perveance until peak
anode current is the same value as is on the manufacturers' data sheets. Then reduce anode voltage to 200 volts and check peak anode current falls to the lower manufacturer's value. If not, then
READJUST ALPHA AND REPEAT READJUSTMENT OF PERVEANCE UNTIL THE PROGRAM TRACKS THE ANODE CURRENT VERSUS ANODE VOLTAGE CHARACTERISTIC BY VARYING ANODE VOLTAGE ONLY with Vgrid=0.
The foregoing must be done using PEAK anode or cathode currents. However, tube performance is insensitive to small changes in curvature. So perveance should always be the last parameter to be
adjusted. Grid and anode signal amplitudes can then be adjusted independently of basic tube characteristics.
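Since at zero grid volts the basic equation reduces to Ipeak = Perveance*(Va/Mu)^alpha, the two data-sheet readings described above actually determine alpha and perveance directly; here is a small sketch of that algebraic shortcut (the sample readings below are hypothetical, not from any data sheet):

```python
import math

def fit_alpha_perveance(v1, i1, v2, i2, mu):
    # Two (Va, Ipeak) readings at Vgc = 0 fix both unknowns:
    #   I = K * (Va/Mu)^alpha  =>  alpha = ln(I1/I2) / ln(V1/V2)
    alpha = math.log(i1 / i2) / math.log(v1 / v2)
    perveance = i1 / (v1 / mu) ** alpha
    return alpha, perveance

print(fit_alpha_perveance(500.0, 400.0, 200.0, 120.0, mu=30.0))
```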
The fraction D of cathode current drawn to the grid is small when Mu is smaller than about 15 and when Vgc is small relative to Vac it is crudely in the range 0.01 to 0.07 For medium values of Mu,
say 15 to 60, D is crudely 0.07 to 0.25. For high Mu values, and when Vgc is not much smaller than Vac, D can be as high as 0.25 to 0.4 In general, in higher power tubes due to secondary emission at
the grid, grid current is an uncertain quantity. It is necessary to allow considerable latitude in grid drive power requirements. But do not overlook Fraction D in this program.
Manufacturers do not have a standard method of presenting tube data. But always look for grid and anode currents which flow at Vgc max and Vac min. If not available all is not lost. Estimate
perveance and alpha such that computed performance matches manufacturer's claimed data including drive power, anode loss and power output under one set of conditions. Other conditions can then be explored.
The Pi-tank circuit matches to 50-ohms. When the tune capacitor has the same C as the LC tank both circuits have the same operating Q. If the tune capacitors differ then the Pi-tank has a higher Q
than asked for because in some circumstances a low Q may not be possible. But the computed 50-ohm match is always OK. Note that a choke-capacitor coupling is needed between anode and Pi-network.
The tuned anode tank has a link-coupling to a 50-ohm load resistance. The link is assumed to be placed over the cold RF end of the tank coil. Computed step-down turns ratio is an approximation. The
number of turns on the link may need to be increased relative to the number of turns on the tank coil.
Before finalising a design check all input and output data items. Always check anode and grid dissipations are within their specified ratings. If dissipation is too high reduce either or both Vgc max
and Vac min.
Negative grid bias is obtained from DC grid current flowing through a resistor. If a DC power supply of the same voltage is used set R = 0. Computed RF drive power, peak signal volts, grid
dissipation, etc., will remain unchanged. It is assumed the bias resistor is RF-bypassed when computing grid input impedance.
The 3-500Z is a high-Mu triode. On hitting T it is first set up as a high efficiency Class-C, grounded-cathode amplifier. Reduce the DC supply to 3000 volts and change op-angle to 201 degrees to
convert to a Class-AB2 zero-bias linear amplifier. Max ratings: Anode=500 watts, 4000 VDC, 0.4 amps DC, Grid=20 watts
L(owMu) uses a small, low-Mu, receiving-type tube as an efficient Class-C power amplifier or power oscillator.
Run this Program from the Web or Download and Run it from Your Computer
This program is self-contained and ready to use. It does not require installation. Click this link Triode1 then click Open to run from the web or Save to save the program to your hard drive. If you
save it to your hard drive, double-click the file name from Windows Explorer (Right-click Start then left-click Explore to start Windows Explorer) and it will run.
Search other ham radio sites with Ham Radio Search | {"url":"http://www.smeter.net/amplifiers/triode-rf-power-amplifiers.php","timestamp":"2014-04-16T21:57:07Z","content_type":null,"content_length":"21124","record_id":"<urn:uuid:e0607a62-b192-4021-95bc-946cb8d5dcf7>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00200-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find an eigenfunction for the finite square well eigenproblem −ψ''(x) + V(x) ψ(x) = λ ψ(x), ψ(−∞) = 0, ψ(∞) = 0, where V is a square well potential of height 5.
Start by plotting solutions of the differential equation for a range of trial values of λ.
The true eigenfunctions satisfy ψ(±∞) = 0. These can be approximated by finding functions with ψ(±R) = 0 for a suitably large R. Because the equation is symmetric in x, only even solutions, with ψ(0) = 1 and ψ'(0) = 0, need to be found. Plot ψ(R) as a function of λ.
The root is the approximate eigenvalue.
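A rough shooting-method version of the same computation in plain Python/SciPy (my sketch, not the original Mathematica code; the well width of 2 and the even-symmetry initial data are assumptions):

```python
# Shooting method for the finite square well of height 5 (sketch):
# integrate psi'' = (V(x) - E) psi from x = 0 with psi(0) = 1, psi'(0) = 0
# (even symmetry) and scan E until psi at a large cutoff R crosses zero.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

V0, R = 5.0, 10.0
V = lambda x: 0.0 if abs(x) < 1.0 else V0      # assumed well of width 2

def psi_end(E):
    rhs = lambda x, y: [y[1], (V(x) - E) * y[0]]
    sol = solve_ivp(rhs, [0.0, R], [1.0, 0.0], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]

Es = np.linspace(0.1, V0 - 0.1, 50)
vals = [psi_end(E) for E in Es]
for a, b, fa, fb in zip(Es[:-1], Es[1:], vals[:-1], vals[1:]):
    if fa * fb < 0:                            # sign change brackets a root
        print("approximate eigenvalue:", brentq(psi_end, a, b))
        break
```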
Plot the approximate eigenfunction together with solutions for nearby values of λ. | {"url":"http://wolfram.com/mathematica/new-in-9/parametric-differential-equations/eigenproblems.html","timestamp":"2014-04-21T02:05:00Z","content_type":null,"content_length":"7549","record_id":"<urn:uuid:df4f9a49-4a71-4cbc-88f2-199036c6d24f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00371-ip-10-147-4-33.ec2.internal.warc.gz"} |
Factor the following polynomial completely
Hello! You can note that 32 is 4×8, so factor out the 8. Then you'll have something of the form a²-b². You know that you can factor that into (a+b)(a-b).
Nope, $8x^2-32=8(x^2-4)$ As $x^2-4=x^2-2^2$, $x^2-4=(x-2)(x+2)$ You're asked to factorise, so don't develop 8 to the brackets :-)
How am I supposed to know when to break things up into the parentheses or to factor? And I am confused about how to get the final answer, and which set of numbers you gave me is the final answer.
I don't like to give the entire solution directly, so i give the elements... If you want it... : $8x^2-32=8*x^2-4*8=8(x^2-4)=8(x^2-2^2)=8(x-2)(x+2)$ | {"url":"http://mathhelpforum.com/algebra/32028-factor-following-polynomial-completely.html","timestamp":"2014-04-20T19:24:59Z","content_type":null,"content_length":"44306","record_id":"<urn:uuid:f7c0e47f-992e-4ae1-bb77-d14bb345439f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bernoulli equation
Flow in pipe - diameter, velocity, Reynolds number, Bernoulli equation, friction factor
Velocity of fluid in a pipe is not uniform across the section area. Therefore a mean velocity is used, and it is calculated from the continuity equation for steady flow as:
v = q / A = 4·q / (D²·π)
Pipe diameter can be calculated when the volumetric flow rate and velocity are known, as:
D = √( 4·q / (π·v) )
where: D - internal pipe diameter; q - volumetric flow rate; v - velocity; A - pipe cross section area (A = D²·π/4).
If the mass flow rate is known, then the diameter can be calculated as:
D = √( 4·w / (π·ρ·v) )
where: D - internal pipe diameter; w - mass flow rate; ρ - fluid density; v - velocity.
If the velocity of the fluid inside the pipe is small, the streamlines will be straight parallel lines. As the velocity of the fluid inside the pipe gradually increases, the streamlines will continue to be straight and parallel with the pipe wall until a velocity is reached at which the streamlines waver and suddenly break into diffused patterns. The velocity at which this occurs is called the "critical velocity". At velocities higher than "critical", the streamlines are dispersed at random throughout the pipe.
The regime of flow when the velocity is lower than "critical" is called laminar flow (or viscous or streamline flow). In the laminar regime of flow the velocity is highest on the pipe axis, and at the wall the velocity is equal to zero.
When the velocity is greater than "critical", the regime of flow is turbulent. In the turbulent regime of flow there is irregular random motion of fluid particles in directions transverse to the direction of the main flow. The velocity change in turbulent flow is more uniform than in laminar flow.
In the turbulent regime of flow, there is always a thin layer of fluid at the pipe wall which is moving in laminar flow. That layer is known as the boundary layer or laminar sub-layer. To determine the flow regime, use the Reynolds number calculator.
The nature of flow in a pipe, by the work of Osborne Reynolds, depends on the pipe diameter, the density and viscosity of the flowing fluid, and the velocity of the flow. The dimensionless Reynolds number is used; it is a combination of these four variables and may be considered to be the ratio of the dynamic forces of mass flow to the shear stress due to viscosity. The Reynolds number is:
Re = v·D / ν = ρ·v·D / μ
where: D - internal pipe diameter; v - velocity; ρ - density; ν - kinematic viscosity; μ - dynamic viscosity.
This equation can be solved using the fluid flow regime calculator.
Flow in pipes is considered to be laminar if the Reynolds number is less than 2320, and turbulent if the Reynolds number is greater than 4000. Between these two values is the "critical" zone, where the flow can be laminar or turbulent or in the process of change, and is mainly unpredictable.
When calculating the Reynolds number for a non-circular cross section, the equivalent diameter (four times the hydraulic radius, d = 4·Rh) is used, and the hydraulic radius can be calculated as:
Rh = cross section flow area / wetted perimeter
It applies to a square, rectangular, oval or circular conduit when not flowing with a full section. Because of the great variety of fluids being handled in modern industrial processes, a single equation which can be used for the flow of any fluid in a pipe offers big advantages. That equation is the Darcy formula, but one factor - the friction factor - has to be determined experimentally. This formula has a wide application in the field of fluid mechanics and is used extensively throughout this web site.
If friction losses are neglected and no energy is added to, or taken from, a piping system, the total head, H, which is the sum of the elevation head, the pressure head and the velocity head, will be constant for any point of a fluid streamline.
This is the expression of the law of head conservation for the flow of fluid in a conduit or streamline and is known as the Bernoulli equation:
Z[1] + p[1]/(ρ[1]·g) + v[1]²/(2·g) = Z[2] + p[2]/(ρ[2]·g) + v[2]²/(2·g)
where: Z[1,2] - elevation above reference level; p[1,2] - absolute pressure; v[1,2] - velocity; ρ[1,2] - density; g - acceleration of gravity.
The Bernoulli equation is used in several calculators on this site, like the pressure drop and flow rate calculator, the Venturi tube flow rate meter and Venturi effect calculator, and the orifice plate sizing and flow rate calculator.
From the Bernoulli equation all other practical formulas are derived, with modifications due to energy losses and gains.
As in a real piping system losses of energy exist and energy is added to or taken from the fluid (using pumps and turbines), these must be included in the Bernoulli equation.
For two points of one streamline in a fluid flow, the equation may be written as follows:
Z[1] + p[1]/(ρ[1]·g) + v[1]²/(2·g) + H[p] = Z[2] + p[2]/(ρ[2]·g) + v[2]²/(2·g) + H[T] + h[L]
where: Z[1,2] - elevation above reference level; p[1,2] - absolute pressure; v[1,2] - velocity; ρ[1,2] - density; h[L] - head loss due to friction in the pipe; H[p] - pump head; H[T] - turbine head; g - acceleration of gravity.
Flow in a pipe always creates energy loss due to friction. The energy loss can be measured as the static pressure drop in the direction of fluid flow, using two gauges. The general equation for pressure drop, known as Darcy's formula, expressed in meters of fluid, is:
h[L] = f · (L/D) · v² / (2·g)
where: h[L] - head loss due to friction in the pipe; f - friction coefficient; L - pipe length; v - velocity; D - internal pipe diameter; g - acceleration of gravity.
To express this equation as a pressure drop in newtons per square meter (pascals), substitution of the proper units leads to:
Δp = f · (L/D) · ρ·v² / 2
where: Δp - pressure drop due to friction in the pipe; ρ - density; f - friction coefficient; L - pipe length; v - velocity; D - internal pipe diameter; Q - volumetric flow rate (Q = v·D²·π/4).
The Darcy equation can be used for both laminar and turbulent flow regimes and for any liquid in a pipe. With some restrictions, the Darcy equation can be used for gases and vapors. The Darcy formula applies when the pipe diameter and fluid density are constant and the pipe is relatively straight.
Physical values in the Darcy formula are very obvious and can be easily obtained when the pipe properties are known, like D - pipe internal diameter and L - pipe length; when the flow rate is known, velocity can be easily calculated using the continuity equation. The only value that needs to be determined experimentally is the friction factor. For the laminar flow regime, Re < 2000, the friction factor can be calculated, but for the turbulent flow regime, where Re > 4000, experimentally obtained results are used. In the critical zone, where the Reynolds number is between 2000 and 4000, both laminar and turbulent flow regimes might occur, so the friction factor is indeterminate and has lower limits based on laminar flow, and upper limits based on turbulent flow conditions.
If the flow is laminar and the Reynolds number is smaller than 2000, the friction factor may be determined from the equation:
f = 64 / Re
where: f - friction factor; Re - Reynolds number.
When the flow is turbulent and the Reynolds number is higher than 4000, the friction factor depends on the pipe relative roughness as well as on the Reynolds number. Relative pipe roughness is the roughness of the pipe wall compared to the pipe diameter, e/D. Since the internal pipe roughness is actually independent of the pipe diameter, pipes with a smaller diameter will have higher relative roughness than pipes with a bigger diameter, and therefore pipes with smaller diameters will have higher friction factors than pipes with bigger diameters of the same material.
The most widely accepted and used data for the friction factor in the Darcy formula is the Moody diagram. On the Moody diagram the friction factor can be determined based on the value of the Reynolds number and the relative roughness.
The pressure drop is a function of the internal diameter to the fifth power. With time in service, the interior of the pipe becomes encrusted with dirt, scale and tubercles, and it is often prudent to make allowance for expected diameter changes. Also, roughness may be expected to increase with use, due to corrosion or incrustation, at a rate determined by the pipe material and the nature of the fluid.
When the thickness of the laminar sub-layer (laminar boundary layer δ) is bigger than the pipe roughness e, the flow is called flow in a hydraulically smooth pipe and the Blasius equation can be used:
f = 0.316 / Re^0.25
where: f - friction factor; Re - Reynolds number.
The boundary layer thickness can be calculated based on the Prandtl equation as:
δ = 32.8 · D / (Re · √f)
where: δ - boundary layer thickness; D - internal pipe diameter; Re - Reynolds number; f - friction factor.
For turbulent flow with Re < 100 000, the Prandtl equation can be used:
1/√f = 2·log10( Re·√f ) − 0.8
For turbulent flow with Re > 100 000, the Karman equation can be used:
1/√f = 2·log10( 3.71·D / k[r] )
where: f - friction factor; Re - Reynolds number; D - internal pipe diameter; k[r] - pipe roughness.
The above equations are used in the pressure drop and flow rate calculator.
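A simplified sketch (my illustration, not the site's calculator) that strings the above equations together: continuity, the Reynolds number, a friction factor (64/Re for laminar flow, Blasius for a smooth turbulent pipe), and the Darcy pressure drop in pascals:

```python
import math

def pressure_drop_pa(q, d, length, rho, nu):
    # q [m^3/s], d [m], length [m], rho [kg/m^3], nu [m^2/s]
    area = math.pi * d ** 2 / 4.0
    v = q / area                      # mean velocity from continuity
    re = v * d / nu                   # Reynolds number
    if re < 2320:
        f = 64.0 / re                 # laminar friction factor
    else:
        f = 0.316 / re ** 0.25        # Blasius (hydraulically smooth pipe)
    return f * (length / d) * rho * v ** 2 / 2.0

# Example: water (rho = 1000, nu = 1e-6) at 2 L/s through 50 m of DN50 pipe
print(pressure_drop_pa(q=0.002, d=0.05, length=50.0, rho=1000.0, nu=1e-6))
```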
Static pressure is the pressure of the fluid in the flow stream. Total pressure is the pressure of the fluid when it is brought to rest, i.e. when the velocity is reduced to 0.
Total pressure can be calculated using the Bernoulli theorem. Imagining that the flow at one point of the streamline is stopped without any energy loss, the Bernoulli theorem can be written as:
p[1] + ρ·v[1]²/2 = p[2] + ρ·v[2]²/2
If the velocity at point 2 is v[2] = 0, then the pressure at point 2 is the total pressure, p[2] = p[t]:
p[t] = p + ρ·v²/2
where: p - pressure; p[t] - total pressure; v - velocity; ρ - density.
The difference between total and static pressure represents fluid kinetic energy and it is called dynamic pressure.
Dynamic pressure for liquids and incompressible flow, where the density is constant, can be calculated as:
p[d] = p[t] − p = ρ·v²/2
where: p - pressure; p[t] - total pressure; p[d] - dynamic pressure; v - velocity; ρ - density.
If the dynamic pressure is measured using instruments like a Prandtl probe or a Pitot tube, the velocity at one point of the streamline can be calculated as:
v = √( 2·p[d] / ρ ) = √( 2·(p[t] − p) / ρ )
where: p - pressure; p[t] - total pressure; p[d] - dynamic pressure; v - velocity; ρ - density.
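For instance (a tiny sketch of mine, not the site's calculator):

```python
import math

def pitot_velocity(p_total, p_static, rho):
    # v = sqrt(2 * (pt - p) / rho), incompressible flow
    return math.sqrt(2.0 * (p_total - p_static) / rho)

print(pitot_velocity(101825.0, 101325.0, 1.2))   # about 28.9 m/s in air
```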
For gases and Mach numbers larger than 0.1, the effects of compressibility are not negligible.
For compressible flow calculations the gas state equation can be used. For ideal gases, the velocity for Mach number M < 1 is calculated using the following equation:
M = √( (2/(γ−1)) · ( (p[t]/p)^((γ−1)/γ) − 1 ) ),  v = M·c
where: M - Mach number, M = v/c - the ratio of the local fluid velocity to the local sound velocity; γ - isentropic coefficient.
It should be said that for M > 0.7 the given equation is not totally accurate.
If the Mach number M > 1, then a normal shock wave will occur. The equation for the velocity in front of the wave (from which M, and hence v = M·c, is obtained implicitly) is given below:
p[ti]/p = ( (γ+1)²·M² / (4·γ·M² − 2·(γ−1)) )^(γ/(γ−1)) · (1 − γ + 2·γ·M²) / (γ+1)
where: p - pressure; p[ti] - total pressure; v - velocity; M - Mach number; γ - isentropic coefficient.
The above equations are used in the Prandtl probe and Pitot tube flow velocity calculator.
Note: you can download the complete derivation of the given equations.
The flow rate of fluid required for the thermal energy - heat power transfer can be calculated as:
q = 3600 · P / (ρ · c · ΔT)
where: q - flow rate [m³/h]; ρ - density of fluid [kg/m³]; c - specific heat of fluid [kJ/kgK]; ΔT - temperature difference [K]; P - power [kW].
This relation can be used to calculate the required flow rate of, for example, water heated in a boiler, if the power of the boiler is known. In that case the temperature difference in the above equation is the change of the temperature of the fluid between the inlet and the outlet of the boiler. It should be said that an efficiency coefficient should be included in the above equation for a precise calculation.
problem 2
Is there a reason why you could not just type the problem in yourself? Many people are reluctant to open attachments for fear of viruses. Here is what you have: Consider the differential operator T:
$T = D^3+ 3D- 5I: R_3[X]\to R_3[X]$. a) Is this a linear operator? If so, what is the representing matrix of this mapping? (I presume you mean "in the standard basis".) b) Find at least one vector in the pre-image set $T^{-1}(\{0\})$. c) (Bonus) Find the pre-image set $T^{-1}(\{0\})$. ($T^{-1}(\{0\})$ is also called the "nullspace" of T.)
yourself? A good method of finding the matrix representing a mapping, in a given basis, is to apply the mapping to the basis vectors in turn, writing the result in terms of the basis. The
coefficients are the columns of the matrix. The standard basis for $R_3[X]$ is {1, x, $x^2$, $x^3$}. What is T(1)? What is T(x)? As for finding vectors in the null space, take a general polynomial, p
(x), in $R_3[X]$, apply T to it and set it equal to 0. What must the coefficients of p be so that T(p)= 0? | {"url":"http://mathhelpforum.com/advanced-algebra/117144-problem-2-a.html","timestamp":"2014-04-16T06:39:14Z","content_type":null,"content_length":"40648","record_id":"<urn:uuid:8542c6c5-c6c7-4b40-9138-d6e72615ae12>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00447-ip-10-147-4-33.ec2.internal.warc.gz"} |
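Following up on the method suggested in the reply above, here is a small sympy sketch (mine, not from the thread) that applies T to each basis polynomial and reads the coefficients off as matrix columns:

```python
import sympy as sp

x = sp.symbols('x')
T = lambda p: sp.diff(p, x, 3) + 3 * sp.diff(p, x) - 5 * p   # D^3 + 3D - 5I

def coeffs(p):
    c = sp.Poly(p, x).all_coeffs()[::-1]   # coefficients in ascending powers
    return list(c) + [0] * (4 - len(c))    # pad up to degree 3

basis = [x**k for k in range(4)]           # standard basis {1, x, x^2, x^3}
M = sp.Matrix([coeffs(T(b)) for b in basis]).T   # columns = images of the basis
print(M)
print(M.nullspace())   # empty list: only the zero polynomial maps to 0
```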
Probing the Universe with Weak Lensing - Y. Mellier
Annu. Rev. Astron. Astrophys. 1999. 37: 127-189
Copyright © 1999 by Annual Reviews. All rights reserved
2.2. Relation with Observable Quantities
Let us assume that, to first approximation, faint galaxies can be described as ellipses. Their shape can be expressed as a function of their weighted second moments which fully define the properties
of an ellipse,
M_ij = ∫ I(θ) (θ_i − θ_i^C)(θ_j − θ_j^C) d²θ / ∫ I(θ) d²θ,
where the subscripts i, j denote the axes (1, 2) of the coordinates θ and θ^C is the center of the source.
Since the surface brightness of the source is conserved through the gravitational lensing effect (Etherington 1933), it is easy to show that, if one assumes that the magnification matrix is constant across the image (lensed source), the relation between the shape of the source, M^S, and the lensed image, M^I, is
M^I = A M^S A^T,
where A is the magnification matrix.
Therefore, to first approximation, the gravitational lensing effect on a circular source changes its size (magnification) and transforms it into an ellipse (distortion) with axis ratio given by the
ratio of the two eigenvalues of the magnification matrix. The shape of the lensed galaxies can then provide information about these quantities. The approximation that the magnification matrix is
constant over the image area is always valid in the weak-lensing regime, because the spatial scale variation of the magnification is much larger than the typical size of the lensed galaxies (a few
arcseconds). This is not the case when the magnification tends to infinity, but this case is beyond the scope of this review (see Schneider et al 1992, Fort & Mellier 1994).
The relation between the lens quantities described in Section 2.1 and the shape parameters of lensed galaxies is not immediately apparent. Although Equations (1) and (2) describe the anisotropic distortion of the magnification, they are not directly related to observables (except in the weak-shear regime). It is preferable to use the reduced complex shear, g, and the complex polarization (or distortion), δ, which can be expressed in terms of the semi-axes a^I and b^I of the image, I, produced by a circular source, S:
δ = [ (a^I)² − (b^I)² ] / [ (a^I)² + (b^I)² ].
In this case, the two components of the complex polarization are easily expressed with the second moments:
δ₁ = (M₁₁ − M₂₂) / Tr(M),  δ₂ = 2 M₁₂ / Tr(M),
where Tr(M) is the trace of the matrix M. For non-circular sources, from Equations (8) and (11) it is possible to relate the ellipticity of the image, ε^I, to the ellipticity of the source, ε^S. In the general case, it depends on the sign of Det(A) (that is, the position of the source with respect to the caustic lines), which expresses whether images are radially or tangentially elongated. In most cases of interest, Det(A) > 0 (the external regions, where the weak-lensing regime applies) and:
ε^I = (ε^S + g) / (1 + g* ε^S)
(Seitz & Schneider 1996), but when Det(A) < 0:
ε^I = (1 + g (ε^S)*) / ((ε^S)* + g*).
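As a quick numerical illustration of the Det(A) > 0 relation (example values of mine, not from the review):

```python
# Map a source ellipticity to an image ellipticity with complex arithmetic.
eps_s = 0.10 + 0.05j        # intrinsic (source) ellipticity, made-up value
g = 0.20 - 0.10j            # reduced shear, made-up value
eps_i = (eps_s + g) / (1 + g.conjugate() * eps_s)
print(eps_i)                # the observed (image) ellipticity
```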
Equations 14 and 15 summarize most of the cases that will be discussed in this review. | {"url":"http://ned.ipac.caltech.edu/level5/March03/Mellier/Mellier2_2.html","timestamp":"2014-04-18T23:20:57Z","content_type":null,"content_length":"7579","record_id":"<urn:uuid:e40b2f48-99bf-4f14-a821-b986e6989ca9>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00200-ip-10-147-4-33.ec2.internal.warc.gz"} |
Note that 8 - x and x - 8 only differ by sign; in other words, they are opposites of each other. In that case, you can factor a -1 out of one of those factors and rewrite it with opposite signs, as
shown in line 3 above.
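For instance, as an added illustration (not one of the original worked lines): (8 - x)/(x - 8) = -(x - 8)/(x - 8) = -1.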
To find the value(s) needed to be excluded from the domain, we need to ask ourselves, what value(s) of x would cause our denominator to be 0?
Looking at the denominator x + 1, I would say it would have to be x = -1. Don't you agree?
-1 would be our excluded value. | {"url":"http://www.wtamu.edu/academic/anns/mps/math/mathlab/col_algebra/col_alg_tut8_simrat_ans.htm","timestamp":"2014-04-17T16:22:41Z","content_type":null,"content_length":"25642","record_id":"<urn:uuid:da974867-9082-4eff-ad4c-91a1ea95cb79>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Baldwin Hills, CA Calculus Tutor
Find a Baldwin Hills, CA Calculus Tutor
...Further, I have always had a great interest in grammar, so my Spanish grammar continues to improve in advanced grammar topics. I am a math major at Caltech currently doing research in graph
theory and combinatorics with a professor at Caltech. I have taken several discrete math courses and I spent a summer solving hard problems in discrete math with a friend.
28 Subjects: including calculus, Spanish, French, chemistry
...In the time since college, I have coached for eight consecutive seasons, while also competing as a professional middle-distance runner. I have coached elite runners to sub-4 minute miles and
coached collegians to all-American status and high schoolers to CA State Championships. In my own competitions, I have registered personal bests of 1:49.4 in the 800m and 4:00.38 in the Mile.
58 Subjects: including calculus, English, reading, writing
...A Math Mentor at any level? I am a caring, intelligent and entertaining tutor with over 7 years of experience working with high schoolers in SAT prep and in all levels of Math from Algebra I
and Geometry through Calculus. I have a deep love of all things mathematical, and I find teaching reading strategies, writing techniques and even grammar rules to be a delight.
26 Subjects: including calculus, Spanish, English, reading
...I am confident that if we work together we will maximize your potential and you will do really well. I have experience teaching probabilities and statistics at the high school and college
freshman level. I can help you understand the concepts of probability, conditional probabilities, binomial probabilities, combinations, and permutations.
16 Subjects: including calculus, French, geometry, piano
...To do well on the quantitative sections of the GRE, you must know more than just the mathematics of algebra and geometry. The newly revamped test has four types of questions on the
Quantitative sections. These questions will require out-of-the-box thinking, beyond the skills taught in high school.
13 Subjects: including calculus, geometry, algebra 1, GRE
Related Baldwin Hills, CA Tutors
Baldwin Hills, CA Accounting Tutors
Baldwin Hills, CA ACT Tutors
Baldwin Hills, CA Algebra Tutors
Baldwin Hills, CA Algebra 2 Tutors
Baldwin Hills, CA Calculus Tutors
Baldwin Hills, CA Geometry Tutors
Baldwin Hills, CA Math Tutors
Baldwin Hills, CA Prealgebra Tutors
Baldwin Hills, CA Precalculus Tutors
Baldwin Hills, CA SAT Tutors
Baldwin Hills, CA SAT Math Tutors
Baldwin Hills, CA Science Tutors
Baldwin Hills, CA Statistics Tutors
Baldwin Hills, CA Trigonometry Tutors
Nearby Cities With calculus Tutor
Bicentennial, CA calculus Tutors
Cimarron, CA calculus Tutors
Hancock, CA calculus Tutors
Hollyglen, CA calculus Tutors
La Tijera, CA calculus Tutors
Lennox, CA calculus Tutors
Mar Vista, CA calculus Tutors
Pico Heights, CA calculus Tutors
Playa Vista, CA calculus Tutors
Rancho Park, CA calculus Tutors
Sanford, CA calculus Tutors
View Park, CA calculus Tutors
Westchester, CA calculus Tutors
Westvern, CA calculus Tutors
Windsor Hills, CA calculus Tutors | {"url":"http://www.purplemath.com/baldwin_hills_ca_calculus_tutors.php","timestamp":"2014-04-19T02:31:15Z","content_type":null,"content_length":"24675","record_id":"<urn:uuid:77289400-14b8-4eef-8254-a3db95bec38e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00642-ip-10-147-4-33.ec2.internal.warc.gz"} |
may I have a solution
May I have a solution for those:
1. Determine those primes which can be written both as a sum and a difference of two primes.
2. If a, b are digits, a >= 1, and (10a + b) is prime, then (2a - b) is also a prime.
3. Prove that there is no infinite arithmetic progression having all its terms prime.
4. Determine all primes P for which P+4, P+24, P^2+10, P^2+44 are also primes.
5. Prove that n is prime if and only if Φ(n) = n-1, where Φ(n) is Euler's totient function.
6. If p is prime and a ∈ {1, 2, 3, ...}, then p | (p-1)! a^p + a.
7. If p > 8 is a prime, find (p-5)! = ? (mod p).
8. For every n > 1, show that (2^4n + 1)/5 is an integer.
9. If b | a(a-1), show that gcd(2a-1, b) = 1.
Note: if b|a and (a,c) = 1, then (b,c) = 1.
10. Find three integers in arithmetic progression such that the product of each two of them is a square.
11. Let {A_i}, i = 1, 2, ..., 12, be integers; show that there are two of them whose difference is a multiple of 10.
12. If I have a square with side length 3 m and 10 points in the square, show that there are 2 points with distance < 1.5 m.
13. Consider an infinite chess table in which each square contains a positive integer such that each entry is the average of its four neighbours:
a = (b+c+d+e)/4
Show that all the integers are the same.
(Use the property of N (the natural numbers): every nonempty subset of N has a smallest element.)
This looks like you're looking for someone to do your homework for you: too many questions, zero self-work shown.
Do some effort, show what you've achieved and then ask for help where you're stuck...and please, no more than 1 question per thread.
Please show us what you have done so far. These are typical homework questions that can be solved with a bit of effort. For example, the first one:
5.prove that n is prime if and only if Φ(n) = n-1, where Φ(n) is Eular's totint function.
What is the Euler function defined as ? It is the number of integers less than n that are relatively prime with n. In the case of n prime, n cannot be divided by any integer less than n, except
one (definition of a prime number). This leads to the conclusion : if n is prime, then Φ(n) = n-1.
Do you see the reasoning ?
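A quick numerical spot-check of both directions of the statement (my addition, not part of the reply):

```python
from sympy import totient, isprime

for n in range(2, 200):
    assert (totient(n) == n - 1) == isprime(n)
print("phi(n) = n - 1 exactly when n is prime, for n = 2..199")
```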
Now please go back to your lessons, read them thoroughly and carefully, then try to do the problems (they are really easy, you should be able to do them) and come back with a question if you are
stuck on one particular point on a problem (how can I express this as ...)
If our teacher taught anything I wouldn't be asking these questions, which are what remain of many other questions.
Thank you anyway.
Please don't post more than two questions in a thread. Otherwise the thread can get convoluted and difficult to follow. Start new threads as necessary for remaining questions, e.g. if you have
five questions, post two of them in two threads and start a new thread for the remaining one etc.
And if the question has more than two parts to it, it is best to post only that question and its parts in the thread and start a new thread for other questions.
Thread closed.
| {"url":"http://mathhelpforum.com/number-theory/124121-may-i-have-solution.html","timestamp":"2014-04-20T11:18:38Z","content_type":null,"content_length":"50067","record_id":"<urn:uuid:60ab0fa2-984a-4b60-9e86-e067726b31a1>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00148-ip-10-147-4-33.ec2.internal.warc.gz"} |
Looks like a simple limit, help. :)
$\lim_{n\to \infty}(n^{\frac{3}{n^{8}}}+\frac{1}{n})^{n}$ Note that this looks strikingly similar to the definition of e: $\lim_{n \to \infty}(1+\frac{1}{n})^{n}$ So let's see what $n^{\frac{3}{n^{8}}}$ is doing as n goes to infinity. Let's set it equal to some value; we'll choose y to be that value: $y=\lim_{n\to \infty} n^{\frac{3}{n^{8}}}$ Take the natural log in order to get rid of that ugly exponent: $\ln(y)=\lim_{n\to \infty} \ln(n^{\frac{3}{n^{8}}})$ By logarithmic identities: $\ln(y)=\lim_{n\to \infty} \frac{3\ln(n)}{n^{8}}$ You can see that the numerator is going to infinity and the denominator is going to infinity, so we can use L'Hospital's rule: $\ln(y)=\lim_{n\to \infty} \frac{3\cdot\frac{1}{n}}{8n^{7}}$ Simplify: $\ln(y)=\lim_{n\to \infty} \frac{3}{8n^{8}}$ Send n to infinity: $\ln(y)=0$ Solve for y: $y=e^{0}=1$ Therefore: $\lim_{n\to \infty} n^{\frac{3}{n^{8}}}=1$ ----- So we know that as n goes to infinity, the first term goes to 1, so let's substitute that into our equation: $\lim_{n\to \infty}(1+\frac{1}{n})^{n}$ And now we see that as n goes to infinity, the expression goes to the definition of e, so the answer is e.
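A quick numerical sanity check of this answer (my addition, not from the thread):

```python
# Evaluate (n**(3/n**8) + 1/n)**n for growing n; it should approach e.
import math

for n in [10, 100, 1000, 10000]:
    print(n, (n ** (3.0 / n ** 8) + 1.0 / n) ** n, "e =", math.e)
```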
Hey, wait a minute: you found the limit to be 1, but that does not mean the expression without the limit also equals 1, right? Can you explain this? Thanks, T
How can you take limits directly? For example: lim (1+1/n+1/n)^n = e^2 and lim (1+1/log n + 1/n)^n = infinity. I mean, you clearly showed for the expression that the limit as n approaches infinity equals 1, but I was just wondering how it can directly equal the expression. I understand that the limit is 1, but I don't understand why the expression without the limit is also 1... Thanks, T
Hmm, I can't say for certain, but if I had to guess, I'd say it's because $n^{\frac{3}{n^{8}}}$ goes to 1 quicker than the outside exponent goes to infinity. As n grows, n^8 grows infinitely faster
than n, so the outside exponent doesn't affect it. My book, in the L'Hospital's rule section says "There is a struggle between numerator and denominator. If the numerator wins, the limit will be
infinity, if the denominator wins the answer will be zero. Or there may be some compromise, in which case the answer may be some finite positive number." I think in this case, we have a "struggle"
between the function going to 1 and the exponent going to infinity. As you can see, the function going to 1 goes to 1 exponentially quicker than the function going to infinity, so I expect that it
"wins" and becomes equal to 1. (if you do this equation with out the +1/n, you get 1 as your answer.) I expect there is a better way to solve this that doesn't invite such ambiguity, but I can't
think of what it would be.
$\lim n^{1/n^8} = 1$ because, $1\leq n^{1/n^{8}} \leq n^{1/n}$ and $\lim n^{1/n} = 1$. --- Best way to solve this problem is to use the fact that $(1+x_n/n)^{n} \to e^x$ where $\lim x_n = x$.
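Spelling out that second fact, since it settles the thread's question: write the expression as $(1+x_n/n)^n$ with $x_n = n\left(n^{3/n^8}-1\right)+1$. Since $n^{3/n^8}=e^{3\ln n/n^8}$ and $e^u-1\sim u$ as $u\to 0$, we get $n\left(n^{3/n^8}-1\right)\sim \frac{3\ln n}{n^7}\to 0$, so $x_n\to 1$ and the whole expression tends to $e^1=e$. The same computation explains the contrast observed further down: with $n^{3/n}$ in place of $n^{3/n^8}$ one gets $x_n\sim 3\ln n\to\infty$, so that limit is $+\infty$ even though $n^{3/n}\to 1$. So in general you may not take inner limits separately; here it happens to be justified because the base approaches 1 fast relative to $1/n$.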
I think I can, so I'll give it a shot (feel free to correct me PH, if I err). We want $1\leq n^{3/n^{8}} \leq n^{1/n}$ for large n.

First let's show that $1\leq n^{3/n^{8}}$. Because n is going to infinity, the base n will always be greater than 1, and the only way to make a number greater than 1 drop below 1 with exponents is to take it to a negative power. However $\frac{3}{n^{8}}$ can never be negative, because 3 is positive and $n^{8}$ is positive. So we have a base greater than one raised to a positive exponent, and this value can never be less than one. (It could equal one only if the exponent were zero.)

Now for $n^{3/n^{8}} \leq n^{1/n}$. Taking logs, this is $\frac{3}{n^{8}}\ln(n)\leq \frac{1}{n}\ln(n)$, i.e. $\frac{3}{n^{8}}\leq \frac{1}{n}$, i.e. $\frac{3}{n^{7}}\leq 1$, which holds for all large n. So, because the base is the same and the exponent on the LHS is smaller, the LHS can never be greater than the RHS.

And now $\lim_{n\to\infty} n^{1/n} = 1$: set $y=\lim_{n\to\infty} n^{1/n}$, so $\ln(y)=\lim_{n\to\infty} \frac{\ln(n)}{n}$. By L'Hospital's rule this is $\lim_{n\to\infty} \frac{1/n}{1} = 0$, hence $y=e^{0}=1$.

----- CONCLUSION: We have shown $1\leq n^{3/n^{8}} \leq n^{1/n}$ for large n, and that $\lim_{n\to\infty} n^{1/n} = 1$. By the squeeze theorem, $\lim_{n\to\infty}n^{3/n^{8}}$ must equal one.

---- PROBLEMS: Now that I am done, I think I did a good job showing that the inner limit must be 1, but I am confused by how this means it is not affected by the outer exponent going to infinity. For example, I could use the same method to prove $\lim_{n\to\infty} \frac{1}{n}=0$, but we can plainly see in the original expression that this term is affected by being taken to the power of n. As an example, I entered $\lim_{n\to \infty}\left(n^{3/n^{8}}+\frac{1}{n}\right)^{n}$ into Function calculator and it agreed that the limit is e, but when I changed the expression to $\lim_{n\to \infty}\left(n^{3/n}+\frac{1}{n}\right)^{n}$ the limit became infinity, even though we know that $\lim_{n\to\infty} n^{3/n}=1$. This would seem to indicate that the inner term is subject to being affected by the outer power of n, but that in the original problem it simply overpowers this effect thanks to its large denominator $n^{8}$, which drives it to 1 faster than the outer exponent can act. So I guess my problem is that I don't understand how my proof shows that its limit can be found independently of the rest of the expression, when it is nested within another function which could affect the result.
I can see it is fairly easy to show the limit to be 1 for (n^3)^(1/n^3) as n approaches infinity. But the problem is that, how can you say the limit of the whole expression is e, as you only know
that the limit of the (n^3)^(1/n^3) is 1. I am wondering how you can take limits separately.?... thanks, T | {"url":"http://mathhelpforum.com/calculus/22613-looks-like-simple-limit-help.html","timestamp":"2014-04-18T19:12:33Z","content_type":null,"content_length":"84263","record_id":"<urn:uuid:c11263e2-620a-4d86-bd06-6277b717601a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00155-ip-10-147-4-33.ec2.internal.warc.gz"} |
A differential equation
let $g(s)$ be a real-valued function defined on $[0,T]$ such that $g(T)=0$, and suppose that $g$ is a "nice function". Assume that $0<\gamma<1$, $v$ is a positive number, and $$\frac{dg}{ds}+(v\gamma) g
+(1-\gamma)(e^{\rho s}g)^{\frac{1}{\gamma-1}}g=0$$
Find a closed form for $g$?
differential-equations fa.functional-analysis ca.analysis-and-odes
Up to an implicit algebraic equation, yes. Ask Maple. The answer is large enough that I won't paste it here. But it's not so hard to do even by hand! – Jacques Carette Aug 4 '10 at 16:02
Please provide some context: why are you interested in this equation? Why do you particularly want a closed form (given that so many ODEs don't have closed forms)? What have you done already to
try to find one? – Andrew Stacey Aug 4 '10 at 16:20
If possible, please give more information in the title of your question. Titles on MO can be up to 240 characters --- almost two tweets. – Theo Johnson-Freyd Aug 4 '10 at 19:37
This reads like homework. I'm voting to close. I echo Theo's plea for a more descriptive title. – José Figueroa-O'Farrill Aug 4 '10 at 22:31
Hey guys, thanks for your comments, by letting $h=(e^{\rho s}g)^{\frac{1}{1-\gamma}}$ I got the solution for this ! – Lam Aug 5 '10 at 14:43
1 Answer
This seems to be a Bernoulli differential equation. Please cf. http://en.wikipedia.org/wiki/Bernoulli_differential_equation for the solution (in your case $n=\frac{\gamma}{\gamma-1}$).
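For completeness, here is a sketch of the reduction (this just carries out the substitution the asker mentions in the comments; the constant $a$ below is shorthand introduced here for convenience). Set $h=(e^{\rho s}g)^{\frac{1}{1-\gamma}}$, so that $(e^{\rho s}g)^{\frac{1}{\gamma-1}}=h^{-1}$ and $\frac{h'}{h}=\frac{1}{1-\gamma}\left(\rho+\frac{g'}{g}\right)$. Dividing the ODE by $g$ gives $\frac{g'}{g}=-v\gamma-\frac{1-\gamma}{h}$, hence $h$ satisfies the linear equation $h'=ah-1$ with $a=\frac{\rho-v\gamma}{1-\gamma}$. With the terminal condition $g(T)=0$, i.e. $h(T)=0$, the solution is $h(s)=\frac{1-e^{a(s-T)}}{a}$ (and $h(s)=T-s$ when $a=0$), so that $g(s)=e^{-\rho s}\,h(s)^{1-\gamma}$.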
Not the answer you're looking for? Browse other questions tagged differential-equations fa.functional-analysis ca.analysis-and-odes or ask your own question. | {"url":"https://mathoverflow.net/questions/34520/a-differential-equation/34538","timestamp":"2014-04-18T11:00:30Z","content_type":null,"content_length":"55784","record_id":"<urn:uuid:d067adb2-80b9-40ae-a75c-e5753d539492>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00623-ip-10-147-4-33.ec2.internal.warc.gz"} |
normal contact force with angles
A small smooth ring R, of mass 0.6 kg, is threaded on a light inextensible string. One end of the string is attached to the fixed point A and the other end is attached to a ring B, of mass 0.2
kg, which is threaded on a fixed rough horizontal wire which passes through A (see diagram (excuse the obviously unequal angles)). The system is in equilibrium, with B about to slip and with the
part AR of the string making an angle of 60° to the wire.
a. Explain, with reference to the fact that ring R is smooth, why the part BR of the string is inclined at 60° to the wire.
b. Show that the normal contact force between B and the wire has magnitude 5 N.
c. Find the coefficient of friction between B and the wire.
For a. I was unsure of what wording to use exactly, and answered the following:
a. Ring R being smooth and ring B being on the same horizontal plane as fixed point A, ring R will have slipped to the midpoint of the string: angles ABR and RAB are therefore equal.
Angle RAB being given as 60°, this means that the part BR of the string is inclined at 60° to the wire.
For b. however I have several problems. Firstly, of how the 6N weight of R is distributed over both string segments (RB and AR). I would assume that the each segment carries half the load, but
I’m unsure of how to prove this. Secondly, even taking 3cos30 N (half of R’s weight resolved for the slope of the string) as the tension in the string segment BR, I end up with normal contact
force = 2 (weight of B) + 3cos30(cos30) = 4.25 N
Any suggestions?
Re: normal contact force with angles
b. Why did you take 3cos(30)cos(30)?
What is the component of the weight of ring R downwards?
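For reference, once those two questions are answered the computation runs as follows (taking g = 10 m/s², which the stated answer of 5 N presupposes). Because ring R is smooth, the tension T is the same in both parts AR and BR — the weight of R is supported by the vertical components of the two tensions, not split as a 3 N load along each segment. Resolving vertically at R: 2T sin 60° = 0.6 × 10 = 6, so T = 3/sin 60° = 2√3 ≈ 3.46 N. Resolving vertically at B: N = 0.2 × 10 + T sin 60° = 2 + 3 = 5 N, as required. Since B is about to slip, friction is limiting: F = T cos 60° = √3 ≈ 1.73 N, giving μ = F/N = √3/5 ≈ 0.35.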
August 12th 2011, 07:35 AM #2 | {"url":"http://mathhelpforum.com/math-topics/186018-normal-contact-force-angles.html","timestamp":"2014-04-23T17:14:40Z","content_type":null,"content_length":"34377","record_id":"<urn:uuid:31a93394-a2c6-49bf-b592-5baa42b7b18c>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00158-ip-10-147-4-33.ec2.internal.warc.gz"} |
HMM Methods in Speech Recognition
Modern architectures for Automatic Speech Recognition (ASR) are mostly software architectures generating a sequence of word hypotheses from an acoustic signal. The most popular algorithms implemented
in these architectures are based on statistical methods. Other approaches can be found in [WL90] where a collection of papers describes a variety of systems with historical reviews and mathematical foundations.
A vector of acoustic parameters is computed every 10 to 30 msec; details of this feature-extraction component are given in a later section. Various possible choices of vectors together with their impact on recognition performance are discussed
in [HUGN93].
Sequences of vectors of acoustic parameters are treated as observations of acoustic word models, and are used to compute the probability of observing such a sequence when a given word sequence W is pronounced. Given an observed sequence, recognition consists of searching for the word sequence that maximizes the product of this acoustic probability and the prior probability assigned to W by a language model.
For large vocabularies, search is performed in two steps. The first generates a word lattice of the n-best word sequences with simple models to compute approximate likelihoods in real-time. In the
second step more accurate likelihoods are computed for a limited number of hypotheses. Some systems generate a single word sequence hypothesis with a single step. The search produces a hypothesized
word sequence if the task is dictation. If the task is understanding then a conceptual structure is obtained with a process that may involve more than two steps. Ways for automatically learning and
extracting these structures are described in [KDMM94].
In a statistical framework, an inventory of elementary probabilistic models of basic linguistic units (e.g., phonemes) is used to build word representations. A sequence of acoustic parameters,
extracted from a spoken utterance, is seen as a realization of a concatenation of elementary processes described by hidden Markov models (HMMs). An HMM is a composition of two stochastic processes, a
hidden Markov chain, which accounts for temporal variability, and an observable process, which accounts for spectral variability. This combination has proven to be powerful enough to cope with the
most important sources of speech ambiguity, and flexible enough to allow the realization of recognition systems with dictionaries of tens of thousands of words.
A hidden Markov model is defined as a pair of stochastic processes: a hidden Markov chain, whose state sequence is not observed directly, and an output process that produces the observations.
Two formal assumptions characterize HMMs as used in speech recognition. The first-order Markov hypothesis states that history has no influence on the chain's future evolution if the present is
specified, and the output independence hypothesis states that neither chain evolution nor past observations influence the present observation if the last chain transition is specified.
An HMM is conventionally specified with the following definitions: A is the matrix of transition probabilities between states, B is the matrix of output distributions associated with the transitions, and an initial distribution fixes where the chain starts.
A useful tutorial on the topic can be found in [Rab89].
HMMs can be classified according to the nature of the elements of the B matrix, which are distribution functions.
Distributions are defined on finite spaces in the so-called discrete HMMs. In this case, observations are vectors of symbols in a finite alphabet of N different elements, and for each one of the Q vector components a discrete density is defined. The figure below shows an example of a discrete HMM with one-dimensional observations; distributions are associated with model transitions.
Figure: Example of a discrete HMM. A transition probability and an output distribution on the symbol set is associated with every transition.
Another possibility is to define distributions as probability densities on continuous observation spaces. In this case, strong restrictions have to be imposed on the functional form of the
distributions, in order to have a manageable number of statistical parameters to estimate. The most popular approach is to characterize the model transitions with mixtures of base densities g of a
family G having a simple parametric form. The base densities g are usually Gaussian or Laplacian, and can be parameterized by the mean vector and the covariance matrix. HMMs with these kinds of distributions are usually
referred to as continuous HMMs. In order to model complex distributions in this way a rather large number of base densities has to be used in every mixture. This may require a very large training
corpus of data for the estimation of the distribution parameters. Problems arising when the available corpus is not large enough can be alleviated by sharing distributions among transitions of
different models. In semicontinuous HMMs [HAJ90], for example, all mixtures are expressed in terms of a common set of base densities. Different mixtures are characterized only by different weights.
A common generalization of semicontinuous modeling consists of interpreting the input vector y as composed of several components, each of which is associated with its own set of base densities.
Computation of probabilities with discrete models is faster than with continuous models, nevertheless it is possible to speed up the mixture densities computation by applying vector quantization (VQ)
on the gaussians of the mixtures [Boc93].
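As a concrete illustration of the mixture computation described above (a minimal sketch, not taken from any of the cited systems; the example parameters are made up), evaluating a transition's output density amounts to a log-sum of weighted Gaussians:

    # Log-density of a diagonal-covariance Gaussian mixture at observation y.
    import math

    def log_mixture_density(y, weights, means, variances):
        # One (weight, mean vector, variance vector) triple per base density.
        log_terms = []
        for w, mu, var in zip(weights, means, variances):
            log_g = sum(-0.5 * (math.log(2 * math.pi * v) + (yq - m) ** 2 / v)
                        for yq, m, v in zip(y, mu, var))
            log_terms.append(math.log(w) + log_g)
        top = max(log_terms)                 # log-sum-exp, for numerical stability
        return top + math.log(sum(math.exp(t - top) for t in log_terms))

    # e.g. a two-component mixture over 2-dimensional observations:
    print(log_mixture_density([0.3, -1.2],
                              weights=[0.6, 0.4],
                              means=[[0.0, -1.0], [1.0, 1.0]],
                              variances=[[1.0, 0.5], [2.0, 2.0]]))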
Parameters of statistical models are estimated by iterative learning algorithms [Rab89] in which the likelihood of a set of training data is guaranteed to increase at each step.
[BDFK92] propose a method for extracting additional acoustic parameters and performing transformations of all the extracted parameters using a Neural Network (NN) architecture whose weights are
obtained by an algorithm that, at the same time, estimates the coefficients of the distributions of the acoustic models. Estimation is driven by an optimization criterion that tries to minimize the
overall recognition error.
Words are usually represented by networks of phonemes. Each path in a word network represents a pronunciation of the word.
The same phoneme can have different acoustic distributions of observations if pronounced in different contexts. Allophone models of a phoneme are models of that phoneme in different contexts. The
decision as to how many allophones should be considered for a given phoneme may depend on many factors, e.g., the availability of enough training data to infer the model parameters.
A conceptually interesting approach is that of polyphones [STNE]. In principle, an allophone should be considered for every different word in which a phoneme appears. If the vocabulary is large, it
is unlikely that there are enough data to train all these allophone models, so models for allophones of phonemes are considered at a different level of detail (word, syllable, triphone, diphone,
context independent phoneme). Probability distributions for an allophone having a certain degree of generality can be obtained by mixing the distributions of more detailed allophone models. The loss
in specificity is compensated for by a more robust estimation of the statistical parameters, due to the increase in the ratio between training data and free parameters to estimate.
Another approach consists of choosing allophones by clustering possible contexts. This choice can be made automatically with Classification and Regression Trees (CART). A CART is a binary tree having
a phoneme at the root and a question about the phonetic context associated with each node; for each possible answer (YES or NO) there is a link to another node with which other questions are associated. There are algorithms for growing and pruning CARTs based on
automatically assigning questions to a node from a manually determined pool of questions. The leaves of the tree may be simply labeled by an allophone symbol. Papers by [BdSG] and [HL91] provide
examples of the application of this concept and references to the description of a formalism for training and using CARTs.
Each allophone model is an HMM made of states, transitions and probability distributions. In order to improve the estimation of the statistical parameters of these models, some distributions can be
the same or tied. For example, the distributions for the central portion of the allophones of a given phoneme can be tied reflecting the fact that they represent the stable (context-independent)
physical realization of the central part of the phoneme, uttered with a stationary configuration of the vocal tract.
In general, all the models can be built by sharing distributions taken from a pool of, say, a few thousand cluster distributions called senones. Details on this approach can be found in [HH93].
Word models or allophone models can also be built by concatenation of basic structures made by states, transitions and distributions. These units, called fenones, were introduced by [BBdS]. Richer
models of the same type but using more sophisticated building blocks, called multones, are described in [BBdS].
Another approach consists of having clusters of distributions characterized by the same set of Gaussian probability density functions. Allophone distributions are built by considering mixtures with
the same components but with different weights [DM94].
The probability that a model assigns to a sequence of acoustic observations is the basic quantity used in recognition. Motivations for this approach and methods for computing these probabilities are described in the following section.
1.5.4: Generation of Word Hypotheses
Generation of word hypotheses can result in a single sequence of words, in a collection of the n-best word sequences, or in a lattice of partially overlapping word hypotheses.
This generation is a search process in which a sequence of vectors of acoustic features is compared with word models. In this section, some distinctive characteristics of the computations involved in
speech recognition algorithms will be described, first focusing on the case of a single-word utterance, and then considering the extension to continuous speech recognition.
In general, the speech signal and its transformations do not exhibit clear indication of word boundaries, so word boundary detection is part of the hypothesization process carried out as a search. In
this process, all the word models are compared with a sequence of acoustic features. In the probabilistic framework, ``comparison'' between an acoustic sequence and a model involves the computation
of the probability that the model assigns to the given sequence. This is the key ingredient of the recognition process. In this computation, the following quantities are used:
the backward probability, the probability of observing the remaining partial sequence of observations given that the model is in state i at time t;
the forward probability, the probability of having observed the partial sequence up to time t and of being in state i at time t.
The probability of the whole observation sequence is obtained by combining these quantities. An approximation for computing this probability consists of following only the path of maximum probability. This can be done with the Viterbi algorithm, introduced below.
The computations of all the above probabilities share a common framework, employing a matrix called a trellis, depicted in Figure . For the sake of simplicity, we can assume that the HMM in
Figure represents a word and that the input signal corresponds to the pronunciation of an isolated word.
Every trellis column holds the values of one of the just introduced probabilities for a partial sequence ending at different time instants, and every interval between two columns corresponds to
an input frame. The arrows in the trellis represent model transitions composing possible paths in the model from the initial time instant to the final one. The computation proceeds in a
column-wise manner, at every time frame updating the scores of the nodes in a column by means of recursion formulas which involve the values of an adjacent column, the transition probabilities of
the models, and the values of the output distributions for the corresponding frame. For the forward quantities the columns are filled from the first frame to the last; for the backward quantities, in the opposite direction.
The algorithm for computing the probability of the best path is known as the Viterbi algorithm, and can be seen as an application of dynamic programming for finding a maximum probability path in a graph with weighted arcs. The recursion formula for its computation is the following: phi_j(t) = max over i of [ phi_i(t-1) a_ij b_ij(y_t) ], where a_ij is the transition probability from state i to state j and b_ij(y_t) is the probability that this transition outputs the observation y_t.
By keeping track of the state j giving the maximum value in the above recursion formula, it is possible, at the end of the input sequence, to retrieve the states visited by the best path, thus
performing a sort of time-alignment of input frames with models states.
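A minimal sketch of this recursion and its backtracking, with output probabilities attached to transitions as in the text (illustrative only; the data layout is an assumption, not the survey's):

    def viterbi(a, b, obs, start, final):
        # a[i][j]: transition probability i -> j.
        # b[i][j]: dict mapping an output symbol to its probability on i -> j
        #          (assumed present whenever a[i][j] > 0).
        S = len(a)
        phi = [0.0] * S              # phi[j]: best-path probability ending in j
        phi[start] = 1.0
        back = []                    # back[t][j]: best predecessor of j at time t
        for y in obs:
            new_phi, step = [0.0] * S, [None] * S
            for i in range(S):
                if phi[i] == 0.0:
                    continue
                for j in range(S):
                    if a[i][j] == 0.0:
                        continue
                    p = phi[i] * a[i][j] * b[i][j].get(y, 0.0)
                    if p > new_phi[j]:
                        new_phi[j], step[j] = p, i
            phi = new_phi
            back.append(step)
        path = [final]               # backtrack (assumes some path reaches `final`)
        for step in reversed(back):
            path.append(step[path[-1]])
        return phi[final], path[::-1]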
All these algorithms have a time complexity O(MT), where M is the number of transitions with non-zero probability and T is the length of the input sequence. M can be at most equal to S^2, where S is the number of states in the model, but is usually much lower, since the transition probability matrix is generally sparse. In fact, a common choice in speech recognition is to impose severe constraints on the allowed state sequences, for example forbidding a transition from state i to state j whenever j<i or j>i+2, as is the case of the model in Figure .
In general, recognition is based on a search process which takes into account all the possible segmentations of the input sequence into words, and the a-priori probabilities that the LM assigns
to sequences of words.
Good results can be obtained with simple LMs based on bigram or trigram probabilities. As an example, let us consider a bigram language model. This model can be conveniently incorporated into a
finite state automaton as shown in Figure , where dashed arcs correspond to transitions between words with probabilities of the LM.
Figure: Bigram LM represented as a weighted word graph.
After substitution of the word-labeled arcs with the corresponding HMMs, the resulting automaton becomes a big HMM itself, on which a Viterbi search for the most probable path, given an
observation sequence, can be carried out. The dashed arcs are to be treated as empty transitions, i.e., transitions without an associated output distribution. This requires some generalization of
the Viterbi algorithm. During the execution of the Viterbi algorithm, a minimum of backtracking information is kept to allow the reconstruction of the best path in terms of word labels. Note that
the solution provided by this search is suboptimal in the sense that it gives the probability of a single state sequence of the composite model and not the total emission probability of the best
word model sequence. In practice, however, it has been observed that the path probabilities computed with the above mentioned algorithms exhibit a dominance property, consisting of a single state
sequence accounting for most of the total probability [ME91].
The composite model grows with the vocabulary, and can lead to large search spaces. Nevertheless the uneven distribution of probabilities among different paths can help. It turns out that, when
the number of states is large, at every time instant, a large portion of states have an accumulated likelihood which is much less than the highest one, so that it is very unlikely that a path
passing through one of these states would become the best path at the end of the utterance. This consideration leads to a complexity reduction technique called beam search [NMNP92], consisting of
neglecting states whose accumulated score is lower than the best one minus a given threshold. In this way, computation needed to expand bad nodes is avoided. It is clear from the naivety of the
pruning criterion that this reduction technique has the undesirable property of being not admissible, possibly causing the loss of the best path. In practice, good tuning of the beam threshold
results in a gain in speed by an order of magnitude, while introducing a negligible amount of search errors.
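The pruning step itself is small; a sketch, assuming log-domain accumulated scores kept in a dictionary (the data structure is an assumption for illustration):

    # Beam pruning: drop hypotheses scoring below the current best minus `beam`
    # (log-domain scores, so the threshold is a subtraction, not a ratio).
    def prune(active, beam):
        best = max(active.values())
        return {s: p for s, p in active.items() if p >= best - beam}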
When the dictionary is of the order of tens of thousands of words, the network becomes too big, and others methods have to be considered.
At present, different techniques exist for dealing with very large vocabularies. Most of them use multi-pass algorithms. Each pass prepares information for the next one, reducing the size of the
search space. Details of these methods can be found in [AHH93,ADNS94,MBDW93,KAM].
In a first phase a set of candidate interpretations is represented in an object called word lattice, whose structure varies in different systems: it may contain only hypotheses on the location of
words, or it may carry a record of acoustic scores as well. The construction of the word lattice may involve only the execution of a Viterbi beam-search with memorization of word scoring and
localization, as in [ADNS94], or may itself require multiple steps, as in [AHH93,MBDW93,KAM]. Since the word lattice is only an intermediate result, to be inspected by other detailed methods, its
generation is performed with a bigram language model, and often with simplified acoustic models.
The word hypotheses in the lattice are scored with a more accurate language model, and sometimes with more detailed acoustic models. Lattice rescoring may require new calculations of HMM
probabilities [MBDW93], may proceed on the basis of precomputed probabilities only [ADNS94,AHH93], or even exploit acoustic models which are not HMMs [KAM]. In [AHH93], the last step is based on
an A* search [Nil71] on the word lattice, allowing the application of a long distance language model, i.e., a model where the probability of a word may not only depend on its immediate predecessor.
In [ADNS94] a dynamic programming algorithm, using trigram probabilities, is performed.
A method which does not make use of the word lattice is presented in [Pau94]. Inspired by one of the first methods proposed for continuous speech recognition (CSR) [Jel69], it combines both
powerful language modeling and detailed acoustic modeling in a single step, performing an A* stack search.
Interesting software architectures for ASR have been recently developed. They provide acceptable recognition performance almost in real time for dictation of large vocabularies (more than 10,000
words). Pure software solutions require, at the moment, a considerable amount of central memory. Special boards make it possible to run interesting applications on PCs.
There are aspects of the best current systems that still need improvement. The best systems do not perform equally well with different speakers and different speaking environments. Two important
aspects, namely recognition in noise and speaker adaptation, are discussed in a later section. The best systems also have difficulty in handling out-of-vocabulary words, hesitations, false starts and other phenomena
typical of spontaneous speech. Rudimentary understanding capabilities are available for speech understanding in limited domains. Key research challenges for the future are acoustic robustness,
use of better acoustic features and models, use of multiple word pronunciations and efficient constraints for the access of a very large lexicon, sophisticated and multiple language models
capable of representing various types of contexts, rich methods for extracting conceptual representations from word hypotheses and automatic learning methods for extracting various types of
knowledge from corpora.
Next: 1.6: Language Representation Up: Spoken Language Input Previous: 1.4 | {"url":"http://www.cslu.ogi.edu/HLTsurvey/ch1node7.html","timestamp":"2014-04-16T21:59:00Z","content_type":null,"content_length":"31934","record_id":"<urn:uuid:c7db6fd1-dda8-4ce4-ad12-42cc776011b2>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
Monodromy and stability for nilpotent critical points.
(English) Zbl 1088.34021
The authors study the behavior near an isolated singularity at the origin of an analytic vector field
on the plane for which the linear part
has both eigenvalues zero, but is not itself zero (a “nilpotent singularity”). They give a new proof of a theorem of Andreev that characterizes which of such critical points are monodromic, meaning
that there is a first return map defined on a section of the flow of the vector field with one endpoint at the origin. After introducing a special form for vector fields having a monodromic nilpotent
singularity at the origin, they characterize, for several specialized cases, those singularities of this type that are in fact centers. Techniques used include index theory and the generalized
trigonometric functions of Lyapunov.
34C05 Location of integral curves, singular points, limit cycles (ODE)
34C23 Bifurcation (ODE)
37C10 Vector fields, flows, ordinary differential equations | {"url":"http://zbmath.org/?q=an:1088.34021","timestamp":"2014-04-17T13:11:48Z","content_type":null,"content_length":"21431","record_id":"<urn:uuid:e64fc37c-70c5-42da-a6dd-643be95e5b23>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00617-ip-10-147-4-33.ec2.internal.warc.gz"} |
Flood World
Population of the PreFlood World
by Tom Pickett
Very little is known about the people who lived before the Great Flood. Genesis chapters 2 through 7 provide a synopsis of that age. Ancient Jewish writings provide more possible detail. However, no
archeological discoveries have been made which would reveal additional information about this lost society. Therefore, scripture is all we have to rely on. Of the many issues concerning the flood, a
common question that has often been asked is: "What was the world population when the flood occurred?" Most people have been taught that the population might have been in the tens of millions or
perhaps in the hundreds of millions. Using an electronic spreadsheet, an analysis was made of the effects of population size using the numerical values provided in the book of Genesis for age of
childbirth, number of children per family and life-spans.
This study is based upon a time span of 1656 years, beginning with the creation of Adam and extending to the time the Great Flood occurred. Although arguments can be made against this amount of time,
it is widely accepted among Bible scholars. When you add up the genealogies in the book of Genesis you get 1656 years. However, it should be noted that other ancient Jewish writings such as the
Septuagint indicate a longer time period . (1)
The book of Genesis identifies the life-spans of the antediluvians as approximately 900 years (Massoretic Text). This is obviously different from the life-spans of today. According to Scripture, the
ages at which the antediluvians had children range from 65 to over 100 years of age. Also, their families would most likely be much larger than today's. With 900 year life-spans, it would seem that
the effects of disease on their society would be much different than ours today. These variations would seem to change the traditional generation lengths and family sizes used in today's population
growth calculations.
Description of Calculations
Based on the numerical values in Genesis, the life-spans, generation lengths, and childbearing ages were analyzed using an electronic spreadsheet called Mathcad 7 (2). The results were plotted and
range of probable values determined, representing the world population at the time the flood occurred. Figure 1 shows a chart of the antediluvian life-spans, childbirth ages, and number of minimum
children per family. The chart covers from the creation of Adam out to 1656 years where the flood is believed to have occurred. All information contained in the chart was taken from the Book of
The population was calculated using the formula given below. This formula was published by Henry Morris (3) and calculates the population of the world at the time of the Flood:

P(n) = 2 c^(n-x+1) (c^x - 1) / (c - 1)

P(n) is the population after n generations, beginning with one man and one woman. The number of generations is represented by n; a value for n can be obtained by dividing the total time period by the number of years per generation. The number of generations that are alive when P(n) is evaluated is represented by x. For example, if x equals 2, the generations that are alive are generations n and n-1. The value of c represents half the number of children in the family. If each family has only two children, c would equal 1 and the population growth rate would be zero.
The calculations were made using Mathcad 7 and the resulting data points are plotted on graphs with diamonds indicating each data point. Both x and y scales are logarithmic. The number of children
per family are varied from 10 to 3. Several "runs" are made using different values for n. The child bearing ages are obtained by dividing 1656 by n. The result is used to establish the average age
for childbirth and therefore, the length of each generation. The population is evaluated in five groups, consisting of the following:
1. Sixteen generations (n=16), each generation is 103.5 years. The life-spans are 900 years (x=9). The calculations begin with 10 children per family and are minimized to 3 children per family.
2. Eighteen generations (n=18), each generation is 92 years. The life-spans are 900 years (x=9). The calculations begin with 10 children per family and are minimized to 3 children per family.
3. Twenty generations (n=20), each generation is 82.8 years. The life-spans are 900 years (x=9). The calculations begin with 10 children per family and are minimized to 3 children per family.
4. Twenty-two generations (n=22), each generation is 75.3 years. The life-spans are 900 years (x=9). The calculations begin with 10 children per family and are minimized to 3 children per family.
Additional data points were generated, this time assuming that the population may have been between 1 to 40 billion. The same life-spans were used as in the previous (900 years). The generation
lengths are also the same (16 to 22). The range of children per family is varied from 8.6 to 4.8. These values appear to be more in line with what is mentioned in scripture. (Although, scripture
isn't clear as to what all cases may have been.)
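A few lines of code reproduce the headline figures directly from the definitions above (a sketch; generation k has size 2c^k, and the most recent x generations are alive at once):

    # Population alive after n generations, starting from one couple.
    def population(n, c, x):
        return sum(2 * c ** k for k in range(n - x + 1, n + 1))

    print(population(20, 3.0, 9))   # 6 children/family, 20 generations: ~10.5 billion
    print(population(18, 3.5, 9))   # 7 children/family, 18 generations: ~17.4 billion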
Although it is difficult to obtain an actual value of world population at the time of the flood, 5 to17 billion people would appear to be reasonable populations, with an average of around 10 billion.
The best ages for childbirth would be 80.8 to 92 years with 6 to 7 children per family. This would be 20 to 18 generations produced from Adam to the Flood in 1656. The Book of Genesis indicates
(Chapter 5) that each family had at least 5 children. Adam and Eve had a total of 7 (including Abel). However, Noah apparently had only 3 children. (It is possible that he could have had sons and
daughters that aren't recorded and who weren't on the ark.)
Genesis Chapter 5 states that each person had "sons and daughters" in addition to the son whose chronology is given. Since a plural is used to describe the number of sons and daughters, a minimum of
two sons and two daughters are assumed. Therefore, a reasonable value would appear to be a range of 5 to 8 children per family. As previously stated, Adam and Eve had seven children. Using 5 to 8
children per family, the population falls with in a range of ~2 billion to 11.5 billion (over the range of 16 to 22 generations). (Refer to last four tables at end of section.) It is interesting that
today's population of approximately 6 billion fall within this category.
The Bible indicates that the mean life span was 900 years before the Flood. However, the calculations indicate that this value has little effect on the total population size. This appears to be due
to the small number of generations used (n) and large values of c. If longer generations were used and fewer children born per family, I would expect x to be a greater factor. Therefore, the number
of children per family (during each generation) has a far greater effect on the population. I believe the longer life-span had more to do with ones impact on society such as accumulation of
knowledge. (Imagine Edison or Einstein living 900 years!)
A reasonable value for the antediluvian childbearing age appears to be approximately 90 years. Genesis uses a range of 65 to 500 years, for the first born in the families that are listed. Noah is the
only one mentioned who waited 500 years before starting his family. We are not told why. Therefore, Noah is an exception to the standard. (As mentioned above, Noah could have had other sons and
daughters who are not recorded.) The other oldest were Methuselah (187 years), Lamech (182 years) and Jared (162 years). The rest began their families at between 65 and 105 years of age (see
time-line chart). Adam is listed as being 130 when Seth was born. We don't know his age at the time Cain and Abel were born. The age of the antediluvians when they started their families is rather
strange in contrast to today. These ages appear to indicate that for some unknown reason, they may have been incapable of reproduction until around the age of 65 years.
As previously mentioned, other ancient documents as well as others such as Barry Setterfield have suggested a time span of 2256 years between creation and the Flood. If this were true, then it would
effect the population results, driving the population far higher. In this case, it would mean the population was much higher or the family sizes were lower than scripture indicates.
If the population reached over a billion, there would tend to be some logistical problems in feeding and caring for the population (clothing, housing, jobs, etc). This indicates that they would have
required a higher level of technology than what we currently give them credit for. Had their population reached over 10 billion, they would have required similar technology as we have today (rail,
refrigerated shipping, sophisticated farming methods, fast and reliable communication, etc).
1. Henry M. Morris, The Genesis Record, Baker Book House, 1976, p. 154
2. Mathcad 7 is a product of Math Soft Inc. Cambridge MA.
3.. Henry M. Morris, The Biblical Basis for Modern Science, Baker Book House, 1984, p.417. Excerpt.
See also World Population Since Creation, also on this web site.
Email: Tom Pickett
April 8, 1998. Revised, April 16, 1998, August 12, 1998. | {"url":"http://www.everlastinglifeministries.com/genesis/preflood.asp","timestamp":"2014-04-16T13:46:35Z","content_type":null,"content_length":"15209","record_id":"<urn:uuid:300fb388-70f9-4fbb-bb98-aaec9828cc1a>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00519-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gay Neck: The Story of a Pigeon
Author(s): Dhan Gopal Mukerji
ISBN: 0525304002 ISBN-13: 9780525304005
Format: Hardcover Pages: 192 Pub. Date: 1968-07-15 Publisher: Dutton Juvenile
List Price: $20.99
Book Description
Gay Neck: The Story of a Pigeon
Book Details:
• Format: Hardcover
• Publication Date: 7/15/1968
• Pages: 192
• Reading Level: Age 8 and Up
Please visit Help Page for Questions regarding ISBN / ISBN-10 / ISBN10, ISBN-13 / ISBN13, EAN / EAN-13, and Amazon ASIN | {"url":"http://www.alldiscountbooks.net/_0525304002_i_.html","timestamp":"2014-04-16T12:14:04Z","content_type":null,"content_length":"34256","record_id":"<urn:uuid:c72b38ef-c105-4b83-8d5a-3f887f7bbd4c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00278-ip-10-147-4-33.ec2.internal.warc.gz"} |
Haciendas Constancia, PR Trigonometry Tutor
Find a Haciendas Constancia, PR Trigonometry Tutor
...I got my undergraduate degrees in physics and economics and my doctor of naturopathic medicine degree from SCNM. I believe that a good tutor gets to know their student and understands what
THEIR problems are and attends to them instead of just trying to hammer all the concepts into their head. ...
14 Subjects: including trigonometry, chemistry, physics, calculus
...I have worked for several years in the satellite business and been involved in electrical power subsystems, solid-state devices, electrical ground support equipment, and other related
disciplines. My undergraduate degree is in aerospace engineering and my masters degree is in systems engineering...
62 Subjects: including trigonometry, English, reading, writing
I have my Associate of Science Degree in Mathematics, and I spent several semesters as a Mathematics Supplemental Instructor holding free sessions for students outside of the classroom (which I
also attended). Though the sessions were free for the students, I was paid an hourly wage. My attendance ...
17 Subjects: including trigonometry, reading, calculus, algebra 1
...Work with me so I can put you on the FAST track to physics success! I recently taught a trigonometry-based physics course at the college level, with students ranging from pre-med to aviation.
I also tutored a student who was rusty on his trig and after 1 session he achieved a perfect 100 on his next 2 quizzes!
20 Subjects: including trigonometry, chemistry, calculus, physics
...I teach in the local community colleges where students wished I had taught them math since elementary school and I found that I truly enjoyed teaching and tutoring as well. I enjoy pointing
the relevancy of numbers in our every day lives though we do not consciously think that we do - seeing the...
16 Subjects: including trigonometry, chemistry, physics, calculus
Strawberry, AZ trigonometry Tutors | {"url":"http://www.purplemath.com/Haciendas_Constancia_PR_trigonometry_tutors.php","timestamp":"2014-04-21T02:38:12Z","content_type":null,"content_length":"25098","record_id":"<urn:uuid:bbbf5b9d-af26-442c-bd21-ce2c633160cc>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00341-ip-10-147-4-33.ec2.internal.warc.gz"} |
An object that has a mass of 0.500 kg is suspended by a wire. An upward force of 5.0 N is applied to the wire.
An object that has a mass of 0.500 kg is suspended by a wire. An upward force of 5.0 N is applied to the wire.
What is the unbalanced force acting on the object?
What is the objects acceleration?
An object that has a mass of 0.5 kg is suspended by a wire. An upward force of 5.0 N is applied to the wire.
There are two forces acting on the object. One of them is a downward force due to the gravitational attraction between the Earth and the object, this is equal to 0.5*9.8 = 4.9 N. The second force is
the upward force of 5 N applied on the object. The unbalanced force acting on the object is 0.1 N in an upward direction.
The acceleration due to this force is equal to 0.1/0.5 = 0.2 m/s^2 in the upward direction.
Join eNotes | {"url":"http://www.enotes.com/homework-help/an-object-that-has-mass-0-500-kg-suspended-by-wire-438991","timestamp":"2014-04-25T04:19:14Z","content_type":null,"content_length":"26116","record_id":"<urn:uuid:501f3518-6f3a-4dbb-920b-2289070a7dd1>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
DOCUMENTA MATHEMATICA, Extra Vol. ICM III (1998), 593-599
Frank Hoppensteadt and Eugene Izhikevich
Title: Canonical Models in Mathematical Neuroscience
Our approach to mathematical neuroscience is not to consider a single model but to consider a large family of neural models. We study the family by converting every member to a simpler model, which
is referred to as being canonical. There are many examples of canonical models \cite{wcnn}. Most of them are derived for families of neural systems near thresholds; that is, near transitions between
the rest state and the state of repetitive spiking. The canonical model approach enables us to study frequency and timing aspects of networks of neurons using frequency domain methods \cite{imn2}. We
use canonical (phase) models to demonstrate our theory of FM interactions in the brain: Populations of cortical oscillators self-organize by frequencies \cite{imn2}; same-frequency sub-population of
oscillators can interact in the sense that a change in phase deviation in one will be felt by the others in the sub-population \cite{wcnn}; and oscillators operating at different frequencies do not
interact in this way. In our theory, sub-networks are identified by the firing frequency of their constituents. Network elements can change their sub-population membership by changing their
frequency, much like tuning to a new station on an FM radio. Also discussed here are mechanisms for changing frequencies obtained in our recent work using similar models to study spatial patterns of
theta and gamma rhythm phase locking in the hippocampus.
1991 Mathematics Subject Classification: Primary 11E16; Secondary 11D09, 11E04, 15A63.
Keywords and Phrases: Neuroscience, canonical models, phase locked loops.
Full text: dvi.gz 11 k, dvi 24 k, ps.gz 98 k.
Home Page of DOCUMENTA MATHEMATICA | {"url":"http://www.emis.de/journals/DMJDMV/xvol-icm/16/Hoppensteadt.MAN.html","timestamp":"2014-04-20T16:22:38Z","content_type":null,"content_length":"2640","record_id":"<urn:uuid:66161165-e7b5-41bd-96a7-16ed45cd6cf1>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00370-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US7024559 - Method of elliptic curve digital signature using expansion in joint sparse form
This invention relates to cryptography and, more particularly, to the generation and verification of a discrete logarithm based digital signature on an elliptic curve using expansion in joint sparse
The field of cryptography has spawned numerous devices and methods such as scramblers, symmetric-key encryptors, and public-key encryptors.
A scrambler is a device that receives an unencrypted message (i.e., plaintext) and produces an encrypted message (i.e., ciphertext). The encryption function of a scrambler is fixed in hardware and
does not change from message to message. One of the problems with a scrambler is that the same plaintext will produce the same ciphertext. An adversary may collect ciphertext messages from a
particular scrambler and compare them against each other in order to analyze a particular ciphertext message. To overcome this problem, the users may change the function of the scrambler
periodically. Such a solution is time consuming and expensive.
Another solution to the problem associated with a scrambler is symmetric-key encryption. A symmetric-key encryptor has two inputs (i.e., plaintext and a cryptographic key). A cryptographic key is a
message, or number, that should appear random to an adversary. A symmetric-key encryptor combines the cryptographic key with the plaintext using a scrambling function in order to generate ciphertext.
The same plaintext may produce different ciphertext if the cryptographic key is changed. Since the cryptographic key is a message, or a number, it is much easier to change than the function of the
scrambler which is built into hardware. In fact, the cryptographic key may be changed on a message-to-message basis without much difficulty. This method is called symmetric-key encryption because the
intended recipient must possess the cryptographic key used to generate the ciphertext in order to recover the plaintext. The intended recipient must also possess a function that performs the inverse
of the scrambling function used to generate the ciphertext. Typically, the inverse of the scrambling function may be achieved by operating the scrambling function in reverse. If this is the case, the
intended recipient must possess the same cryptographic key and scrambling function used to generate the ciphertext in order to recover the plaintext.
Even though symmetric-key encryptors make the fastest encryptors, they suffer from a few problems. The first problem is distributing cryptographic keys to authorized users in a secure fashion. A
courier may be required to deliver the first cryptographic key to the users. This is time consuming and expensive. The second problem is knowing whether or not ciphertext came from a particular
person. Anyone knowing the cryptographic key may encrypt or decrypt a message produced using a symmetric-key encryptor as long as they know the cryptographic key, the scrambling function, and the
descrambling function. U.S. Pat. No. 4,200,770, entitled “CRYPTOGRAPHIC APPARATUS AND METHOD,” discloses a device for and method of performing a cryptographic key exchange over a public channel. The
method is often called a public-key key exchange method or the Diffie-Hellman key exchange method after the first two named inventors of U.S. Pat. No. 4,200,770. The Diffie-Hellman key exchange
method uses the exponentiation function to allow two users to conceal and transmit their secret information to the other user. The users then combine what they received with their secret information
in order to generate the same cryptographic key. To recover the secret information that was transmitted and construct the cryptographic key, an adversary would have to find the logarithm of what was
transmitted. If the values involved are large enough the logarithm, or discrete log, problem is believed to be intractable. U.S. Pat. No. 4,200,770 is hereby incorporated by reference into the
specification of the present invention. The Diffie-Hellman key exchange method offers a solution to the symmetric-key key distribution problem, but it does not solve the problem of verifying the
identity of the sender of the ciphertext.
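For concreteness, a toy version of that exchange can be sketched in Python; the modulus, generator, and secret exponents below are hypothetical small values chosen for readability, not secure parameters:

```python
# Toy Diffie-Hellman key exchange (illustrative only; real parameters
# use primes of 2048+ bits).  p, g and the two secret exponents are
# made-up small numbers.
p, g = 23, 5                  # public: small prime modulus and generator

a = 6                         # User A's secret exponent
b = 15                        # User B's secret exponent

A = pow(g, a, p)              # A transmits g^a mod p
B = pow(g, b, p)              # B transmits g^b mod p

key_a = pow(B, a, p)          # A computes (g^b)^a mod p
key_b = pow(A, b, p)          # B computes (g^a)^b mod p
assert key_a == key_b         # both hold the shared secret g^(ab) mod p
```

An eavesdropper sees only p, g, g^a and g^b; recovering the shared key requires solving for one of the secret exponents, i.e., taking a discrete log.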
Asymmetric-key, or public-key, encryption was proposed as a solution to identifying the sender of the ciphertext. This problem is often referred to as being able to provide, and verify, a digital
signature. Two different, but mathematically related, cryptographic keys are used in asymmetric-key, or public-key, encryption. Typically, a first, or secret, key is used to generate ciphertext while
a second, or public, key is used to recover the plaintext. Each user possesses their own secret key and mathematically related public key. Each user keeps their secret key secret and makes their
public key public. A first user may now generate ciphertext using their secret key and a second user may recover the corresponding plaintext using the corresponding public key. If the first user is
the only person who knows the first user's secret key then the second user is assured that the ciphertext came from the first user.
In the example just given, anyone knowing the first user's public key, which is everyone, could recover the corresponding plaintext. If two users wish to communicate securely with some assurance that
the message is from a particular person, the first user would first encrypt the plaintext using the first user's secret key, then use the intended recipient's public key to encrypt the resulting ciphertext together with something that identifies the first user. The recipient would then use their secret key to recover the ciphertext and the identification material. The identification material is then used to identify the public
key of the first user. The first user's public key is then used to recover the plaintext. If the first user is the only one who knows the first user's secret key and the intended recipient is the
only one who knows the recipient's secret key then the recipient is the only one who can recover the plaintext and is assured that the ciphertext came from the first user.
U.S. Pat. No. 4,405,829, entitled “CRYPTOGRAPHIC COMMUNICATIONS SYSTEM AND METHOD,” discloses one type of public-key encryption device and method known as RSA after the three named inventors, Messrs.
Rivest, Shamir, and Adleman. Although RSA uses exponentiation, an adversary is required to factor the product of two prime numbers used to generate the secret key from the chosen public key in order
to recover plaintext. If the prime numbers are large enough, it is believed that the factoring problem is intractable. U.S. Pat. No. 4,405,829 is hereby incorporated into the specification of the
present invention.
Taher ElGamal developed a public-key digital signature scheme based on the extended Euclidean algorithm. In this scheme, a first user generates a secret value x as the first user's secret key. The
first user uses exponentiation to conceal the secret key and publishes the result (i.e., y=g^x mod p) as the first user's public key. The first user then generates a random number k and uses
exponentiation to conceal the random number (i.e., r=g^k mod p). The result r is one of two values that will be used as a signature for a message m from the first user. Next, the first user generates
an equation that includes the message m, the secret key x, the random number k, the first half of the signature r, and a variable that represents the second half of the signature s (i.e., m=xr+ks
(mod p−1)). The first user then solves the equation for s and transmits the message, the public key, and the two halves of the signature (i.e., r,s) to the recipient. The recipient, knowing p and g,
checks to see if (y^r)(r^s) mod p=g^m mod p. If so, the recipient is assured that the transmission came from the first user.
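A small numerical sketch of the scheme as described (the values of p, g, x, k, and m below are illustrative only; a real system signs a hash of the message, and `pow(k, -1, p - 1)` needs Python 3.8+):

```python
# Toy ElGamal signature generation and verification.
p, g = 467, 2
x = 127                        # signer's secret key
y = pow(g, x, p)               # public key  y = g^x mod p

m = 100                        # message, as an integer
k = 213                        # per-signature random value, gcd(k, p-1) = 1
r = pow(g, k, p)               # first half of the signature

# solve  m = x*r + k*s (mod p-1)  for s
k_inv = pow(k, -1, p - 1)
s = (k_inv * (m - x * r)) % (p - 1)

# recipient's check:  y^r * r^s == g^m (mod p)
assert (pow(y, r, p) * pow(r, s, p)) % p == pow(g, m, p)
```

The check works because y^r r^s = g^(xr+ks) = g^m (mod p) whenever xr+ks ≡ m (mod p−1).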
The math associated with the ElGamal's digital signature scheme is complex and the digital signature is rather long. U.S. Pat. No. 4,995,082, entitled “METHOD FOR IDENTIFYING SUBSCRIBERS AND FOR
GENERATING AND VERIFYING ELECTRONIC SIGNATURES IN A DATA EXCHANGE SYSTEM," discloses a method of generating a shorter digital signature in a secure manner using different and less complex
mathematics. U.S. Pat. No. 4,995,082 is hereby incorporated by reference into the specification of the present invention.
U.S. Pat. No. 5,231,668, entitled “DIGITAL SIGNATURE ALGORITHM,” improves upon the digital signature of ElGamal by reducing the size of the digital signature but maintaining the mathematical
complexity. U.S. Pat. No. 5,231,668 is hereby incorporated by reference into the specification of the present invention.
U.S. Pat. No. 5,497,423, entitled “METHOD OF IMPLEMENTING ELLIPTIC CURVE CRYPTOSYSTEMS IN DIGITAL SIGNATURES OR VERIFICATION AND PRIVACY COMMUNICATION”; U.S. Pat. No. 5,581,616, entitled “METHOD AND
APPARATUS FOR DIGITAL SIGNATURE AUTHENTICATION”; U.S. Pat. No. 5,600,725, entitled “DIGITAL SIGNATURE METHOD AND KEY AGREEMENT METHOD”; U.S. Pat. No. 5,604,805, entitled “PRIVACY-PROTECTED TRANSFER
OF ELECTRONIC INFORMATION”; U.S. Pat. No. 5,606,617, entitled “SECRET-KEY CERTIFICATES”: and U.S. Pat. No. 5,761,305, entitled “KEY-AGREEMENT AND TRANSPORT PROTOCOL WITH IMPLICIT SIGNATURES,”
disclose either an elliptic curve version of the above-identified digital signature schemes or a different digital signature scheme. None of these elliptic curve digital signature schemes disclose a
method of generating and verifying a digital signature such that the number of elliptic curve operations is minimized as does the present invention.
The cryptographic strength of any method based on the Digital Signature Algorithm is based on the apparent intractability of finding a discrete logarithm, or discrete log, under certain conditions.
In order for an adversary to recover concealed information, the adversary must be able to perform the inverse of exponentiation (i.e., a logarithm). There are mathematical methods for finding a
discrete logarithm (e.g., the Number Field Sieve), but these algorithms cannot be done in any reasonable time using sophisticated computers if certain conditions are met during the construction of a
transmission that conceals information (e.g., the numbers involved are large enough).
More precisely, the cryptographic strength of the Digital Signature Algorithm is based on the difficulty of computing discrete logs in a finite cyclic group. Mathematically, the discrete log problem
is as follows. Let G be a finite cyclic group of order q, where g is a generator of G. Let r be a secret number such that 0<r<q. Given G, q, g, and g^r, where "^" denotes exponentiation, the discrete log problem is to find r; r is called the discrete logarithm, or discrete log, of g^r.
In a Diffie-Hellman key exchange, two users (e.g., User A and User B) agree on a common G, g, and q. In practice, the most common choice for G is the integers mod n, where n is an integer.
Large digital signatures pose problems not only for the adversary but also for the users. Large digital signatures require large amounts of computational power and require large amounts of time in
order to generate and use the digital signature. Cryptographers are always looking for ways to quickly generate the shortest digital signatures possible that meet the cryptographic strength required
to protect the digital signature. The payoff for finding such a method is that cryptography can be done faster, cheaper, and in devices that do not have large amounts of computational power (e.g.,
hand-held smart-cards).
The choice of the group G is critical in a cryptographic system. The discrete log problem may be more difficult in one group and, therefore, cryptographically stronger than in another group, allowing
the use of smaller parameters but maintaining the same level of security. Working with small numbers is easier than working with large numbers. Small numbers allow the cryptographic system to be
higher performing (i.e., faster) and requires less storage. So, by choosing the right group, a user may be able to work with smaller numbers, make a faster cryptographic system, and get the same, or
better, cryptographic strength than from another cryptographic system that uses larger numbers.
The classical choice for G in a digital signature scheme is the integers mod n, where n is an integer. In 1985, Victor Miller and Neal Koblitz each suggested choosing G from elliptic curves. It
is conjectured that choosing such a G allows the use of much smaller parameters, yet the discrete log problem using these groups is as difficult, or more difficult, than integer-based discrete log
problems using larger numbers. This allows the users to generate a digital signature that has the same, or better, cryptographic strength as a digital signature generated from an integer G and is
shorter than the integer-based digital signature. Since shorter digital signatures are easier to deal with, a cryptographic system based on a shorter digital signature may be faster, cheaper, and
implemented in computationally-restricted devices. So, an elliptic curve Digital Signature Algorithm is an improvement over an integer-based Digital Signature Algorithm.
More precisely, an elliptic curve is defined over a field F. An elliptic curve is the set of all ordered pairs (x,y) that satisfy a particular cubic equation over a field F, where x and y are each
members of the field F. Each ordered pair is called a point on the elliptic curve. In addition to these points, there is another point O called the point at infinity. The infinity point is the
additive identity (i.e., the infinity point plus any other point results in that other point). For cryptographic purposes, elliptic curves are typically chosen with F as the integers mod p for some
large prime number p (i.e., F[p]) or as the field of 2^m elements (i.e., F[2^m]).
Multiplication or, more precisely, scalar multiplication is the dominant operation in elliptic curve cryptography. The speed at which multiplication can be done determines the performance of an
elliptic curve method.
Multiplication of a point P on an elliptic curve by an integer k may be realized by a series of additions (i.e., kP=P+P+ . . . +P, where the number of Ps is equal to k). This is very easy to
implement in hardware since only an elliptic adder is required, but it is very inefficient. That is, the number of operations is equal to k which may be very large.
The classical approach to elliptic curve multiplication is a double and add approach. For example, if a user wishes to realize kP, where k=25 then 25 is first represented as a binary expansion of 25.
That is, 25 is represented as a binary number 11001. Next, P is doubled a number of times equal to the number of bits in the binary expansion minus 1. For ease in generating an equation of the number
of operations, the number of doubles is taken as m rather than m−1. The price for simplicity here is being off by 1. In this example, the doubles are 2P, 4P, 8P, and 16P. The doubles correspond to
the bit locations in the binary expansion of 25 (i.e., 11001), except for the 1s bit. The doubles that correspond to bit locations that are 1s are then added along with P if the 1s bit is a 1. The
number of adds equals the number of 1s in the binary expansion. In this example, there are three additions since there are three 1s in the binary expansion of 25 (i.e., 11001). So, 25P=16P+8P+P.
On average, there are m/2 1s in k. This results in m doubles and m/2 additions for a total of 3m/2 operations. Since the number of bits in k is always less than the value of k, the double and add
approach requires fewer operations than does the addition method described above. Therefore, the double and add approach is more efficient (i.e., faster) than the addition approach.
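The double and add loop can be sketched generically; integer addition stands in below for the elliptic group law, so the operation counts carry over even though none of the curve arithmetic is shown:

```python
# Double-and-add scalar multiplication for any group, given its "add"
# operation and identity element.  With integer addition this just
# computes k*P, but the same loop drives elliptic curve point
# multiplication when `add` is the curve's point addition (a sketch;
# field arithmetic and the point at infinity are abstracted away).
def double_and_add(k, P, add, identity):
    result = identity
    doubled = P
    for bit in bin(k)[2:][::-1]:            # bits of k, least significant first
        if bit == '1':
            result = add(result, doubled)   # one "add" per 1 bit
        doubled = add(doubled, doubled)     # one "double" per bit
    return result

print(double_and_add(25, 7, lambda a, b: a + b, 0))   # 175 = 25 * 7
```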
While working on an elliptic curve allows smaller parameters relative to a modular arithmetic based system offering the same security, some of the efficiency advantage of smaller parameters is offset
by the added complexity of doing arithmetic on an elliptic curve as opposed to ordinary modular arithmetic. For purposes of determining efficiency, elliptic doubles and elliptic additions are often
grouped and considered elliptic operations. To gain even more efficiency advantages by going to elliptic curves, cryptographers seek ways to reduce the cost of an elliptic curve operation, or reduce
the number of elliptic operations required. An elliptic curve method that requires fewer operations, or more efficiently executable operations, would result in an increase in the speed, or
performance, of any device that implements such a method.
It is no more costly to do elliptic curve subtractions than it is to do elliptic curve additions. Therefore, a doubles and add approach to doing elliptic curve multiplication may be modified to
include subtraction where appropriate. There are an infinite number of ways to represent an integer as a signed binary expansion. The negative 1s in a signed binary expansion indicate subtraction in
a double/add/subtract method while the positive 1s in the signed binary expansion indicate addition in the double/add/subtract method. For example, 25 may be represented as an unsigned binary number
11001 (i.e., 16+8+1=25) or as one possible signed binary number “1 0−1 0 0 1” (i.e., 32−8+1=25).
In an article entitled “Speeding Up The Computations On An Elliptic Curve Using Addition-Subtraction Chains”, authored by Francois Morain and Jorge Olivos, published in Theoretical Informatics and
Applications, Vol. 24, No. 6, 1990, pp. 531–544, the authors disclose an improvement to the double-add-subtract method mentioned above by placing a restriction on the signed binary expansion that
results in fewer elliptic additions being required to do an elliptic curve multiplication and, therefore, increase the performance (i.e., speed) of elliptic curve multiplication. Messrs. Morain and
Olivos proposed generating a signed binary expansion such that no two adjacent bit locations in the signed binary expansion are non-zero (i.e., two 1s, irrespective of polarity, may not be next to
each other). Such a signed binary expansion is called a non-adjacent form (NAF) of a signed binary expansion. It has been shown that a NAF signed binary expansion is unique (i.e., each integer has
only one NAF signed binary expansion) and contains the minimum number of 1s, irrespective of polarity. By minimizing the 1s, the number of additions is minimized. The improvement proposed by Messrs.
Morain and Olivos still requires m doubles but only requires an average of m/3 additions for a total of 4m/3 elliptic curve operations. This is less than the 3m/2 elliptic curve operations required
by the classical double and add method described above.
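A short sketch of computing a NAF; the digit rule below (subtract 1 when k ≡ 1 (mod 4), subtract −1 when k ≡ 3 (mod 4)) is the standard one and reproduces the 32−8+1 expansion of 25 quoted above:

```python
# Non-adjacent form (NAF) of a positive integer, as used by the
# Morain-Olivos double/add/subtract method.  Digits lie in {-1, 0, 1}
# and no two adjacent digits are nonzero.
def naf(k):
    digits = []
    while k > 0:
        if k % 2 == 1:
            d = 2 - (k % 4)        # +1 if k = 1 (mod 4), -1 if k = 3 (mod 4)
            k -= d                 # makes the remaining value divisible by 4
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits[::-1]            # most significant digit first

print(naf(25))   # [1, 0, -1, 0, 0, 1], i.e.  32 - 8 + 1 = 25
```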
The most expensive part of the digital signature verification process is that of computing the expressions cW+gR, where c and g are integers and W and R are points on the curve. Thus, it is
particularly important to optimize the efficiency of this operation.
The most straightforward way to evaluate cW+gR is to evaluate cW and gR separately and add the results. However, it turns out to be more efficient to evaluate the entire expression at once. Such a
method is commonly referred to as twin multiplication.
The simplest twin multiplication method was first disclosed by E. G. Straus and later rediscovered by A. Shamir and disclosed in an article by T. ElGamal entitled “A Public Key Cryptosystem and a
Signature Scheme Based on Discrete Logarithms,” IEEE Transactions On Information Theory, Vol. IT-31, No. 4, July 1985. The method is based on the binary method which uses an ordinary binary expansion
of c and g. Therefore, the Straus-Shamir method is a double-add method for twin multiplication. It is more efficient to use the analogous method that works with signed binary expansions; this is
called the double-add-subtract method for twin multiplication. Like the binary method, the double-add-subtract method for twin multiplication works in a general group setting.
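The Straus-Shamir loop can be sketched the same way, again with integer addition standing in for the group law; the point is that the two scalars share a single run of doublings:

```python
# Straus-Shamir simultaneous ("twin") multiplication: evaluate
# c*W + g*R with one shared sequence of doublings instead of two.
# A sketch over an arbitrary group, as in the earlier example.
def twin_multiply(c, g, W, R, add, identity):
    bits = max(c.bit_length(), g.bit_length())
    result = identity
    for i in range(bits - 1, -1, -1):        # most significant bit first
        result = add(result, result)         # one double per bit position
        if (c >> i) & 1:
            result = add(result, W)
        if (g >> i) & 1:
            result = add(result, R)
    return result

print(twin_multiply(25, 13, 7, 3, lambda a, b: a + b, 0))  # 25*7 + 13*3 = 214
```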
It is an object of the present invention to generate and verify a cryptographic digital signature in a manner that minimizes the number of elliptic curve operations.
It is another object of the present invention to generate and verify a cryptographic digital signature in a manner that minimizes the number of elliptic curve operations using binary expansion in
joint sparse form.
The present invention is a method of generating and verifying a cryptographic digital signature using joint sparse expansion.
The first step of the method is selecting, by a signer, a finite field, an elliptic curve, a point P on the elliptic curve, an integer w, and an integer k.
The second step of the method is generating a point W=wP and a point K=kP.
The third step of the method is transforming K to a bit string K*.
The fourth step of the method is combining K*, W, and a message M in a first manner to produce h.
The fifth step of the method is combining K*, W, and M in a second manner to produce c.
The sixth step of the method is generating s.
The seventh step of the method is forming the cryptographic digital signature as (K*,s).
The eighth step of the present method is acquiring, by a verifier, the finite field F, the elliptic curve E, the point P, the point W, the message M, and the cryptographic digital signature (K*,s).
The ninth step of the present method is computing h and c.
The tenth step of the present method is selecting (n[0], n[1]).
The eleventh step of the method is generating binary expansions of n[0] and n[1] in joint sparse form.
The twelfth step of the method is computing Q=n[0]P+n[1]W via twin multiplication and a double-add-subtract method with the binary expansions in joint sparse form.
The thirteenth step of the present method is transforming, by the verifier, Q to Q* in the same manner as K was transformed to K* in the third step 3.
The fourteenth, and last, step of the method is verifying the digital signature if Q*=K*.
FIG. 1 is a list of steps of the digital signature method of the present invention for a first type of digital signature;
FIG. 2 is a list of steps of the binary expansion in joint sparse form of the present invention;
FIG. 3 is a list of steps of selecting u[0,j];
FIG. 4 is a list of steps of selecting u[1,j];
FIG. 5 is a list of steps of updating values; and
FIG. 6 is a list of steps of the digital signature method of the present invention for a first type of digital signature.
The present invention is a method of generating and verifying a cryptographic digital signature using joint sparse expansion. The present invention uses two families of elliptic curves to generate
and verify two types of digital signatures. FIG. 1 lists the steps of the present invention for generating and verifying the first type of digital signature.
The first step 1 of the present method is acquiring or selecting, by a signer, a finite field F, an elliptic curve E, a point P on the elliptic curve, an integer w, and an integer k. The elliptic
curve is defined over the finite field F. The number of points on the elliptic curve is divisible by q, where q is a prime number. The point P on the elliptic curve is of order q. Each user (i.e.,
signer and verifier) knows the order q. E, P, and q may be publicly known parameters.
The second step 2 of the present method is generating, by the signer, a point W=wP and a point K=kP.
The third step 3 of the present method is transforming, by the signer, K to a bit string K*. A suitable transformation is to make K* the x coordinate of the point K.
The fourth step 4 of the present method is combining, by the signer, K*, W, and a message M in a first manner to produce h, where h is an integer modulo q.
The fifth step 5 of the present method is combining, by the signer, K*, W, and the message M in a second manner to produce c, where c is an integer modulo q.
The sixth step 6 of the present method is generating, by the signer, s using one of the following equations:
s=hw+ck(mod q),
s=(hw+c)/k(mod q), and
s=(hk+c)/w(mod q).
The seventh step 7 of the present method is forming, by the signer, the cryptographic digital signature as (K*,s).
The eighth step 8 of the present method is acquiring, by a verifier, the finite field F, the elliptic curve E, the point P, the point W, the message M, and the cryptographic digital signature (K*,s).
The ninth step 9 of the present method is computing, by the verifier, h and c in the same manner as the signer did in the fourth step 4 and the fifth step 5, respectively.
The tenth step 10 of the present method is selecting, by the verifier, a pair of components (n[0], n[1]) from the following pairs of components:
(n[0], n[1]) = (sc^−1 (mod q), −hc^−1 (mod q)),
(n[0], n[1]) = (cs^−1 (mod q), hs^−1 (mod q)), and
(n[0], n[1]) = (−ch^−1 (mod q), sh^−1 (mod q)).
The pair of components selected in the tenth step 10 corresponds, according to position, to the equation selected in the sixth step 6. For example, if the first equation in the list of equations was
selected in the sixth step 6 then the first pair of components in the list of pairs of components is selected in the tenth step 10.
The eleventh step 11 of the present method is generating, by the verifier, binary expansions of n[0] and n[1] to minimize a number of nonzero columns for the binary expansions. Such an expansion is referred to in the present invention as a binary expansion in joint sparse form. FIG. 2, described in detail below, lists steps for performing a binary expansion on (n[0], n[1]) to minimize the number of nonzero columns.
The twelfth step 12 of the present method listed in FIG. 1 is computing, by the verifier, a point Q=n[0]P+n[1]W via twin multiplication and a double-add-subtract method with the binary expansions
generated in the eleventh step 11. A double-add-subtract method is described in the Background section above.
The thirteenth step 13 of the present method is transforming, by the verifier, Q to Q* in the same manner as K was transformed to K* in the third step 3.
The fourteenth, and last, step 14 of the present method is verifying, by the verifier, the cryptographic digital signature (K*,s) by determining whether or not Q*=K*. If Q*=K* the digital signature
is verified. Otherwise, the digital signature is not verified and is rejected.
FIG. 2 is a list of steps for generating the binary expansions of n[0] and n[1] in joint sparse form in the eleventh step 11 listed in FIG. 1 and described above.
The first step 21 of the method of generating a binary expansion in joint sparse form is setting k[0]=n[0], k[1]=n[1], j=0, d[0]=0, and d[1]=0.
If d[0]+k[0]=0 and d[1]+k[1]=0 then the second step 22 of the method of generating a binary expansion in joint sparse form is setting m=j−1 and putting out (u[0,m], u[0,m-1], . . . , u[0,0]) as the binary expansion for n[0] and (u[1,m], u[1,m-1], . . . , u[1,0]) as the binary expansion for n[1] and stopping. Otherwise, proceeding to the next step.
The third step 23 of the method of generating a binary expansion in joint sparse form is selecting u[0,j]. The steps for selecting u[0,j] are listed in FIG. 3 and described below.
The fourth step 24 of the method of generating a binary expansion in joint sparse form is selecting u[1,j]. The steps for selecting u[1,j] are listed in FIG. 4 and described below.
The fifth step 25 of the method of generating a binary expansion in joint sparse form is updating d[0] and k[0]. The steps for updating d[0] and k[0] are listed in FIG. 5 and described below.
The sixth step 26 of the method of generating a binary expansion in joint sparse form is updating d[1] and k[1]. The steps for updating d[1] and k[1] are listed in FIG. 5 and described below.
The seventh, and last, step 27 of the method of generating a binary expansion in joint sparse form is setting j=j+1 and returning to the second step 22.
FIG. 3 is a list of steps for selecting u[0,j] in the method of generating the binary expansions in joint sparse form listed in FIG. 2 and described above.
If d[0]+k[0] is even then the first step 31 of the method of selecting u[0,j] is setting u[0,j]=0.
If d[1]+k[1]=2 (mod 4) and d[0]+k[0]=1 (mod 8) then the second step 32 of the method of selecting u[0,j] is setting u[0,j]=1.
If d[1]+k[1]=2 (mod 4) and d[0]+k[0]=3 (mod 8) then the third step 33 of the method of selecting u[0,j] is setting u[0,j]=1.
If d[1]+k[1]=2 (mod 4) and d[0]+k[0]=5 (mod 8) then the fourth step 34 of the method of setting u[0,j] is setting u[0,j]=−1.
If d[1]+k[1]=2 (mod 4) and d[0]+k[0]=7 (mod 8) then the fifth step 35 of the method of setting u[0,j] is setting u[0,j]=−1.
If d[1]+k[1] is not equal to 2 (mod 4) and d[0]+k[0]=1 (mod 4) then the sixth step 36 of the method of setting u[0,j] is setting u[0,j]=1.
If d[1]+k[1] is not equal to 2 (mod 4) and d[0]+k[0]=3 (mod 4) then the seventh, and last, step 37 of the method of setting u[0,j] is setting u[0,j]=−1.
FIG. 4 is a list of steps for selecting u[1,j] in the method of generating the binary expansions in joint sparse form listed in FIG. 2 and described above.
If d[1]+k[1] is even then the first step 41 of the method of selecting u[1,j] is setting u[1,j]=0.
If d[0]+k[0]=2 (mod 4) and d[1]+k[1]=1 (mod 8) then the second step 42 of the method of selecting u[1,j] is setting u[1,j]=1.
If d[0]+k[0]=2 (mod 4) and d[1]+k[1]=3 (mod 8) then the third step 43 of the method of selecting u[1,j] is setting u[1,j]=1.
If d[0]+k[0]=2 (mod 4) and d[1]+k[1]=5 (mod 8) then the fourth step 44 of the method of selecting u[1,j] is setting u[1,j]=−1.
If d[0]+k[0]=2 (mod 4) and d[1]+k[1]=7 (mod 8) then the fifth step 45 of the method of selecting u[1,j] is setting u[1,j]=−1.
If d[0]+k[0] is not equal to 2 (mod 4) and d[1]+k[1]=1 (mod 4) then the sixth step 46 of the method of selecting u[1,j] is setting u[1,j]=1.
If d[0]+k[0] is not equal to 2 (mod 4) and d[1]+k[1]=3 (mod 4) then the seventh, and last, step 47 of the method of selecting u[1,j] is setting u[1,j]=−1.
FIG. 5 is a list of steps for updating d[0] and k[0] (i.e., step 25) and updating d[1] and k[1] (i.e., step 26) in the method of generating the binary expansions in joint sparse form listed in FIG. 2 and described above. For updating d[0] and k[0] (i.e., step 25), subscript i is set to 0. For updating d[1] and k[1] (i.e., step 26), subscript i is set to 1.
If d[i]=0 and u[i,j]=−1, then the first step 51 of the method of updating d[i] and k[i] is setting d[i]=1.
If d[i]=1 and u[i,j]=1, then the second step 52 of the method of updating d[i] and k[i] is setting d[i]=0.
If k[i] is odd then the third step 53 of the method of updating d[i] and k[i] is setting k[i]=k[i]−1.
The fourth, and last, step 54 of the method of updating d[i] and k[i] is setting k[i]=k[i]/2.
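The steps listed in FIGS. 2 through 5 transcribe directly into the following sketch. It assumes nonnegative inputs and emits digits least-significant first (the second step 22 writes them out in the opposite order); the check values 53 and 102 are arbitrary:

```python
# A sketch of the joint sparse form routine of FIGS. 2-5.  The names
# k, d, u mirror k[i], d[i], u[i,j] in the text above.
def jsf(n0, n1):
    k, d = [n0, n1], [0, 0]
    u = [[], []]
    while k[0] + d[0] != 0 or k[1] + d[1] != 0:
        l = [d[0] + k[0], d[1] + k[1]]      # values tested in FIGS. 3 and 4
        for i in (0, 1):                    # FIG. 3 (i=0) and FIG. 4 (i=1)
            if l[i] % 2 == 0:
                digit = 0
            elif l[1 - i] % 4 == 2:
                digit = 1 if l[i] % 8 in (1, 3) else -1
            else:
                digit = 1 if l[i] % 4 == 1 else -1
            u[i].append(digit)
        for i in (0, 1):                    # FIG. 5 updates, steps 51-54
            if d[i] == 0 and u[i][-1] == -1:
                d[i] = 1
            elif d[i] == 1 and u[i][-1] == 1:
                d[i] = 0
            if k[i] % 2 == 1:
                k[i] -= 1
            k[i] //= 2
    return u

u0, u1 = jsf(53, 102)
assert sum(dig * 2**j for j, dig in enumerate(u0)) == 53
assert sum(dig * 2**j for j, dig in enumerate(u1)) == 102
print(sum(a != 0 or b != 0 for a, b in zip(u0, u1)))   # nonzero columns
```

Each nonzero column costs one group addition in the twin double-add-subtract loop, so minimizing nonzero columns is exactly what speeds up computing Q=n[0]P+n[1]W.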
FIG. 6 lists the steps of the present invention for generating and verifying the second type of digital signature.
The first step 61 of the present method is acquiring or selecting, by a signer, a finite field F, an elliptic curve E, a point P on the elliptic curve, an integer w, and an integer k. The elliptic
curve is defined over the finite field F. The number of points on the elliptic curve is divisible by q, where q is a prime number. The point P on the elliptic curve is of order q. Each user (i.e.,
signer and verifier) knows the order q. E, P, and q may be publicly known parameters.
The second step 62 of the present method is generating, by the signer, a point W=wP and a point K=kP.
The third step 63 of the present method is transforming, by the signer, K to a bit string K*. A suitable transformation is to make K* the x coordinate of the point K.
The fourth step 64 of the present method is combining, by the signer, K*, W, and a message M in a first manner to produce h, where h is an integer modulo q.
The fifth step 65 of the present method is combining, by the signer, K*, W, and the message M in a second manner to produce c, where c is an integer modulo q.
The sixth step 66 of the present method is generating, by the signer, s using one of the following equations:
s=hw+ck(mod q),
s=(hw+c)/k(mod q), and
s=(hk+c)/w(mod q).
The seventh step 67 of the present method is forming, by the signer, the cryptographic digital signature as (h,s).
The eighth step 68 of the present method is acquiring, by a verifier, the finite field F, the elliptic curve E, the point P, the point W, the message M, and the cryptographic digital signature (h,s).
The ninth step 69 of the present method is computing, by the verifier, c in the same manner as the signer did in the fifth step 65.
The tenth step 70 of the present method is selecting, by the verifier, a pair of components (n[0], n[1]) from the following pairs of components:
(n[0], n[1]) = (sc^−1 (mod q), −hc^−1 (mod q)),
(n[0], n[1]) = (cs^−1 (mod q), hs^−1 (mod q)), and
(n[0], n[1]) = (−ch^−1 (mod q), sh^−1 (mod q)).
The pair of components selected in the tenth step 70 corresponds, according to position, to the equation selected in the sixth step 66. For example, if the first equation in the list of equations was
selected in the sixth step 66 then the first pair of components in the list of pairs of components is selected in the tenth step 70.
The eleventh step 71 of the present method is generating, by the verifier, binary expansions of n[0] and n[1] to minimize a number of nonzero columns for the binary expansions. FIG. 2, described above, lists the steps for performing a binary expansion on (n[0], n[1]) to minimize the number of nonzero columns.
The twelfth step 72 of the present method listed in FIG. 6 is computing, by the verifier, a point Q=n[0]P+n[1]W via twin multiplication and a double-add-subtract method with the binary expansions
generated in the eleventh step 71. A double-add-subtract method is described in the Background section above.
The thirteenth step 73 of the present method is transforming, by the verifier, Q to Q* in the same manner as K was transformed to K* in the third step 63.
The fourteenth step 74 of the present method is combining, by the verifier, M, Q*, and W to produce h* in the same manner as M, K*, and W were combined in the fourth step 64.
The fifteenth, and last, step 75 of the present method is verifying the cryptographic digital signature (h,s) by determining whether or not h=h*. If h=h* then verify the digital signature. Otherwise,
reject the digital signature and do not verify it. | {"url":"http://www.google.com/patents/US7024559?ie=ISO-8859-1&dq=patent:7076806","timestamp":"2014-04-20T14:21:57Z","content_type":null,"content_length":"105863","record_id":"<urn:uuid:a1bc99f2-ed22-4b2a-893c-0aedc179c0ee>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lagrange Points of the Earth-Moon System
A mechanical system with three objects, say the Earth, Moon and Sun, constitutes a three-body problem. The three-body problem is famous in both mathematics and physics circles, and mathematicians in
the 1950s finally managed an elegant proof that it is impossible to solve. However, approximate solutions can be very useful, particularly when the masses of the three objects differ greatly.
For the Sun-Earth-Moon system, the Sun's mass is so dominant that it can be treated as a fixed object and the Earth-Moon system treated as a two-body system from the point of view of a reference
frame orbiting the Sun with that system. 18th century mathematicians Leonhard Euler and Joseph-Louis Lagrange discovered that there were five special points in this rotating reference frame where a
gravitational equilibrium could be maintained. That is, an object placed at any one of these five points in the rotating frame would stay there, with the effective forces with respect to this frame
canceling. Such an object would then orbit the Sun, maintaining the same relative position with respect to the Earth-Moon system. These five points were named Lagrange points and numbered from L1 to L5.
The Lagrange points L4 and L5 constitute stable equilibrium points, so that an object placed there would be in a stable orbit with respect to the Earth and Moon. With small departures from L4 or L5,
there would be an effective restoring force to bring a satellite back to the stable point.
The L5 point was the focus of a major proposal for a colony in "The High Frontier" by Gerard K. O'Neill and a major effort was made in the 1970's to work out the engineering details for creating such
a colony. There was an active "L5 Society" that promoted the ideas of O'Neill. The L4 and L5 points make equilateral triangles with the Earth and Moon.
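A quick numerical check of that equilateral-triangle geometry (distances in km; putting the origin at the Earth's center is a simplification, since the rotating frame is really centered on the Earth-Moon barycenter):

```python
# L4 sits 60 degrees ahead of the Moon along its orbit, L5 sixty
# degrees behind; both are equidistant from the Earth and the Moon.
import math

d = 384_400.0                      # mean Earth-Moon distance, km
moon = (d, 0.0)
l4 = (d * math.cos(math.pi / 3),  d * math.sin(math.pi / 3))
l5 = (d * math.cos(math.pi / 3), -d * math.sin(math.pi / 3))

for p in (l4, l5):
    r_earth = math.hypot(p[0], p[1])
    r_moon = math.hypot(p[0] - moon[0], p[1] - moon[1])
    print(round(r_earth), round(r_moon))   # both distances print 384400
```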
The Lagrange points L1, L2 and L3 would not appear to be so useful because they are unstable equilibrium points. Like balancing a pencil on its point, keeping a satellite there is theoretically
possible, but any perturbing influence will drive it out of equilibrium. However, in practice these Lagrange points have proven to be very useful indeed since a spacecraft can be made to execute a
small orbit about one of these Lagrange points with a very small expenditure of energy. They have provided useful places to "park" a spacecraft for observations. These orbits around L1 and L2 are
often called "halo orbits". L3 is on the opposite side of the Sun from the Earth, so is not so easy to use. It might be a good place to hide something, since we never see it - fertile ground for
science fiction!
The Lagrange point L2 has been used for the Wilkinson Microwave Anisotropy Probe (WMAP). L2 is positioned outside the Earth's orbit so that the WMAP can always face away from both the Sun and the
Earth, an important feature of a deep-space probe so that it can employ ultra-sensitive detectors without the danger of them being "blinded" by looking at the Sun or the Earth.
Orbit concepts | {"url":"http://hyperphysics.phy-astr.gsu.edu/hbase/Mechanics/lagpt.html","timestamp":"2014-04-20T01:44:31Z","content_type":null,"content_length":"8365","record_id":"<urn:uuid:df980072-3cef-4f2b-a4fd-ab6a06fe99e6>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00239-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mia Shores, FL Algebra 2 Tutor
Find a Mia Shores, FL Algebra 2 Tutor
...Sincerely, RocioI would like to be certified in ESL/ESLO because I have been teaching ESL for adults for two years at Stony Point High School. I have also taken the training with Literacy of
Austin group. Teaching English to adults has been a rewarding and interesting experience and allowed me to expand my teaching methods and help the students to reach their potential in a second
16 Subjects: including algebra 2, chemistry, Spanish, geometry
...Over the past decade, I've worked with several dozen students as a private tutor, mentor, and learning coach. While much to most of the work I do with my students is substantive and academic
(test prep, homework, re-teaching and clarifying concepts, academic content skill-building), at least som...
61 Subjects: including algebra 2, English, Spanish, reading
...I have been teaching English for many years to people coming from other countries. English is an amazing language and you will feel great once you are able to walk around and communicate with
everyone. I have been working with Adobe Photoshop for many years, and tutoring it is one of my favorite things.
23 Subjects: including algebra 2, English, reading, algebra 1
I began working as a tutor in High School as part of the Math Club, and then continued in college in a part time position, where I helped students in College Algebra, Statistics, Calculus and
Programming. After college I moved to Spain where I gave private test prep lessons to high school students ...
11 Subjects: including algebra 2, calculus, physics, geometry
I am a sophomore college student who is currently finishing up this spring semester to earn an Associate's degree in Business. I currently go to the Honors College at Miami Dade College where I
maintain an A average. I have been known to be an outstanding student who is always seeking to make a difference.
16 Subjects: including algebra 2, reading, calculus, geometry
Related Mia Shores, FL Tutors
Mia Shores, FL Accounting Tutors
Mia Shores, FL ACT Tutors
Mia Shores, FL Algebra Tutors
Mia Shores, FL Algebra 2 Tutors
Mia Shores, FL Calculus Tutors
Mia Shores, FL Geometry Tutors
Mia Shores, FL Math Tutors
Mia Shores, FL Prealgebra Tutors
Mia Shores, FL Precalculus Tutors
Mia Shores, FL SAT Tutors
Mia Shores, FL SAT Math Tutors
Mia Shores, FL Science Tutors
Mia Shores, FL Statistics Tutors
Mia Shores, FL Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Bay Harbor Islands, FL algebra 2 Tutors
Biscayne Park, FL algebra 2 Tutors
El Portal, FL algebra 2 Tutors
Hialeah Lakes, FL algebra 2 Tutors
Indian Creek Village, FL algebra 2 Tutors
Key Biscayne algebra 2 Tutors
Maimi, OK algebra 2 Tutors
Miami Gardens, FL algebra 2 Tutors
Miami Shores, FL algebra 2 Tutors
North Bay Village, FL algebra 2 Tutors
North Miami, FL algebra 2 Tutors
Opa Locka algebra 2 Tutors
Sunny Isles Beach, FL algebra 2 Tutors
Surfside, FL algebra 2 Tutors
West Park, FL algebra 2 Tutors | {"url":"http://www.purplemath.com/Mia_Shores_FL_algebra_2_tutors.php","timestamp":"2014-04-17T21:36:48Z","content_type":null,"content_length":"24533","record_id":"<urn:uuid:b4e2ea82-2953-4e05-940f-d6ac4eb0b1e8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00109-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: January 2000 [00200]
Re: Flat, OneIdentity attributes
• To: mathgroup at smc.vnet.net
• Subject: [mg21633] Re: [mg21600] Flat, OneIdentity attributes
• From: BobHanlon at aol.com
• Date: Tue, 18 Jan 2000 02:35:18 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com
Attributes[f] = {Flat};
f[2] /. f[n_Integer] :> n + 10
12
I cannot explain this behavior. Further, this is even more unusual
Attributes[f] = {Flat};
f[2] /. f[n_] :> n + 10
f[2] + 10
Mathematica appears to interpret the first case as
f[2]/.f[n_Integer]:>(n+10) and the second case as (f[2]/.f[n_]:>n)+10
Bob Hanlon
In a message dated 1/17/2000 12:15:24 AM, ErsekTR at navair.navy.mil writes:
>For the most part I understand how Flat and OneIdentity are related and
>demonstrate this using Version 4 in the examples below.
>In the first example (f) has the attributes Flat and OneIdentity.
>The pattern matcher treats f[1,2,3] as f[1,f[2,3]] then uses the
>replacement rule and {1,{2,3}} is returned.
>In the next example the only attribute (f) has is Flat.
>In this case the pattern matcher treats f[1,2,3] as
>f[f[1],f[f[2],f[3]]] then uses the replacement rule and
>{f[1],{f[2],f[3]}} is returned.
>With OneIdentity the pattern matcher doesn't wrap (f) around a single argument
>when it tries different ways of nesting (f).
>In the next example (f) has the attributes Flat, OneIdentity and the rule
>For reasons I can't understand the rule isn't used in the next example.
>Can anyone explain why? | {"url":"http://forums.wolfram.com/mathgroup/archive/2000/Jan/msg00200.html","timestamp":"2014-04-17T15:37:26Z","content_type":null,"content_length":"36169","record_id":"<urn:uuid:dcd38704-94dc-4fed-b240-ba96ce8bef41>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
Placement of the Negative Sign in a Negative Mixed Number
Date: 10/25/2004 at 16:35:09
From: Julia and Sam
Subject: Negative Fractions
When you have a negative improper fraction, does the negative sign
automatically distribute throughout your fraction? For instance, is
-1 3/5 the same as 1 -3/5 and 1 3/-5? We have an intuitive sense that
the fractions are equal, but we are having trouble proving it.
If this is true, why is it the case? Your help is much appreciated!
Julia and Sam
Date: 10/25/2004 at 22:51:24
From: Doctor Peterson
Subject: Re: Negative Fractions
Hi, Julia.
Mixed numbers don't play well with negative signs, or with algebraic
notation in general. When we write 1 3/5, we really mean (1 + 3/5),
and the negative -1 3/5 means -(1 + 3/5), which distributes as
-1 + -3/5
-1 3/5 = -(1 + 3/5) = -8/5
1 -3/5 = 1 - (3/5) = 2/5
1 3/-5 = (1)(3/-5) = -3/5
Or at least that's how I would interpret each expression if I came
across it out of context! Only the first contains cues that strongly
suggest a mixed number is intended, since we never write mixed numbers
with negative fractional parts.
Generally, it's better to use improper fractions instead of mixed
numbers in algebra, to avoid not only that source of confusion, but
also something like 1 3/5 x, where the first space means addition, the
second means multiplication, and parentheses have to be supplied.
If you have any further questions, feel free to write back.
- Doctor Peterson, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/66778.html","timestamp":"2014-04-18T01:01:51Z","content_type":null,"content_length":"6642","record_id":"<urn:uuid:28d44086-b79e-40af-b7f6-0501aef7beae>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00447-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: ON THE SPECTRUM AND LYAPUNOV EXPONENT OF LIMIT PERIODIC SCHRÖDINGER OPERATORS
Abstract. We exhibit a dense set of limit periodic potentials for which the
corresponding one-dimensional Schrödinger operator has a positive Lyapunov
exponent for all energies and a spectrum of zero Lebesgue measure. No ex-
ample with those properties was previously known, even in the larger class of
ergodic potentials. We also conclude that the generic limit periodic potential
has a spectrum of zero Lebesgue measure.
1. Introduction
This work is motivated by a question in the theory of one-dimensional ergodic
Schrödinger operators. Those are bounded self-adjoint operators on $\ell^2(\mathbb{Z})$ given by
(1.1)   $(Hu)_n = u_{n+1} + u_{n-1} + v(f^n x) u_n$,
where f : X → X is an invertible measurable transformation preserving an ergodic probability measure µ and v : X → R is a bounded measurable function, called the potential.
One is interested in the behavior for µ-almost every x. In this case, the spectrum
is µ-almost surely independent of x. The Lyapunov exponent is defined as | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/464/2838170.html","timestamp":"2014-04-19T20:01:05Z","content_type":null,"content_length":"8130","record_id":"<urn:uuid:31169bd0-982c-44d8-ad09-49955c39cf23>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00334-ip-10-147-4-33.ec2.internal.warc.gz"} |
Services on Demand
Related links
Computational & Applied Mathematics
On-line version ISSN 1807-0302
DEHGHAN, Mehdi and HAJARIAN, Masoud. Two iterative algorithms for solving coupled matrix equations over reflexive and anti-reflexive matrices. Comput. Appl. Math. [online]. 2012, vol.31, n.2, pp.
353-371. ISSN 1807-0302. http://dx.doi.org/10.1590/S1807-03022012000200008.
An n × n real matrix P is said to be a generalized reflection matrix if P^T = P and P^2 = I (where P^T is the transpose of P). A matrix A ∈ R^n×n is said to be a reflexive (anti-reflexive) matrix
with respect to the generalized reflection matrix P if A = P A P (A = - P A P). The reflexive and anti-reflexive matrices have wide applications in many fields. In this article, two iterative
algorithms are proposed to solve the coupled matrix equations A[1]XB[1] + C[1]X^T D[1] = M[1], A[2]XB[2] + C[2]X^T D[2] = M[2] over reflexive and anti-reflexive matrices, respectively. We prove that
the first (second) algorithm converges to the reflexive (anti-reflexive) solution of the coupled matrix equations for any initial reflexive (anti-reflexive) matrix. Finally two numerical examples are
used to illustrate the efficiency of the proposed algorithms. Mathematical subject classification: 15A06, 15A24, 65F15, 65F20.
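As a concrete illustration of these definitions (the 2×2 matrices below are arbitrary examples, not ones from the paper), any real matrix splits into a reflexive part and an anti-reflexive part with respect to a given P:

```python
# P is a generalized reflection matrix (P = P^T, P^2 = I); the two
# halves of B satisfy P A P = A and P A P = -A respectively.
import numpy as np

P = np.array([[0., 1.], [1., 0.]])
assert np.allclose(P, P.T) and np.allclose(P @ P, np.eye(2))

B = np.array([[1., 2.], [3., 4.]])
A_ref  = (B + P @ B @ P) / 2          # reflexive part:      P A P =  A
A_anti = (B - P @ B @ P) / 2          # anti-reflexive part: P A P = -A

assert np.allclose(P @ A_ref  @ P,  A_ref)
assert np.allclose(P @ A_anti @ P, -A_anti)
assert np.allclose(A_ref + A_anti, B)
```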
Keywords : iterative algorithm; matrix equation; reflexive matrix; anti-reflexive matrix. | {"url":"http://www.scielo.br/scielo.php?script=sci_abstract&pid=S1807-03022012000200008&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-20T18:46:08Z","content_type":null,"content_length":"18145","record_id":"<urn:uuid:3a8ca3ed-8021-4b19-9b70-0ce8899d2f43>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00231-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algorithmic Number Theory Symposium (ANTS) is an academic conference. Since their inception at Cornell in 1994, the biennial ANTS meetings have become the premier international forums for the presentation of new research in computational number theory. They are devoted to algorithmic aspects of number theory, including elementary number theory, algebraic number theory, analytic number theory, geometry of numbers, algebraic geometry, finite fields, and cryptography.
Current events
ANTS IX will be held in Nancy, France, in 2010.
Selfridge Prize
In honour of the many contributions of John Selfridge to mathematics, the Number Theory Foundation has established a prize to be awarded to those individuals who have authored the best paper accepted for presentation at ANTS. The prize, called the Selfridge Prize, will normally be awarded every two years in an even numbered year. The prize winner(s) will receive a cash award and a certificate. The successful paper will be selected by the ANTS Program Committee.
The Selfridge Prize at the ANTS VII meeting was awarded to Werner Bley and Robert Boltje for their paper Computation of locally free class groups. The Prize at ANTS VIII was awarded to Juliana
Belding, Reinier Bröker, Andreas Enge and Kristin Lauter for their paper Computing Hilbert Class Polynomials.
The refereed proceedings of ANTS are published in the Lecture Notes in Computer Science. The Lecture Notes in Computer Science are now also published electronically.
ANTS VIII (2008)
Dates: 17 - 22 May 2008
Location: Banff Centre (Alberta, Canada)
Organizers: Mark Bauer (University of Calgary), Josh Holden (Rose-Hulman Institute of Technology), Mike Jacobson (University of Calgary), Renate Scheidler (University of Calgary) and Jon Sorenson
(Butler University)
Proceedings: LNCS 5011
Web site: http://ants.math.ucalgary.ca/
ANTS VII (2006)
Dates: 23 - 28 July 2006
Location: Technische Universität Berlin (Berlin, Germany)
Organizers: Florian Heß, Sebastian Pauli, Michael Pohst
Proceedings: LNCS 4076
Web site: http://www.math.tu-berlin.de/~kant/ants/
ANTS VI (2004)
Dates: 13 - 18 June 2004
Location: University of Vermont (Burlington, Vermont, USA)
Organizers: Duncan Buell, Jonathan W. Sands, David S. Dummit
Proceedings: LNCS 3076; Poster Abstracts
Web site: http://web.ew.usna.edu/~ants/
ANTS V (2002)
Dates: 7-12 July 2002
Location: University of Sydney (Sydney, Australia)
Organizers: John Cannon, Claus Fieker, David Kohel
Proceedings: LNCS 2369
Web site: http://magma.maths.usyd.edu.au/antsv/index.html
ANTS IV (2000)
Dates: 2-7 July 2000
Location: University of Leiden (Leiden, Netherlands)
Organizers: Peter Stevenhagen, Wieb Bosma
Proceedings: LNCS 1838
Web site: http://www.math.leidenuniv.nl/~desmit/ants4/
ANTS III (1998)
Dates: 21-25 June 1998
Location: Reed College (Portland, Oregon, USA)
Organizer: Joe Buhler
Proceedings: LNCS 1423
Web site: http://www.reed.edu/ants/
ANTS II (1996)
Dates: 18-23 May 1996
Location: University of Bordeaux (Bordeaux, France)
Organizers: Henri Cohen, Michel Olivier
Proceedings: LNCS 1122
ANTS I (1994)
Dates: 6-9 May 1994
Location: Cornell University (Ithaca, New York, USA)
Organizers: Len Adleman, Ming-Deh Huang
Proceedings: LNCS 877 (out of print) | {"url":"http://www.reference.com/browse/Algorithmic+number+theory","timestamp":"2014-04-18T07:31:10Z","content_type":null,"content_length":"90759","record_id":"<urn:uuid:d1aacfee-cec0-4aae-be19-652f7c019f59>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00033-ip-10-147-4-33.ec2.internal.warc.gz"} |
On December 12, 2010, I graduated from the University of Cincinnati with a degree in mathematics.
I'd come to define myself, in part, in terms of this studenthood. Between first registration and final grade tally, I married once twice and divorced once. I saw two daughters come into the world,
and one leave it. I bought a house, divested myself of that house, and bought another. I accepted the job I've had for a decade, and which I've spent the last few hoping to leave. Always, actively or
in that part of my forebrain where ego and regret collide, I was a student.
And now I'm not. Now I'm "done." I expect my diploma in the mail soon, and I will hang it in our office proudly. Though...though it'll represent not so much a goal reached, but the closing of a
personal epoch. An end to a struggle I never fully joined. A curtain call for the doppelgänger, the ostensible student. And a beginning of my real education.
Not that I haven't learned anything; that'd be a bit melodramatic and glib. I've learned that the Stewart calculus text is a credible device for (a) teaching various recipes for mathematical cooking,
e.g., L'Hopital's Rule; and for (b) scaring students into believing they're "bad at math."
I've learned enough of Riemann and Cauchy to have a first-order approximation of how much yet there is to know. I've learned some of the basic lexicon of mathematics, of continuity and metric spaces
, of normal subgroups and homomorphisms. With a little review, I could even do something with them.
Which is to say, I've also learned what it is to truly study a subject. Well, actually, I suppose I can't claim that: a recursively enumerable set isn't necessarily a recursive set, which is to say
that my knowing that I did not truly study a subject doesn't confer to me knowledge of what it is to study it. Another first-order approximation, then. But useful nonetheless.
Now there is but to finally carry the mantel of student, freed from the illusion that completing repetitive problem sets on line integrals or expectation values actually constitutes more than a cheap
rip-off of mathematics. What I've gleaned from this at-once trivial and enlightening realization applies not only to my development, but also to that of my children. What I've gleaned is that, even
if these cobblestone streets were paved with good intentions, what we generally refer to as "education" is at its very best vocational training, and is at its very worst a commoditizing of our time
and attention, of our will to power.
I failed as a student in more ways than I have the patience to note at the moment. Among this litany is my failure to form any academic community. I'm not sure how much I missed out on, but I bet
it's substantial. I'm a product of my passive-aggressive heuristic for dealing with people, characterized by idolizing my antisocial tendencies and then lamenting my lack of human connection. This
failure as a student is my failure as a person, notwithstanding the logistical challenges a parent-student faces.
I'm officially done with this particular rant. I've really drained myself by alternating self-congratulation and "Woe is me." I created what Carl Rogers might call a self-image at odds with reality:
every Einstein quote, every time I watched Good Will Hunting, every browsing of Carl Friedrich Gauss' or Terence Tao's Wikipedia pages, contributed to the fantasy that I would, or more importantly,
should, be similarly prodigious and capable. My particular knot of neurons and ganglia translated that information into a proscription against hard work, as if I shouldn't need to work hard, as if
instead I should need only to surround myself with the trappings of genius to free genius. And that, that's the most important thing I've learned.
My most ambitious academic product is stored here (LaTeX file here). It's an interesting, short paper I all but transcribed from my faculty advisor's notes. I did learn a bit about computability, and
LaTeX, and I enjoyed the process. Now I'm moving on, perhaps to study mathematics. | {"url":"http://erectlocution.com/2011/01/23/on-december-12-2010-i-graduated-from-the-university-of-cincinnati-with-a-degree-in-mathematics.html","timestamp":"2014-04-24T16:17:43Z","content_type":null,"content_length":"11856","record_id":"<urn:uuid:d5595485-3382-4d08-bbf5-e5af0cf53d7a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00482-ip-10-147-4-33.ec2.internal.warc.gz"} |
Two wheels of moment of inertia 4 rotate side by side @ of 120rev/min and 240rev/min resp. in d opp directions.If now both d wheels are coupled by means of weightless shaft so that both d wheels now
rotate with the common angular speed,find d new speed of rotation..
assuming angular momentum is conserved, then we first calculate the total angular momentum from the initial conditions (L - angular momentum, I - moment of inertia, w - angular velocity: L=Iw).
after attachment both discs rotate with the same angular velocity; this because the rod attaching them forces them to rotate in unison. because they have the same moment of inertia (I) and the
same angular velocity (w), they will also have equal angular momentum. this means they split the total in half! one final note: angular velocity is a measurement of rotations per second. thus, we
convert the given values: w1=120/60=2 and w2=240/60=4. \[L = Iw\]\[L = 4(2) + 4(4) = 24\] each disc rotates with angular momentum of 12 after the attachment. rearranging: \[w=L/I=12/4=3\] whatcha
{"url":"http://openstudy.com/updates/52446af3e4b0838bd4496b15","timestamp":"2014-04-17T22:07:55Z","content_type":null,"content_length":"28593","record_id":"<urn:uuid:b31f2365-f9b9-4404-a32e-a6c6ce5f2b8e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Say It With Science
The Stern-Gerlach Experiment
In 1922 at the University of Frankfurt in Frankfurt, Germany, Otto Stern and Walther Gerlach sent a beam of silver atoms through an inhomogeneous magnetic field in their experimental device. They
were taking a look at the new concept of quantized spin angular momentum. If indeed the spin associated with particles could only take on two or some other countable number of states, then the atoms
transmitted through the other end of their machine should come out as two (or more) concentrated beams. Meanwhile if the quantum theory was wrong, classical physics predicted that the profile of a
single smeared-out beam would result on the detector screen, due to the magnetic field deflecting each randomly spin-oriented atom a different amount on a continuous, rather than discrete, scale.
As you can see above, the results of the Stern-Gerlach experiment confirmed the quantization of spin for elementary particles.
Spin and quantum states
A spin-1/2 particle actually corresponds to a qubit
|ψ> = c[1]|ψ[↑]> + c[2]|ψ[↓]>
a wavefunction representing a particle whose quantum state can be seen as the superposition (or linear combination) of two pure states, one for each kind of possible spin along a chosen axis (such as
x, y or z). The silver atoms of Stern and Gerlach’s experiment fit in this description because they are made of spin-1/2 particles (electrons and quarks, which make up protons and neutrons).
Significantly, the constant coefficients c₁ and c₂ are complex and can't be directly measured. But the squared moduli |c₁|² and |c₂|² of these coefficients represent the probability
that a particle in state |ψ⟩ will be observed as spin up or down at the detector.
|c₁|² + |c₂|² = 1: it is certain that the particle will be detected in one of the two spin states.
That means when we pass a large sample of particles in identical quantum states through a Stern-Gerlach (S-G) machine and detector, we are actually measuring the probabilities that the particle will
adopt the spin up or spin down states along the particular axis of the S-G machine. This follows the relative-frequency interpretation of probability, where as the number of identical trials grows
large the relative frequency of an event approaches the true probability that the event will occur in any one trial.
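A quick numerical sketch of that relative-frequency idea (my illustration, not part of the original post; the value of |c₁|² is assumed purely for the example):

    #include <cstdio>
    #include <random>

    // Simulate many identically prepared spin-1/2 particles passing through an
    // S-G machine; the fraction measured "up" should approach |c1|^2.
    int main()
    {
        const double pUp = 0.7;                 // assumed |c1|^2 for illustration
        std::mt19937 rng(42);
        std::bernoulli_distribution spinUp(pUp);

        int ups = 0;
        const int trials = 100000;
        for (int i = 0; i < trials; i++)
            if (spinUp(rng)) ups++;             // one detection event per particle

        std::printf("relative frequency of up: %.4f (true probability %.4f)\n",
                    (double)ups / trials, pUp);
        return 0;
    }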
By moving the screen so that either the up or down beam is allowed to pass while the other is stopped at the screen, we are “polarizing” the beam to a certain spin orientation along the S-G
machine axis. We can then place one or more S-G machines with stops in front of that beam and reproduce all the experiments analogous to linear polarization of light. | {"url":"http://sayitwithscience.tumblr.com/tagged/quantum","timestamp":"2014-04-17T12:38:29Z","content_type":null,"content_length":"59532","record_id":"<urn:uuid:f8e40ac0-d022-43be-a886-dfefb4200332>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
two tailed hypothesis for difference in proportion mean
hello. i have been doing some problems with hypothesis testing of proportions. i now have a real-world problem and want to make sure i set it up correctly. say i have 2 observed proportions:
s1: 100 tests, 25 positives
s2: 100 tests, 23 positives
i'd like to answer 1 question:
what's the probability that the population from which s1 is drawn has a greater mean than the mean of the population from which s2 is drawn?
here's how i set it up and ran it:
first i ran a sequence of two-tailed proportion tests starting at a significance level of 0.05 and ending at 0.95. i found that they are different at the 0.75 level of significance (not very
significant, i understand). the main question to start is: does this mean we are about 25% confident that the means of the 2 populations are different from each other? (the null hypothesis having
been that the population means are equal). so, call this significance level L1.
then, my next test was to run a one-tailed test. the null hypothesis of this one is that s2's population mean is >= s1's. this i find we can reject the null hypothesis at the 0.4 level. can we call
this effectively a 60% probability?
then, use a conditional probability statement to answer the question in full - there is a 25% probability they're different, and given the condition they are different, there is a 60% probability
s1's population is greater, the answer to the overall statement is: the probability s1's population's mean is greater than s2's is 24%.
am i thinking about this right, or mixing concepts up by using the significance level as a proxy for probability within a conditional probability problem???
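As a point of comparison (my sketch, not the poster's significance-sweep procedure), the textbook pooled two-proportion z test on these numbers looks like this:

    #include <cstdio>
    #include <cmath>

    int main()
    {
        double x1 = 25, n1 = 100, x2 = 23, n2 = 100;        // the data from the post
        double p1 = x1 / n1, p2 = x2 / n2;
        double p  = (x1 + x2) / (n1 + n2);                  // pooled proportion
        double se = std::sqrt(p * (1 - p) * (1 / n1 + 1 / n2)); // pooled standard error
        double z  = (p1 - p2) / se;                         // here z is about 0.33
        std::printf("z = %f\n", z);
        return 0;
    }

a z of about 0.33 corresponds to a two-tailed p-value of roughly 0.74, which matches the "different only at the 0.75 level" observation above.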
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/advanced-statistics/113809-two-tailed-hypothesis-difference-proportion-mean.html","timestamp":"2014-04-19T19:54:59Z","content_type":null,"content_length":"31009","record_id":"<urn:uuid:65d64e3c-28e4-4c30-88ae-b70416c2ffce>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00476-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help reconstructing pixel position from depth
Hi everyone,
I'm making changes to my deferred system. I want to delete the position texture (GL_RGB32F is too expensive), so in the light pass I have to reconstruct the pixel position from depth.
I tried for a few days without luck, so now I'm somewhat frustrated.
What works (old position texture):
Geometry pass - Vertex:
Code :
vsPosition = ( ModelViewMatrix * vec4(in_Position, 1.0) ).xyz;
gl_Position = MVP * vec4(in_Position, 1.0);
Geometry pass - Fragment:
This is how I store the old pixel positions in the geometry pass.
So next is to figure out how to get this without the texture. I tried MANY things that I found around the net (without luck...); what I have now is:
Light pass - fragment:
Code :
vec2 calcTexCoord()
{
    return gl_FragCoord.xy / ScreenSize;
}

vec3 positionFromDepth()
{
    vec2 sp = calcTexCoord();
    float depth = texture2D(Texture4, sp).x * 2.0 - 1.0;
    vec4 pos = InvProjectionMatrix * vec4( sp*2.0-1.0, depth, 1.0);
    return pos.xyz/pos.w;
}
vec3 computePhongPointLight()
{
    vec3 color = vec3(0.0);
    vec2 texCoord = calcTexCoord();
    //vec3 position = texture2D( Texture0, texCoord ).xyz;
    vec3 position = positionFromDepth();
    vec3 difColor = texture2D( Texture1, texCoord ).xyz;
    vec3 specColor = texture2D( Texture2, texCoord ).xyz;
    vec3 normColor = texture2D( Texture3, texCoord ).xyz;

    vec3 lightDir = Light.position.xyz - position;
    vec3 lightDirNorm = normalize( lightDir );
    float sDotN = max( dot(lightDirNorm, normColor), 0.0);

    float att = 1.0;
    float distSqr = dot(lightDir, lightDir);
    float invAtt = (Light.constantAtt +
                    (Light.linearAtt*sqrt(distSqr)) +
                    (Light.quadraticAtt*distSqr));
    att = 0.0;
    if (invAtt != 0.0)
        att = 1.0/invAtt;

    vec3 diffuse = difColor.rgb * Light.diffuse * sDotN;
    vec3 ambient = difColor.rgb * Light.ambient; // Cheat here

    vec3 vertexToEye = normalize(position);
    vec3 r = normalize(reflect(lightDirNorm, normColor));
    // SpecularPower
    vec3 specular = vec3(0.0);
    if ( sDotN > 0.0 )
        specular = Light.specular.rgb *
                   specColor *
                   pow( max( dot(vertexToEye, r), 0.0), 60.0 ); // Change specular here!!!! value 60 must be an uniform

    return (diffuse + specular + ambient)*att;
}
Where Texture0 is old position texture, Texture4 is depth texture, InvProjectionMatrix is the inverse projection matrix, and Light.position is computed as ViewMatrix * light_position.
I did some debugging and output the absolute difference:
Code :
vec3 pos1 = positionFromDepth();
vec3 pos2 = texture2D(Texture0, calcTexCoord()).xyz;
fragColor = vec4(abs(pos2.x-pos1.x), abs(pos2.y-pos1.y), abs(pos2.z-pos1.z), 1.);
The output is all black except for empty zones, which are white.
I think it's a space error, something like: the light position is in view space and the pixel position is in another space, so "vec3 lightDir = Light.position - position" gives bad results. But I can't
figure out what's happening, or how to solve it...
I will really appreciate any help (because I don't know what more to do), and I hope that someone with better math can help me.
Sorry for my bad English and thanks in advance.
See this active thread, and the solution I suggest:
* Deferred shader for depth, normals, and position
One thing I notice about your approach that seems strange is that you are feeding NDC-SPACE positions into the inverse PROJECTION matrix, instead of feeding in CLIP-SPACE positions. And then
you're doing a perspective divide "after" getting back into EYE-SPACE (?) Gut says this is not equivalent, but haven't cranked through anything on paper.
Yes, I don't really understand what's going on, so I just searched documentation and tested things. I'm really lost.
You are suggesting that I have to replace the position texture with another depth texture (with linear depth), so the render target will now have two depth textures, right?
Is it not possible to get it with a normal depth texture, or is it just hard?
Thanks for your time
PS: I was trying what you said. CLIP-SPACE is without the (*2.0 - 1.0) part, right?
With something like this:
Code :
vec3 positionFromDepth()
{
    vec2 sp = calcTexCoord();
    float depth = texture2D(Texture4, sp).x;
    vec4 pos = InvProjectionMatrix * vec4( sp, depth, 1.0);
    return pos.xyz;
}
Will I have the position in view space? I'm trying to undo the steps made in the old system.
What I have in my head is:
1- Get coordinates for the texture lookup
2- Retrieve depth
3- Create a point in projection space
4- Transform it to view space using the inverse projection matrix
5- Use it
Something (or many things) is wrong with this, because it doesn't work.
Last edited by Junky; 11-27-2012 at 07:58 PM. Reason: Test
I tried your PositionFromDepth_DarkPhoton function without success.
What I get is:
When the camera gets close to the light, the whole scene gets illuminated; when I move far from the light, the whole scene gets darker.
I don't understand why, because if I'm not wrong, the pixel position from your function and the light position are both in eye space.
Light position is calculated in c++ as: ViewMatrix * world_position_light.
You'll get it. Just go slow and make small incremental changes, backing up when you get unexpected results from the last micro-change.
Here's a good pictorial reference for the space transformations:
* OpenGL Transformation (Song Ho Ahn)
You are suggesting that I have to replace the position texture with another depth texture (with linear depth)
No, not with linear depth (such as EYE-SPACE depth). Instead, with a standard WINDOW-SPACE depth texture. You get this by default by just rendering your depth information to a DEPTH or
DEPTH_STENCIL texture (e.g. GL_DEPTH_COMPONENT24, GL_DEPTH24_STENCIL8, etc.) by attaching one of these textures to the depth attachment of your FBO and rendering like normal.
...so now the render target will have two depth textures, right?
No, just one. And it's a single channel (e.g. GL_DEPTH_COMPONENT24).
It's not possible to get with normal depth texture or it's just hard?
I'm suggesting you just use a normal depth texture.
PS: I was trying what you said. CLIP-SPACE is without the (*2.0 - 1.0) part, right?
The x*2.0-1.0 converts a 0..1 value (e.g. WINDOW-SPACE) back into a -1..1 value (e.g. NDC-SPACE). CLIP-SPACE is one more space back from NDC which is before the perspective divide. See the
diagram in the link above for details.
With something like this:
Code glsl:
vec3 positionFromDepth()
{
    vec2 sp = calcTexCoord();
    float depth = texture2D(Texture4, sp).x;
    vec4 pos = InvProjectionMatrix * vec4( sp, depth, 1.0);
    return pos.xyz;
}
Problems here are that your sp and depth most likely are 0..1 values (WINDOW-SPACE) while what you would need to feed into the inverse projection matrix should be CLIP-SPACE.
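As a cross-check of the algebra (my sketch, using GLM since it comes up later in the thread): feeding the NDC point into the inverse projection with w = 1 and dividing by the resulting w afterwards is equivalent to un-projecting the true clip-space point, because clip = w_clip * (ndc, 1) and the matrix is linear.

    #include <glm/glm.hpp>

    // uv and depth are window-space values in [0,1]
    glm::vec3 eyeFromWindowDepth(glm::vec2 uv, float depth, const glm::mat4 &invProj)
    {
        glm::vec4 ndc(uv * 2.0f - 1.0f, depth * 2.0f - 1.0f, 1.0f); // window -> NDC
        glm::vec4 eye = invProj * ndc;   // eye-space position scaled by 1/w_clip
        return glm::vec3(eye) / eye.w;   // perspective divide recovers eye space
    }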
Did you apply it to a standard WINDOW-SPACE depth value, as written by the pipeline to a standard depth attachment?
If so, show more code and tell me what you did. A short stand-alone GLUT test program would be even better.
Also, it can be very helpful for debugging to have a debug screen that displays a linearized depth value, where 0 (black) = near clip plane and 1 (white) = far clip plane. You can use the
function I mentioned to produce an EYE-SPACE depth value. Then map -near..-far to 0..1 to get black...white.
...and light position are in eye space. Light position is calculated in c++ as: ViewMatrix * world_position_light.
A suggestion: you've got too many fish in the air right now. I'd forget the lighting and light position, and just render the linearized depth value on-screen for each pixel (black..white meaning
Z=-near..-far) and get that 100% perfect before you throw lighting into this.
First of all, thanks for your time and your patience!
Okay, I will try to explain step by step what I have right now.
1 - Depth texture attached to an FBO. This depth texture has format GL_DEPTH_COMPONENT, internal format GL_DEPTH24_STENCIL8 and type GL_FLOAT.
It also has GL_TEXTURE_COMPARE_MODE at the default (I think it's GL_NONE), so I can read from it.
The depth and the stencil work just fine, because I use them all the time without problems (for writing, at least).
2- The old position texture that works (GL_RGB32F) was computed as:
ViewMatrix * in_Position, where in_Position is the input vertex.
ViewMatrix is computed as inverse(camera_full_transform). I'm using the GLM library.
So I write pixel positions in view space, right?
3- Pixel position reconstruction (your function)
Code :
vec3 positionFromDepth()
{
    vec2 ndc;
    vec3 eye;
    vec2 sp = calcTexCoord();
    float depth = texture2D(Texture4, sp).x;
    eye.z = ClipDistance.x * ClipDistance.y / ((depth * (ClipDistance.y - ClipDistance.x)) - ClipDistance.y);
    ndc.x = ((gl_FragCoord.x / ScreenSize.x) - 0.5) * 2.0;
    ndc.y = ((gl_FragCoord.y / ScreenSize.y) - 0.5) * 2.0;
    eye.x = ( (-ndc.x * eye.z) * (Right-Left) / (2*ClipDistance.x)
              - eye.z * (Right+Left) / (2*ClipDistance.x) );
    eye.y = ( (-ndc.y * eye.z) * (Top-Bottom) / (2*ClipDistance.x)
              - eye.z * (Top+Bottom) / (2*ClipDistance.x) );
    //eye.x = (-ndc.x * eye.z) * Right/ClipDistance.x;
    //eye.y = (-ndc.y * eye.z) * Top/ClipDistance.x;
    return eye;
}
I have gDEBugger, so I checked everything many times (uniforms, etc.).
4- The light position is passed as a uniform as: ViewMatrix * light_world_position, so again it is in view space.
For testing purposes I simplified my Phong light function to just attenuate by distance; here it is:
Code :
vec3 computePhongPointLight()
{
    vec3 color = vec3(0.0);
    vec2 texCoord = calcTexCoord();
    vec3 difColor = texture2D( Texture1, texCoord ).xyz;
    vec3 position_depth = positionFromDepth();
    vec3 position_texture = texture2D( Texture0, texCoord ).xyz;
    //vec3 position = position_texture; <--- WORKS!
    vec3 position = position_depth; // <-- DOESN'T WORK!
    vec3 lightDir = Light.position.xyz - position;
    float att = 1.0;
    float distSqr = dot(lightDir, lightDir);
    float invAtt = (Light.constantAtt +
                    (Light.linearAtt*sqrt(distSqr)) +
                    (Light.quadraticAtt*distSqr));
    att = 0.0;
    if (invAtt != 0.0)
        att = 1.0/invAtt;
    return difColor*att;
}
What I test here?
a) position = position_texture; <---- WORKS
b) position = position_depth; <--- FAILS
c) position = vec3(position_texture.x, position_texture.y, position_depth.z); <---- FAILS
d) position = vec3(position_depth.x, position_depth.y position_texture.z); <------ FAILS
I also tested displaying the depth (to visualize it) and the old position texture on a full-screen quad.
fragColor = vec4(position_depth.x, position_depth.y, position_depth.z, 1.0);
fragColor = vec4(position_texture.x, position_texture.y, position_texture.z, 1.0);
fragColor = vec4(abs(position_depth.x), abs(position_depth.y), abs(position_depth.z), 1.0);
fragColor = vec4(abs(position_texture.x), abs(position_texture.y), abs(position_texture.z), 1.0);
Notes: Nearly identical output for both. They differ on empty spaces: the texture method displays black, and the depth method displays white. So I suppose the first stores (0, 0, 0) for empty
spaces and the depth stores 1.0 for empty spaces/infinity?
Anyway, apart from empty spaces (sky), the output was the same.
I also tested the absolute difference of the two to see what's different, like:
fragColor = vec4(abs(position_depth.x-position_texture.x), abs(position_depth.y-position_texture.y), abs(position_depth.z-position_texture.z), 1.0);
And the output was:
So from what I can deduce, only the sky is shifted and the geometry must be the same. But no, because the illumination doesn't work...
(The thin white lines are for debugging AABBs; it's a post-deferred pass, which is why they are displayed.)
Apart from this I tested many things, but nothing made sense.
I really appreciate your help, and if you need any more information (states, uniforms, anything) please ask.
I cannot post the program because it is ~30,000 lines long...
Last edited by Junky; 11-28-2012 at 01:42 PM.
I think I found my problem!!!!!
Here goes:
When I compute lighting, first I do a stencil test pass and then I perform the light draw pass (for spot and point lights). The problem is that you cannot read from the depth texture while the stencil
test is enabled and the depth_stencil texture is bound for the stencil test. I think this is why the quad mesh tests output the correct results, but the values were wrong when I did the light
computation.
Question: I have to delete the stencil test optimization, right?
Finally, after all these days figuring out what was wrong... I was going mad because I thought my math was really, really bad (it is just bad).
Thanks very much, Dark Photon, for your help. I really appreciate it!
Out of my happiness I decided to compress normal values following http://aras-p.info/texts/CompactNormalStorage.html. To my surprise, it was really easy.
Last edited by Junky; 11-28-2012 at 05:52 PM.
That's possible. Reading depth while testing (aka reading) stencil might do it. Though reading depth while "writing" stencil (which you're probably also doing) is also likely to cause problems.
The spec does specify undefined behavior in terms of "textures" bound as render targets and shader inputs rather than as specific attachments, which suggests that your supposition is correct.
Question: I have to delete the stencil test optimization, right?
Would suggest you make sure this is the problem first. Establishing this is easy: after rendering your opaque depth buffer, just glBlitFramebuffer a copy of it over to another texture with the
same res/format, and feed that copy into your shader sampler input, using the existing one as the depth/stencil attachment of your FBO. Then do your lighting passes (with stencil magic). If the
problem goes away, then that's probably it.
Rasterizing another G-buffer channel for a copy of depth is another option, though likely more expensive.
But re stencil-test optimization, let's take a step back. Consider that it might be cheaper just to throw light quads at the GPU rather than do a bunch of state changes and batches per light
rendered (possibly in general, depending on your scene/lighting/CPU/GPU, but especially with tile-based deferred). That allows you to batch a bunch of lights together and throw them at the GPU in
one batch (possibly with one-sided depth test), with no state changes in between (just uniform updates before each batch) -- very easy on the CPU and no pipeline bubbles. This is mentioned in a
number of places but for instance, see Deferred Rendering for Current and Future Rendering Pipelines (Lauritzen, SIGGRAPH 2010).
...also re your white vs. black for "background" areas. You need to clear your depth buffer for the main depth buffer (gets cleared to FAR = 1.0 = white). If you rasterize a separate depth
channel in the G-buffer, then you need to clear it to the FAR value as well OR ensure that all fragments on the G-buffer (that you care about) are overwritten so there are no leftover "trash"
Last edited by Dark Photon; 11-28-2012 at 07:42 PM.
Would suggest you make sure this is the problem first. Establishing this is easy: after rendering your opaque depth buffer, just glBlitFramebuffer a copy of it over to another texture with the
same res/format, and feed that copy into your shader sampler input, using the existing one as the depth/stencil attachment of your FBO. Then do your lighting passes (with stencil magic). If the
problem goes away, then that's probably it.
I didn't do a really in-depth test to be 100% sure of what I said, but I'm 99.9% sure. Reading from depth while also reading from stencil is what fails. I think it's because you have to bind the
depth texture as a render target (for the stencil test) and as an input texture (to read the depth values). I suppose reading from depth and writing to stencil also fails.
I'm using Fedora 17 x64 with proprietary NVIDIA drivers (up to date).
...also re your white vs. black for "background" areas. You need to clear your depth buffer for the main depth buffer (gets cleared to FAR = 1.0 = white). If you rasterize a separate depth
channel in the G-buffer, then you need to clear it to the FAR value as well OR ensure that all fragments on the G-buffer (that you care about) are overwritten so there are no leftover "trash"
The black sky was for the position texture because I clear it with (0, 0, 0, 0); the depth buffer is cleared OK.
Anyway, I deleted the stencil test. What I did (as a replacement) was enable front-face culling, enable depth test, disable depth write, set the depth function to less-or-equal and render a cube. And
now it seems to work.
I have some more questions:
1) Do you know a better way to compute the point/spot light radius? Right now I just solve the quadratic equation for a given threshold. What threshold (now 1/16) is optimal?
Code :
void PointLight::updateRadius()
{
    // radius = ( -(th*l) +/-sqrt(D) ) / ( 2*(th*q) )
    // D = (th*l)² - 4*(th*q) * (th*c-dif)
    float32 th = 1.f/16.f; // 1/16
    //float32 th = 1.f/8.f;
    //float32 th = 1.f/12.f;
    //float32 th = 1.f/14.f;
    float32 dif = Core::max(Core::max(_diffuse.x, _diffuse.y), _diffuse.z);
    float32 D = (th*_linearAtt)*(th*_linearAtt) - 4*(th*_quadraticAtt) * (th*_constantAtt-dif);
    if (D < 0.f)
    {
        _radiusCache = 0.f;
        return;
    }
    float32 div = 2 * (th*_quadraticAtt);
    if (div == 0.f)
    {
        _radiusCache = 0.f;
        return;
    }
    float32 u = -(th*_linearAtt) / ( 2*th*_quadraticAtt);
    float32 v = sqrt(D) / (2*th*_quadraticAtt);
    if ( (u+v) > (u-v) )
        _radiusCache = u+v;
    else
        _radiusCache = u-v;
}
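For reference, the quadratic being solved above comes from asking at what radius the attenuated contribution falls to the threshold t (here 1/16), with diffuse intensity d and attenuation coefficients c, l, q:

\[
\frac{d}{c + l\,r + q\,r^2} = t
\;\Longrightarrow\;
(t q)\,r^2 + (t l)\,r + (t c - d) = 0
\;\Longrightarrow\;
r = \frac{-t l + \sqrt{(t l)^2 - 4\,t q\,(t c - d)}}{2\,t q}.
\]

The choice of t is a quality/cost trade-off rather than anything provably optimal: a smaller threshold gives a larger, safer radius; 1/16 just means the light is cut off once its contribution drops below 1/16 of its diffuse intensity.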
2) I'm interested in tile-based deferred rendering; have you got more information about this technique? (I'm interested in any optimized technique.)
Edit: Finally I think I will go with two depth buffers, because I need the stencil test for deferred decals.
Which format would be better: a single 32-bit floating-point texture, or encoding into an 8-bit-per-channel RGBA?
Last edited by Junky; 11-29-2012 at 07:50 AM.
Sep 2010 | {"url":"http://www.opengl.org/discussion_boards/showthread.php/179823-Help-reconstructing-pixel-position-from-depth?p=1244976","timestamp":"2014-04-16T13:36:48Z","content_type":null,"content_length":"97936","record_id":"<urn:uuid:7021516a-3be2-4777-960c-357914fd4a74>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00135-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rounak's Blog
Online Coding Round on InterviewStreet
The given problem boils down to: Given an undirected graph, a source and a destination, write the code to find the total number of distinct nodes visited, considering all possible paths.
4 of us were shortlisted for Personal Interviews in Delhi.
Personal Interview Problems:
1. Given two “ids” and a function getFriends(id) to get the list of friends of that person id, you need to write a function that returns the list of mutual friends.
2. Given an “id” and a function getFriends(id) to get the list of friends of that person id, you need to write a function that returns the list of “friends of friends” in the order of decreasing
number of mutual friends, as in friend recommendations.
3. Given a number of time slots (each with a start time and end time, “a b”), find any specific time with the maximum number of overlaps.
4. Given an array of Integers, find the Longest sub-array whose elements are in Increasing Order (a quick sketch follows this list).
5. Given an array of Integers, find the length of Longest Increasing Subsequence and print the sequence.
6. Given a Sorted Array which has been rotated, write the code to find a given Integer.
7. You have a number of incoming Integers, all of which cannot be stored into memory. We need to print largest K numbers at the end of input.
8. Implement LRU Cache.
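Here is the sketch for problem 4 promised above (my version, assuming "increasing" means strictly increasing and we want the start index and length):

    #include <vector>
    #include <utility>

    std::pair<int, int> longestIncreasingRun(const std::vector<int> &a)
    {
        if (a.empty()) return {0, 0};
        int bestStart = 0, bestLen = 1, start = 0;
        for (int i = 1; i < (int)a.size(); i++) {
            if (a[i] <= a[i - 1]) start = i;   // run broken: start a new one here
            if (i - start + 1 > bestLen) {     // keep the longest run seen so far
                bestLen = i - start + 1;
                bestStart = start;
            }
        }
        return {bestStart, bestLen};           // e.g. {1,2,3,1,2} -> start 0, length 3
    }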
The Interview Process was pretty impressive and well-organised. A fully sponsored trip to Delhi comes to an end, with me having a full-time Offer as a Software Engineer at Facebook. \m/
Interview Questions #1
This problem was asked to a friend of mine in his final round of interview for internship at Microsoft.
Given an unsorted array of non-negative integers and a non-negative integer ‘K’, find any one pair of integers in the given array such that their sum is equal to K. ‘n’ is the number of elements in
the array.
• Brute-Force O(n^2) solution.
#include <cstdio>
#include <iostream>
using namespace std;

int arr[10000];

int main()
{
    int n,sum,i,j;
    scanf("%d %d",&n,&sum);
    for(i=1;i<=n;i++) scanf("%d",&arr[i]);
    for(i=1;i<=n;i++)
    {
        for(j=i+1;j<=n;j++)
            if( (arr[i]+arr[j])==sum )
            {
                printf("%d %d\n",arr[i],arr[j]);
                break;
            }
        if(j<=n) break;   // inner loop broke early, so a pair was found
    }
    if(i>n) printf("NO\n");
    return 0;
}
It’s pretty simple to design the O(n^2) algorithm. So we betta have a look at an O(n) algorithm.
• Optimised O(n) solution
#include <cstdio>
#include <iostream>
using namespace std;

int arr[10000],ans[10000];   // ans[] marks values already seen; assumes sum < 10000

int main()
{
    int n,sum,i,val;
    scanf("%d %d",&n,&sum);
    for(i=1;i<=n;i++) scanf("%d",&arr[i]);
    for(i=0;i<=sum;i++) ans[i]=0;
    for(i=1;i<=n;i++)
    {
        val=sum-arr[i];                  // the partner arr[i] would need
        if(val<0) continue;
        if(ans[val]==1) {printf("%d %d\n",val,arr[i]);break;}
        if(arr[i]<=sum) ans[arr[i]]=1;   // remember arr[i] for later elements
    }
    if(i>n) printf("NO\n");
    return 0;
}
Microsoft Internship Selection
Finally Microsoft visited our campus!!
Only those with a minimum CGPA of 7.00 were allowed to sit for the written test. The written test comprised of two sections. The first one had 10 multiple choice questions with +3 for every right
answer and -1 for every wrong answer. Questions were pretty simple mainly from topics like pointers, recursion, memory, and the C programming language. The other section had two questions of 10 marks
each: one was a coding problem and for the other we had to design test cases.
Coding Problem: Write a function to shrink a given string. For example: Input String is “aaabbbaaccc”, and then the output string should be “a3b3a2c3”. You are passed a character array. Make changes
in that string (do not create a new string) and return it.
Well I don’t remember the other one… :(
After waiting for more than 3 weeks, the results were announced. 16 students made it to the list, and I was one of them. Now 6 of us were supposed to have a Group Activity, in which each one of us
was given two coding problems.
Coding Problem 1: We have a sorted circular linked list. You are being passed a pointer to one of the node and an integer. Write a function to insert a node in that linked list, with the given value
such that the linked list thus formed is also sorted.
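A sketch for Coding Problem 1 (my version; it walks to the boundary where the value fits, handling the max-to-min seam and the all-equal case):

    struct Node { int val; Node *next; };

    void sortedInsert(Node *head, int x)
    {
        Node *node = new Node{x, nullptr};
        Node *cur = head;
        while (true) {
            bool seam = cur->val > cur->next->val;        // the max -> min boundary
            if ((cur->val <= x && x <= cur->next->val) || // fits between neighbours
                (seam && (x >= cur->val || x <= cur->next->val)) ||
                cur->next == head)                        // full lap (all equal values)
                break;
            cur = cur->next;
        }
        node->next = cur->next;
        cur->next = node;
    }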
Coding Problem 2: Write a function to find the number of set bits in an unsigned integer.
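And a minimal version of Coding Problem 2, using Kernighan's trick of clearing the lowest set bit each pass:

    unsigned int countSetBits(unsigned int x)
    {
        unsigned int count = 0;
        while (x) { x &= x - 1; count++; }   // one iteration per set bit
        return count;
    }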
Now comes the Personal Interview.
Interview #1 (Technical) :
1. You are given two sorted arrays. Write a function to find the median of the two merged arrays. No extra space is to be used (Only variables are allowed).
2. Write a function to find the depth of a Binary Tree.
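A minimal sketch for question 2:

    struct TreeNode { TreeNode *left, *right; };

    int depth(TreeNode *root)                // an empty tree has depth 0
    {
        if (!root) return 0;
        int l = depth(root->left), r = depth(root->right);
        return 1 + (l > r ? l : r);
    }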
And I was asked a few questions on the difference between Arrays and Linked List and the advantage of one over the other. Same goes for Iteration and Recursion.
Interview #2 (Technical) :
You are given a binary tree where each node has 3 pointers: left, right and cousin. All the cousin pointers are initialized to NULL. Write a function to connect the nodes at the same level such that
the cousin pointer of a node points to a node at its immediate right, on the same level.
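A sketch of the level-by-level linking (my version, using a queue and the level size to know where each row ends):

    #include <queue>

    struct Node { Node *left, *right, *cousin; };

    void connectCousins(Node *root)
    {
        if (!root) return;
        std::queue<Node*> q;
        q.push(root);
        while (!q.empty()) {
            int levelSize = (int)q.size();
            Node *prev = nullptr;
            for (int i = 0; i < levelSize; i++) {
                Node *cur = q.front(); q.pop();
                if (prev) prev->cousin = cur;   // link left-to-right within the level
                prev = cur;                     // the rightmost node keeps cousin == NULL
                if (cur->left)  q.push(cur->left);
                if (cur->right) q.push(cur->right);
            }
        }
    }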
And at the end I got to hear those words which actually left me numb for a moment,
“Congratulations, you have made it through to MICROSOFT”. :)
Codeforces Round #105 ( Problem: Bag Of Mice )
This is a problem from the recently held Codeforces Round #105 (Div 2).
Problem Statement: The dragon and the princess are arguing about what to do on the New Year’s Eve. The dragon suggests flying to the mountains to watch fairies dancing in the moonlight, while the
princess thinks they should just go to bed early. They are desperate to come to an amicable agreement, so they decide to leave this up to chance.
They take turns drawing a mouse from a bag which initially contains w white and b black mice. The person who is the first to draw a white mouse wins. After each mouse drawn by the dragon the rest of
mice in the bag panic, and one of them jumps out of the bag itself (the princess draws her mice carefully and doesn’t scare other mice). Princess draws first. What is the probability of the princess
If there are no more mice in the bag and nobody has drawn a white mouse, the dragon wins. Mice which jump out of the bag themselves are not considered to be drawn (do not define the winner). Once a
mouse has left the bag, it never returns to it. Every mouse is drawn from the bag with the same probability as every other one and every mouse jumps out of the bag with the same probability as every
other one.
The only line of input data contains two integers w and b (0≤w,b≤1000).
Output the probability of the princess winning. The answer is considered to be correct if its absolute or relative error does not exceed 10^-9.
Sample test(s)
Here’s my accepted solution:
using namespace std;
#include <algorithm>
#include <iostream>
#include <iterator>
#include <sstream>
#include <fstream>
#include <cassert>
#include <climits>
#include <cstdlib>
#include <cstring>
#include <string>
#include <cstdio>
#include <vector>
#include <cmath>
#include <queue>
#include <deque>
#include <stack>
#include <map>
#include <set>
double arr[1010][1010],temp,temp1,temp2;
int main()
{
    int i,j,w,b;
    scanf("%d %d",&w,&b);
    for(i=0;i<=w;i++) arr[i][0]=1;   // no black mice left: princess draws white
    for(i=0;i<=b;i++) arr[0][i]=0;   // no white mice left: the dragon wins
    for(i=1;i<=w;i++)
        for(j=1;j<=b;j++)
        {
            temp1=temp2=0.0;
            arr[i][j]=(double)i/( (double)i+(double)j );   // princess draws white right away
            // princess draws black, then the dragon draws black too
            temp=(double)j/( (double)i+(double)j );
            temp*=(double)(j-1)/( (double)i+(double)j-1.0 );
            // a black mouse jumps out -> state (i, j-3)
            if(i+j>2 && j>2) temp1=temp*((double)(j-2.0)/( (double)i+(double)j-2 ));
            if(j>=3) temp1*=arr[i][j-3];
            // a white mouse jumps out -> state (i-1, j-2)
            if(i+j>2) temp2=temp*((double)(i)/( (double)i+(double)j-2 ));
            if(j>=2) temp2*=arr[i-1][j-2];
            arr[i][j]+=temp1+temp2;
        }
    printf("%.9f\n",arr[w][b]);
    return 0;
}
ACM-ICPC Kanpur Site Online Prelims ’11
Finally, the ACM-ICPC Asia Kanpur Site First Round Online Contest has come to an end... Must say, it was real fun!!!!!
Let's have a look at the contest problems and their optimal solutions. Some of the solutions are from teams which managed a good rank at the end of the contest. Congratulations to them!!!! :D
My team Gibberish managed an AIR of 26 and 1st in BIT Mesra. But until the results are declared I won't be at peace. :P
Problem 1: Arithmancy
Hermione Granger, the most talented witch of her generation, likes to solve various types of mathematical problems in the Arithmancy class. Today, the professor has given her the following task:
Find the number of fractions a/b such that-
1. gcd(a, b) = 1
2. 0 < a/b < 1
3. a * b = (n!) ^ (n!)
Where “n!” denotes the factorial of n and “^” denotes power, i.e. (2!) ^ (2!) = 4.
She is quite confident that she can solve it for n <= 10,000,000 (i.e. 10^7), but then she remembers that she has to study some case history so that she can help Hagrid to win the case of Buckbeak.
So, she wants your help to solve the problem.
There will be one line for each test case containing the number n (1 <= n <= 10,000,000). Input will be terminated by EOF. There will be around 20,000 test cases.
For each case, print the number of fractions in a separate line. This number may be very large, so print the answer modulo 10,007.
Time limit: 5s
Source limit: 50000
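A sketch of the counting behind the solution below (my reading, not the team's write-up): since gcd(a, b) = 1 and a * b = (n!) ^ (n!), each distinct prime power of (n!) ^ (n!) must go entirely into a or entirely into b. With k = pi(n) distinct primes <= n (exactly the primes dividing n!), that gives 2^k ordered coprime pairs, and by symmetry half of them have a < b, i.e. 0 < a/b < 1. So the answer is 2^(k-1) mod 10007, and 0 when k = 0 (the n = 1 case).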
Solution by Team Gibberish
#include <iostream>
#include <cstdio>
#include <vector>
using namespace std;

# define MAX 10000001
# define MOD 10007

vector<bool> prime(MAX,true);
int sum[MAX];   // sum[i] = number of primes <= i

void sieve()
{
    int i,j,p=0;
    prime[0]=prime[1]=false;
    for(i=2;i<MAX;i++)
    {
        if(!prime[i]) { sum[i]=p; continue; }
        p++;
        sum[i]=p;
        for(j=i+i;j<MAX;j+=i) prime[j]=false;
    }
}

int sq(int x)
{
    return (x*x)%MOD;
}

int pow(int x)   // 2^x mod 10007, by fast exponentiation
{
    if(!x) return 1;
    if(x&1) return (2*pow(x-1))%MOD;
    return sq(pow(x/2));
}

int main()
{
    int num,cnt;
    sieve();
    while(scanf("%d",&num)!=EOF)
    {
        cnt=sum[num];                 // k = number of distinct primes of n!
        if(cnt!=0) cnt=pow(cnt-1);    // 2^(k-1) mod 10007; 0 stays 0 for n=1
        printf("%d\n",cnt);
    }
    return 0;
}
Problem 2: Calender
We know that there are so many calendar systems. For example, Bangla, Christ, Arabic, Chinese etc. This problem is about Decimal calendar. There are 3 months in this calendar. First month is
“Hundreds”. There are 300 days in this month. Second month is “Tens”. There are 60 days in this month. And this followed by the last month “Ones” having 5 or 6 days depending on whether this is leap
year or not. A Decimal year spans a full Christ calendar. That is 1st Hundreds in Decimal Calendar is 1st January in Christi Calendar. Similarly, 31st December of Christ Calendar is 5th or 6th day of
Decimal calendar (depending on whether it is leap year or not).
A year in Decimal calendar is leap year if the corresponding Christ year is leap year. For example, the Decimal year corresponding to 2000 Christ year is leap year but 2001 is not, and again 1900 is
not leap year too. A year in Christ calendar is leap year if the year is divisible by 400 or divisible by 4 but not by 100.
You are given a day in Christ calendar in DD-MMM-YYYY format (DD stands for day, MMM stands for first three letters (in CAPS) of the month and YYYY stands for the year). You are to give the date in
Decimal Calendar format.
First line contains number of test case. Every test case consists of a date in Christ Calendar format in each line.
You are to output the case number and the date in Decimal Calendar format. Output the date and the month in the Decimal Calendar.
Case 1: 1 Hundreds
Case 2: 10 Hundreds
Case 3: 50 Tens
First three letters for the months are:
JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, DEC.
Time limit: 5s
Source limit: 50000
Solution By Team Gibberish
#include <iostream>
#include <string>
using namespace std;

int main()
{
    int n,k=1,day,year,mon,l,date;
    string s,month;
    int cal[][12]={ {0,31,59,90,120,151,181,212,243,273,304,334},
                    {0,31,60,91,121,152,182,213,244,274,305,335} };
    cin>>n;
    while(n--)
    {
        cin>>s;              // date in DD-MMM-YYYY format
        month="";
        day=(s[0]-'0')*10 + (s[1]-'0');
        month= month + s[3]+s[4]+s[5];
        year=(s[7]-'0')*1000 + (s[8]-'0')*100 + (s[9]-'0')*10 + (s[10]-'0');
        if(month=="JAN") mon=1;
        else if(month=="FEB") mon=2;
        else if(month=="MAR") mon=3;
        else if(month=="APR") mon=4;
        else if(month=="MAY") mon=5;
        else if(month=="JUN") mon=6;
        else if(month=="JUL") mon=7;
        else if(month=="AUG") mon=8;
        else if(month=="SEP") mon=9;
        else if(month=="OCT") mon=10;
        else if(month=="NOV") mon=11;
        else mon=12;         // "DEC"
        // leap year by the Christ calendar rule
        if(year%400==0) l=1;
        else if(year%100==0) l=0;
        else if(year%4==0) l=1;
        else l=0;
        date = day+cal[l][mon-1];   // day number within the Decimal year
        if(date<=300)
            cout<<"Case "<<k++<<": "<<date<<" "<<"Hundreds\n";
        else if(date<=360)
            cout<<"Case "<<k++<<": "<<date-300<<" "<<"Tens\n";
        else
            cout<<"Case "<<k++<<": "<<date-360<<" "<<"Ones\n";
    }
    return 0;
}
Problem 3: CricInfo
I guess the most visited site of the past 3 months is www.cricinfo.com. First World Cup Cricket, then Australia tour to Bangladesh and now IPL T20. I believe there are lots of cricket fans among you.
So I do not need to describe the game rule. But for the purpose of this problem here is short description of scoring. Any rule out of this problem description is not applicable for this problem.
For this problem we will use only the following outcomes in a ball:
│Possible Outcome in a Ball │Runs│Is the Ball valid? │
│. (dot) │0 │Yes │
│1 │1 │Yes │
│2 │2 │Yes │
│3 │3 │Yes │
│4 │4 │Yes │
│6 │6 │Yes │
│Wd │1 │No │
│1Wd │2 │No │
│2Wd │3 │No │
│4Wd │5 │No │
│Nb │1 │No │
│1Nb │2 │No │
│4Nb │5 │No │
│6Nb │7 │No │
│W │0 │Yes │
(Wd stands for Wide, Nb for No Ball and W for Wicket)
In cricinfo we always watch the score card. In cricket an over consists of 6 valid balls. A score card of an over may look like below:
1 . W . Wd Nb . 6
In this over there were 1 wicket and 9 runs. In the last over of second innings of a match, a team requires N runs to win. You are to output number of ways of the outcome of the over. Note that, as
you are watching second innings of the match, so it may be possible that he can score N runs in first 4 balls and win the match. That means, it is not necessary to play an entire over to score N
runs. Also suppose you do not know how many wickets are already gone. So it may also be possible that after a few wicket falls they are all out. Also note that, if a team scores greater or equal to N
runs the team wins and does not play any ball.
First line contains number of test case T (T <= 10000). For each test a line contains N (1 <= N <= 10000).
For every test case, output the case number and number of ways of outcome of the last over where the team needs N runs to win. As the answer can be very big, so output in mod 10000007.
Case 1: 946
Time limit: 5s
Source limit: 50000
Solution by Team XCoders
# include<stdio.h>
# define MOD 10000007
int run[15]={0,1,2,3,4,6,1,2,3,5,1,2,5,7,0};
int ball[15]={1,1,1,1,1,1,0,0,0,0,0,0,0,0,1};
long n;
long a[10001][7]={0};
void fun(long rnew,int bnew)
{
    long i,bn,rn,c=0;
    for(i=0;i<15;i++)
    {
        rn=rnew-run[i]; bn=bnew-ball[i];
        if(i==14) c++;                     /* a wicket may end the innings (all out) */
        if(rn<=0) c++;                     /* target reached: the over stops */
        else if(bn>0) c=(c+a[rn][bn])%MOD; /* keep playing */
        else c++;                          /* over finished without the target */
    }
    a[rnew][bnew]=c%MOD;
}
int main()
{
    int i,j,t;
    for(j=1;j<=6;j++)        /* extras keep b unchanged, so sweep b outward, r upward */
        for(i=1;i<=10000;i++) fun(i,j);
    scanf("%d",&t);
    for(i=0;i<t;i++)
    {
        scanf("%ld",&n); printf("Case %d: %ld\n",i+1,a[n][6]);
    }
    return 0;
}
Problem 4: Mr and Mrs Ant
Mr. and Mrs. Ant are very hungry. So, they want to collect food as much as they can. They can search for foods simultaneously. To do so, they start from their house and collect all foods together and
meet in some place (not necessarily their house). Finally, they eat together.
The world of Mr. and Mrs. Ant is a two dimensional grid. Each cell is either the home, or free, or blocked, or contains a food. Two cells are adjacent if they share an edge. In each second, they can
move from one cell to another cell simultaneously. One can decide to not to move in some step, while other may move. One cell can be visited many times. Both of them can move into the same cell also.
In this problem, the grid is given by an R x C matrix represented by following characters:
Character Meaning Remarks
│H │Home of Mr. and Mrs. Ant│Occurs exactly once │
│F │A food item │Occurs at least once, at most 8 times │
│. (dot) │Free (passable) cell │- │
│# (hash)│Blocked Cell │- │
Given the grid information, give the minimum amount of time that must be needed for them to collect all the foods and then meet.
The first line of input will contain T (T <= 30) denoting the number of cases. Each case starts with two integers R and C (2 <= R, C <= 12). Then, R lines follow giving the grid.
For each case, print the case number, the minimum amount of time (in seconds) that must be needed for them to collect all the foods and meet. If it is impossible to collect all the food items, output
-1 (negative one) instead.
Case 1: -1
Case 2: 8
Time limit: 5s
Source limit: 50000
Solution by Team Pandoras Box
# include <cstdio>
# include <algorithm>
using namespace std;

char maze[12][13];
int dist[12][12][12][12];
int neigh[4][2]={{0,1},{0,-1},{1,0},{-1,0}};
int arr[9];
int coord[9][2];

int main()
{
    int T;
    scanf("%d",&T);
    for(int t=0;t<T;t++)
    {
        int R,C;
        scanf("%d %d",&R,&C);
        for(int i=0;i<R;i++)
            scanf("%s",maze[i]);
        int cnt=1;
        for(int i=0;i<R;i++)
            for(int j=0;j<C;j++)
            {
                for(int k=0;k<R;k++)
                    for(int l=0;l<C;l++)
                        dist[i][j][k][l]=1000000;
                if(maze[i][j]!='#') dist[i][j][i][j]=0;
                if(maze[i][j]=='H') {coord[0][0]=i;coord[0][1]=j;}
                else if(maze[i][j]=='F') {coord[cnt][0]=i;coord[cnt++][1]=j;}
            }
        /* relax pairwise distances; 144 passes cover the longest shortest path on a 12x12 grid */
        for(int chaathu=0;chaathu<144;chaathu++)
            for(int i=0;i<R;i++)
                for(int j=0;j<C;j++)
                    for(int q=0;q<4;q++)
                    {
                        int k=i+neigh[q][0],l=j+neigh[q][1];
                        if(k<0||k>=R||l<0||l>=C) continue;
                        if(maze[i][j]=='#'||maze[k][l]=='#') continue;
                        for(int m=0;m<R;m++)
                            for(int n=0;n<C;n++)
                                dist[m][n][i][j]=min(dist[m][n][i][j],dist[m][n][k][l]+1);
                    }
        for(int i=0;i<cnt;i++)
            arr[i]=i;            /* arr[0] is the home, arr[1..cnt-1] the foods */
        int mindist=1000000;
        /* Two walks from H that meet at the end glue into one closed tour
           H -> all foods -> H, so the answer is ceil(best closed tour / 2). */
        do {
            int start=0,disttot=0;
            for(int i=0;i<cnt;start=arr[i++])
                disttot+=dist[coord[start][0]][coord[start][1]][coord[arr[i]][0]][coord[arr[i]][1]];
            disttot+=dist[coord[start][0]][coord[start][1]][coord[0][0]][coord[0][1]];
            mindist=min(mindist,(disttot+1)/2);
        } while(next_permutation(arr+1,arr+cnt));
        printf("Case %d: %d\n",t+1,mindist>10000?-1:mindist);
    }
    return 0;
}
Negative Base Number System
Here’s a C++ code to convert a number from decimal base to any negative base number system.
#include <cstdio>
using namespace std;

void convert(int n,int b)         // b is negative, e.g. -2
{
    int rem;
    if(n==0) return;
    rem=n%b; n/=b;
    if(rem<0) { rem-=b; n+=1; }   // b<0: force a non-negative digit
    convert(n,b);                 // recurse first, so the most significant digit prints first
    printf("%d ",rem);
}

int main()
{
    int n,b;
    while(scanf("%d %d",&n,&b)!=EOF)
    {
        if(n==0) {printf("0\n");continue;}
        convert(n,b); printf("\n");
    }
    return 0;
}
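For example, with input "6 -2" the program prints "1 1 0 1 0", which checks out: 1·16 + 1·(−8) + 0·4 + 1·(−2) + 0·1 = 6.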
Multiply Using Shift Operators
Recently came across this Russian Multiplication Algorithm, also known as Ethiopian Multiplication, to multiply two numbers without using the * operator.
Here’s the Algorithm to multiply a and b:
1.If a is odd, add b to the Result.
2.If a is zero, break and print the Result.
3.Divide a by 2, Multiply b by 2 and goto step 1.
Here’s the C++ code…..
# include<cstdio>
int main() {
    int a,b,ans;
    while(scanf("%d%d",&a,&b)!=EOF) {
        ans=0;
        while(a!=0) {
            if(a&1) ans+=b;   // step 1: a is odd, so add b to the result
            a>>=1; b<<=1;     // step 3: halve a and double b, using shifts only
        }
        printf("%d\n",ans);
    }
    return 0;
}
Interesting, huh! :D
For more details on the topic, refer to Russian Peasant Multiplication. | {"url":"http://rounaktibrewal.wordpress.com/","timestamp":"2014-04-19T13:09:14Z","content_type":null,"content_length":"111752","record_id":"<urn:uuid:a85b4719-a2fb-4d98-bc05-52da8930ba71>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
MySQL Lists: mysql: Stdev calculation for aggregate tables
List: General Discussion
From: Ow Mun Heng Date: April 5 2007 5:04am
Subject: Stdev calculation for aggregate tables
I'm looking at creating some aggregrate table based on ~2hr pull from
the main DB and looking to get standard stuffs like min/max/ave etc.
I'm having some issues with getting stdev which would be representative
of the stdev of say 10 hours of data.
From this website (it references SQL Server Analysis Services, but I think the concept is the same for MySQL):
1. Calculate the sum of the square of each sale
2. Multiply the result of step 1 by the sales count
3. Sum all sales
4. Square the result of step 3
5. Subtract the result of step 4 from the result of step 2
6. Multiply the sales count by one less than the sales count ("sales_count"
* ("sales_count" - 1))
7. Divide the result of step 5 by the result of step 6
8. Stdev will be the square root of step 7
The results are valid (verified with actual data) but I don't understand
the logic. All the statistical books I've read define stdev as
sqrt( sum((x - avg(x))^2) / (n - 1) ). The formula is very different, hence the confusion.
All I know is that, it works. Only question is, why?
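For the record, the two formulas agree algebraically. Expanding the sum of squared deviations with \(\bar{x} = \sum x_i / n\):

\[
\sum_i (x_i - \bar{x})^2 = \sum_i x_i^2 - n\bar{x}^2 = \frac{n\sum_i x_i^2 - \left(\sum_i x_i\right)^2}{n},
\]

so

\[
s = \sqrt{\frac{\sum_i (x_i - \bar{x})^2}{n-1}} = \sqrt{\frac{n\sum_i x_i^2 - \left(\sum_i x_i\right)^2}{n(n-1)}},
\]

which is exactly steps 1-8 (step 1 is the sum of squares, step 2 multiplies by n, steps 3-4 form the squared sum, and so on).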
Can anyone explain this? | {"url":"http://lists.mysql.com/mysql/205985","timestamp":"2014-04-18T03:12:31Z","content_type":null,"content_length":"5132","record_id":"<urn:uuid:c3163b96-3b20-4879-8c63-f2813ac91de8>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00048-ip-10-147-4-33.ec2.internal.warc.gz"} |
Aldous, David (ed.) et al., Discrete probability and algorithms. Proceedings of the workshops “Probability and algorithms” and “The finite Markov chain renaissance” held at IMA, University of
Minnesota, Minneapolis, MN, USA, 1993. New York, NY: Springer-Verlag. IMA Vol. Math. Appl. 72, 15-41 (1995).
This is an elegant survey article about the number of rectangular arrays of nonnegative integers with given row and column sums and its importance in various combinatorial and statistical
applications. The combinatorial problems include magic squares, enumeration of permutations by descents, enumeration of double cosets, the description of tensor product decompositions, Young
tableaux and Kostka numbers. The statistical applications focus on tests for independence in contingency tables with given margins. Several exact and approximative methods for determining the array
numbers are described. The computational complexity is discussed. The authors also present Monte Carlo techniques based on Markov chains on the set of arrays with prespecified margins.
05A15 Exact enumeration problems, generating functions
60G50 Sums of independent random variables; random walks
60C05 Combinatorial probability
60J10 Markov chains (discrete-time Markov processes on discrete state spaces)
05A16 Asymptotic enumeration
62H17 Contingency tables (statistics)
65C05 Monte Carlo methods
05B15 Orthogonal arrays, Latin squares, Room squares | {"url":"http://zbmath.org/?q=an:0839.05005&format=complete","timestamp":"2014-04-21T07:11:08Z","content_type":null,"content_length":"22493","record_id":"<urn:uuid:d3277e6d-91e4-495c-ba86-4677f99935f0>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00331-ip-10-147-4-33.ec2.internal.warc.gz"} |
phonon laser action
A step closer to visualizing the electron-phonon interplay
Energy Technology Data Exchange (ETDEWEB)
The origin of the very high superconducting transition temperature (Tc) in ceramic copper oxide superconductors is one of the greatest mysteries in modern physics. In the superconducting state, electrons form pairs (known as Cooper pairs) and condense into the superfluid state to conduct electric current with zero resistance. For conventional superconductors, it is well established that the 2 electrons in a Cooper pair are 'bonded' by lattice vibrations (phonons), whereas in high-Tc superconductors, the 'glue' for the Cooper pairs is still under intense discussion. Although the high transition temperature and the unconventional pairing symmetry (d-wave symmetry) have led many researchers to believe that the pairing mechanism results from electron-electron interaction, increasing evidence shows that electron-phonon coupling also significantly influences the low-energy electronic structures and hence may also play an important role in high-Tc superconductivity.

In a recent issue of PNAS, Carbone et al. use ultrafast electron diffraction, a recently developed experimental technique, to attack this problem from a new angle, the dynamics of the electronic relaxation process involving phonons. Their results provide fresh evidence for the strong interplay between electronic and atomic degrees of freedom in high-Tc superconductivity.

In general, ultrafast spectroscopy makes use of the pump-probe method to study the dynamic process in material. In such experiments, one first shoots an ultrafast (typically 10-100 fs) 'pumping' pulse at the sample to drive its electronic system out of the equilibrium state. Then after a brief time delay (Δt) of typically tens of femtoseconds to tens of picoseconds, a 'probing' pulse of either photons or electrons is sent in to probe the sample's transient state. By varying Δt, one can study the process by which the system relaxes back to the equilibrium state, thus acquiring the related dynamic information. This pump-probe experiment is reminiscent of the standard method used by bell makers for hundreds of years to judge the quality of their products (hitting a bell then listening to how the sound would fade away), albeit the relevant time scale here is way beyond tens of femtoseconds.

Traditionally, ultrafast spectroscopy was carried out to study gas-phase reactions, but it has also been applied to study condensed phase systems since the development of reliable solid-state ultrafast lasers approximately a decade ago. In addition, the ability to control pulse width, wavelength, and amplification of the output of Ti:Sapphire lasers has further increased the capability of this experimental method. During the past decade, many ultrafast pump-probe experiments have been carried out in various fields by using different probing methods, such as photo-resistivity, fluorescence yield, and photoemission, and they have revealed much new information complementary to the equilibrium spectroscopy methods used before.

Carbone et al. used the photon-pump, electron (diffraction)-probe method. The pumping photon pulse first drives the electrons in the sample into an oscillating mode along its polarization direction. Then during the delay time, these excited electrons can transfer excess energy to the adjacent nuclei and cause crystal lattice vibration on their way back to the equilibrium state. An ultrashort electron pulse is shot at the sample at various time delays Δt and the diffraction pattern is collected. Because the electron diffraction pattern is directly related to the crystal lattice structure and its motion, this technique provides a natural way to study the electron-phonon coupling problem. Furthermore, by adjusting the pump pulse's relative polarization with respect to the Cu-O bond direction, Carbone et al. were able to acquire the electron-phonon coupling strength along different directions.

Focusing on the lattice dynamic along the c axis, Carbone et al. found that the c-axis phonons in the optimally-doped Bi₂Sr₂CaCu
Chen, Y.L.; Lee, W.S.; Shen, Z.X.; /Stanford U., Appl. Phys. Dept. /Stanford U., Phys. Dept. /SLAC, PULSE | {"url":"http://worldwidescience.org/topicpages/p/phonon+laser+action.html","timestamp":"2014-04-21T10:04:23Z","content_type":null,"content_length":"1043771","record_id":"<urn:uuid:e14fca31-c8c7-4996-b46a-16fb9442e42b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00070-ip-10-147-4-33.ec2.internal.warc.gz"} |
[racket] Interesting article
From: Noel Welsh (noelwelsh at gmail.com)
Date: Thu Aug 12 09:42:09 EDT 2010
I agree with most of his gripes. Addressing his points about vectors,
I have a fairly extensive library of vectors functions on github
It has the same API for vector, evector [extensible vector], and
f64vector. It could easily be extended to flvector but I haven't yet
had the need/time. (Nor the time to properly package it.) I regularly
write numeric algorithms (e.g. discrete cosine transform, which I
wrote yesterday) in a functional system using this library.
The code is written using a compile-time unit system, which is well
...err... something.
API is below in case anyone is interested.
(import 'vector
(export for/vector
;;; Constructors
(import make-vector
(export vector-ones
;;; Predicates
(import in-vector vector-length)
(export vector-null? vector-=))
;;; Selectors
(import for/vector for/fold/vector in-vector vector-ref
vector-length list->vector)
(export vector-select
vector-find vector-findi
vector-find-first vector-find-firsti
;;; Mutators
(import vector-ref vector-set!)
(export vector-add1! vector-sub1!))
;;; Iterators
(import for/fold/vector for/vector
(export vector-map vector-mapi
vector-fold vector-foldi))
;;; General Functions
(import for/vector
(export vector* vector+ vector/ vector-
vector*s vector/s vector+s vector-s
vector-sum vector-product
vector-max vector-maxi vector-min vector-mini
vector-adjoin vector-append
vector-remove vector-remove-first vector-removei))
Posted on the users mailing list. | {"url":"http://lists.racket-lang.org/users/archive/2010-August/041025.html","timestamp":"2014-04-16T14:31:56Z","content_type":null,"content_length":"7477","record_id":"<urn:uuid:d24d9c04-1c38-408d-bab4-ea0e04102241>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00053-ip-10-147-4-33.ec2.internal.warc.gz"} |
Your High School's highest GPA...
I found threads for this in other forums but I'm curious about those who applied to USC.
Ours is just over 4.4 and is based on a 4.0 scale for all CP and most honors classes. 5.0 scale is used only for:
Honors English 3
Language 4
Honors Pre-Calc
Honors Chem
All AP classes (16 available)
Replies to: Your High School's highest GPA...
our highest gpa is just over 4.4, too
our highest is a 4.3, when a kid skipped lunch periods for 2 years to take an AP course
im confused stress0ut? he was a nerd and didnt eat lunch so he could take an ap course?
ours is 4.0... i.e. Gym counts the same as AP calculus
the highest you can attain is 5.0..but that's never been done..4.5 has been however achieved before.
How can you get a 5.0? Does that mean that you can take all classes worth 5 points, even freshman classes like Honors English 1, , all Languages, Honors Science classes, etc.?
We get 6 for an AP, 5 for a Pre-AP and 4 for a levels class.
lol no a 5.0 is impossible..
AP classes = 5 pts
honors and all other classes = 4 pts
so in order to get a 5.0..each and everyone of ur classes would have to be an AP
we do it on a 6.0 scale too; our highest this year is a 5.7-something weighted
AP and honors get a 6.0
then we have advanced classes that are 5.0
they got rid of regular classes that get a 4.0 in our freshman year, so no one gets a 4.0 unless they make a C in an AP class or a B in an advanced class
i have a 4.86 where honors&AP are 5.0, all the rest are 4.0
this might sound crazy, but my friend from last year actually got a 5.0 for two grading periods, ended up with like 4.9 or so for the semester...he had 6 honors/AP's (worth 5) and religion (worth
4) and he got A+'s in a couple classes (5.33)...he ended up going to Harvard (he was also URM/Recruited athlete/2200 SAT)...and he was a nice guy
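For what it's worth, the arithmetic checks out: six weighted classes at 5.0 plus one at 4.0 average to (6·5 + 4)/7 ≈ 4.86, and a couple of A+'s counted as 5.33 push that toward 4.9.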
6.0 for AP's?! That's why we can't compare weighted GPA's!
I finally get why schools send their grading system along w/ transcripts haha.
Past valedictorians have had 5.1111 or something like that. but the gpa system that determines our valedictorians follows some strange equation. so if they took extra AP's it counts for more or | {"url":"http://talk.collegeconfidential.com/university-southern-california/311247-your-high-schools-highest-gpa.html","timestamp":"2014-04-18T05:53:53Z","content_type":null,"content_length":"66065","record_id":"<urn:uuid:69473647-38b0-4415-a102-ca9733eb2f58>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00269-ip-10-147-4-33.ec2.internal.warc.gz"} |
Combinatorics and partially ordered sets
Results 1 - 10 of 54
- IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems , 1998
"... Abstract—We give a denotational framework (a “meta model”) within which certain properties of models of computation can be compared. It describes concurrent processes in general terms as sets of
possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there are e ..."
Cited by 245 (54 self)
Add to MetaCart
Abstract—We give a denotational framework (a “meta model”) within which certain properties of models of computation can be compared. It describes concurrent processes in general terms as sets of
possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there are exactly one or exactly zero behaviors. Compositions of processes are processes with behaviors
in the intersection of the behaviors of the component processes. The interaction between processes is through signals, which are collections of events. Each event is a value-tag pair, where the tags
can come from a partially ordered or totally ordered set. Timed models are where the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with
the same set of tags. Synchronous processes have only synchronous signals as behaviors. Strict causality (in timed tag systems) and continuity (in untimed tag systems) ensure determinacy under
certain technical conditions. The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent
sequential processes with rendezvous, Petri nets, and discrete-event systems. I.
- Software?”, Computer , 2000
"... at "components" and "frameworks" might entail. Otherwise, we have little hope of getting a useful model because the prevailing component architectures in software engineering are not suitable
for embedded systems. Most frameworks have four service categories: . Ontology. A framework defines wha ..."
Cited by 80 (10 self)
Add to MetaCart
at "components" and "frameworks" might entail. Otherwise, we have little hope of getting a useful model because the prevailing component architectures in software engineering are not suitable for
embedded systems. Most frameworks have four service categories: . Ontology. A framework defines what it means to be a component. Is a component a subroutine? A state transformation? A process? An
object? An aggregate of components may or may not be a component. Certain semantic properties of components also flow from the definition. Is a component active or passive---can it autonomously
initiate interactions with other components or does it simply react to stimulus? . Epistemology. A framework defines states of knowledge. What does the framework know about the components? What do
components know about one another? Can components interrogate one another to obtain information (that is, is there reflection or introspection)? What do components know<F1
- ORDER , 2000
"... We define an analogue of Schnyder's tree decompositions for 3-connected planar graphs. Based on this structure we obtain: Let G be a 3-connected planar graph with f faces, then G has a convex
drawing with its vertices embedded on the (f 1) (f 1) grid. Let G be a 3-connected planar graph. The d ..."
Cited by 34 (14 self)
Add to MetaCart
We define an analogue of Schnyder's tree decompositions for 3-connected planar graphs. Based on this structure we obtain: Let G be a 3-connected planar graph with f faces, then G has a convex drawing
with its vertices embedded on the (f 1) (f 1) grid. Let G be a 3-connected planar graph. The dimension of the incidence order of vertices, edges and bounded faces of G is at most 3. The second result
is originally due to Brightwell and Trotter. Here we give a substantially simpler proof.
- J. Combin. Theory Ser. A , 1997
"... We investigate several hyperplane arrangements that can be viewed as deformations of Coxeter arrangements. In particular, we prove a conjecture of Linial and Stanley that the number of regions
of the arrangement x i \Gamma x j = 1; 1 i ! j n; is equal to the number of alternating trees. Remarkab ..."
Cited by 31 (6 self)
Add to MetaCart
We investigate several hyperplane arrangements that can be viewed as deformations of Coxeter arrangements. In particular, we prove a conjecture of Linial and Stanley that the number of regions of the arrangement $x_i - x_j = 1$, $1 \le i < j \le n$, is equal to the number of alternating trees. Remarkably, these numbers have several additional combinatorial interpretations in terms of binary trees, partially ordered sets, and tournaments. More generally, we give formulae for the number of regions and the Poincaré polynomial of certain finite subarrangements of the affine Coxeter arrangement of type $A_{n-1}$. These formulae enable us to prove a "Riemann hypothesis" on the location of zeros of the Poincaré polynomial. We also consider some generic deformations of Coxeter arrangements of type $A_{n-1}$. 1 Introduction. The Coxeter arrangement of type $A_{n-1}$ is the arrangement of hyperplanes given by $x_i - x_j = 0$, $1 \le i < j \le n$ (1.1). This arrangement has n! regions. They
- EVOLUTIONARY PROGRAMMING VII, PROCEEDINGS OF THE 7TH ANNUAL CONFERENCE ON EVOLUTIONARY PROGRAMMING , 1998
"... The task of finding minimal elements of a partially ordered set is a generalization of the task of finding the global minimum of a real-valued function or of finding pareto--optimal points of a
multicriteria optimization problem. It is shown that evolutionary algorithms are able to converge to t ..."
Cited by 29 (7 self)
Add to MetaCart
The task of finding minimal elements of a partially ordered set is a generalization of the task of finding the global minimum of a real-valued function or of finding Pareto-optimal points of a multicriteria optimization problem. It is shown that evolutionary algorithms are able to converge to the set of minimal elements in finite time with probability one, provided that the search space is finite, the time-invariant variation operator is associated with a positive transition probability function and that the selection operator obeys the so-called 'elite preservation strategy.'
- IN PROCEEDINGS OF THE INTERNATIONAL SYMPOSIUM ON INFORMATION SCIENCE INNOVATIONS IN ENGINEERING OF NATURAL AND ARTIFICIAL INTELLIGENT SYSTEMS (ISI 2001 , 2001
"... The search for minimal elements in partially ordered sets is a generalization of the task of finding Pareto-optimal elements in multi-criteria optimization problems. Since there are usually many
minimal elements within a partially ordered set, a population-based evolutionary search is, as a matter o ..."
Cited by 19 (3 self)
Add to MetaCart
The search for minimal elements in partially ordered sets is a generalization of the task of finding Pareto-optimal elements in multi-criteria optimization problems. Since there are usually many minimal elements within a partially ordered set, a population-based evolutionary search is, as a matter of principle, capable of finding several minimal elements in a single run, and has therefore steadily gained popularity. Here, we present an evolutionary algorithm whose population converges with probability one to the set of minimal elements within a finite number of iterations.
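(The "set of minimal elements" in this abstract is easy to make concrete. Below is a brute-force sketch for the componentwise order on integer pairs; it computes the target set such an algorithm converges to, not the evolutionary algorithm itself, and the sample points are made up.)

```python
# Minimal (Pareto-optimal) elements of a finite set under the
# componentwise partial order: p is non-minimal if some q != p
# satisfies q <= p in every coordinate.

def dominates(q, p):
    return all(a <= b for a, b in zip(q, p)) and q != p

def minimal_elements(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

points = [(1, 5), (2, 2), (4, 1), (3, 3), (2, 4)]
print(minimal_elements(points))  # -> [(1, 5), (2, 2), (4, 1)]
```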
, 1996
"... This paper relates to system-level design of signal processing systems, which are often heterogeneous in implementation technologies and design styles. The heterogeneous approach, by combining
small, specialized models of computation, achieves generality and also lends itself to automatic synthesis ..."
Cited by 17 (4 self)
Add to MetaCart
This paper relates to system-level design of signal processing systems, which are often heterogeneous in implementation technologies and design styles. The heterogeneous approach, by combining small,
specialized models of computation, achieves generality and also lends itself to automatic synthesis and formal verification. Key to the heterogeneous approach is to define interaction semantics that
resolve the ambiguities when different models of computation are brought together. For this purpose, we introduce a tagged signal model as a formal framework within which the models of computation
can be precisely described and unambiguously differentiated, and their interactions can be understood. In this paper, we will focus on the interaction between dataflow models, which have partially
ordered events, and discrete-event models, with their notion of time that usually defines a total order of events. A variety of interaction semantics, mainly in handling the different notions of time
in the two models, are explored to illustrate the subtleties involved. An implementation based on the Ptolemy system from U.C. Berkeley is described and critiqued.
- PROC. AMER. MATH. SOC , 2002
"... For n-regular, N-vertex bipartite graphs with bipartition A ∪ B, a precise bound is given for the sum over independent sets I of the quantity µ |I∩A | λ |I∩B |. (In other language, this is
bounding the partition function for certain instances of the hard-core model.) This result is then extended to ..."
Cited by 16 (1 self)
Add to MetaCart
For n-regular, N-vertex bipartite graphs with bipartition A ∪ B, a precise bound is given for the sum over independent sets I of the quantity µ^{|I∩A|} λ^{|I∩B|}. (In other language, this is bounding the partition function for certain instances of the hard-core model.) This result is then extended to graded partially ordered sets, which in particular provides a simple proof of a well-known bound for Dedekind’s Problem given by Kleitman and Markowsky in 1975.
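(For very small graphs the quantity being bounded can simply be enumerated. A sketch, with an arbitrary toy bipartite graph rather than anything from the paper:)

```python
from itertools import combinations

# Brute-force hard-core partition function: sum over independent sets I
# of mu^{|I ∩ A|} * lam^{|I ∩ B|} on a tiny bipartite graph (toy data).
A, B = {0, 1}, {2, 3}
edges = {(0, 2), (0, 3), (1, 2)}

def independent(S):
    return not any((u, v) in edges or (v, u) in edges
                   for u, v in combinations(S, 2))

def Z(mu, lam):
    V = sorted(A | B)
    return sum(mu ** len(set(S) & A) * lam ** len(set(S) & B)
               for r in range(len(V) + 1)
               for S in combinations(V, r) if independent(S))

print(Z(1.0, 1.0))  # -> 8.0, the number of independent sets of this graph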
- State of the Art in Multiple Criteria Decision Analysis , 2005
"... This paper provides the reader with a presentation of preference modelling fundamental notions as well as some recent results in this field. Preference modelling is an inevitable step in a
variety of fields: economy, sociology, psychology, mathematical programming, even medicine, archaeology, and ob ..."
Cited by 12 (0 self)
Add to MetaCart
This paper provides the reader with a presentation of preference modelling fundamental notions as well as some recent results in this field. Preference modelling is an inevitable step in a variety of
fields: economy, sociology, psychology, mathematical programming, even medicine, archaeology, and obviously decision analysis. Our notation and some basic definitions, such as those of binary
relation, properties and ordered sets, are presented at the beginning of the paper. We start by discussing different reasons for constructing a model of preference. We then go through a number of
issues that influence the construction of preference models. Different formalisations besides classical logic such as fuzzy sets and non-classical logics become necessary. We then present different
types of preference structures reflecting the behavior of a decision-maker: classical, extended and valued ones. It is relevant to have a numerical representation of preferences: functional
representations, value functions. The concepts of thresholds and minimal representation are also introduced in this section. In section 7, we briefly explore the concept of deontic logic (logic of
preference) and other formalisms associated with "compact representation of preferences " introduced for special purposes. We end the paper with some concluding remarks. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=376665","timestamp":"2014-04-16T18:07:23Z","content_type":null,"content_length":"38536","record_id":"<urn:uuid:60168229-3252-43ab-9989-5a4de83dcb26>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00138-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exercise in Probability!!!!
Topic review (newest first)
2013-04-15 00:30:21
2013-04-15 00:27:26
Okay, I will run a few when I get clear of work.
2013-04-15 00:25:30
Cannot do. I'm on the phone and will probably not be able to get on the laptop for the day.
2013-04-15 00:24:10
M code please?
2013-04-15 00:19:25
I tested for a few values and it works.
2013-04-15 00:12:57
We could run a simulation. We could substitute numbers and at least see whether the formula works for a few examples. It cannot prove the correctness of it, but it sure can prove it false.
2013-04-15 00:09:39
Which kind of verification do you need?
2013-04-15 00:08:13
Did you fix it? Now how about a verification?
2013-04-14 23:50:37
Hm, I just found an error concerning the numbers. Let me fix the formula.
2013-04-14 23:48:22
Hmmm, have you tested this formula?
2013-04-14 23:47:09
No, I derived the formula myself using the PIE.
2013-04-14 23:45:10
The Wikipedia article covers PIE but it does not explain the formula in post #8.
2013-04-14 23:38:41
Well, I do not know of any you could read, but there is an article on Wikipedia on it. Search for "inclusion exclusion principle".
2013-04-14 23:37:05
Which common one would you recommend?
2013-04-14 23:32:24
It can be found in any common textbook: |A∪B| = |A| + |B| − |A∩B|.
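(Since the thread's suggestion is to test formulas by substitution or simulation, here is that idea applied to this identity, in Python rather than the requested "M code":)

```python
import random

# Check |A ∪ B| = |A| + |B| - |A ∩ B| on randomly generated sets.
for _ in range(1000):
    A = set(random.sample(range(50), random.randint(0, 20)))
    B = set(random.sample(range(50), random.randint(0, 20)))
    assert len(A | B) == len(A) + len(B) - len(A & B)
print("inclusion-exclusion held on every trial")
```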
| {"url":"http://www.mathisfunforum.com/post.php?tid=19228&qid=263192","timestamp":"2014-04-21T05:23:23Z","content_type":null,"content_length":"20212","record_id":"<urn:uuid:deae8eca-653c-4203-abf0-7563a22a3f6f>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
Orthogonal basis
Is this not a violation of "linear independence"?
No, it is not a violation.
Any set of n linearly independent vectors in an n-dimensional vector space can be used as a basis.
However, finding the correct coefficients is more laborious than for an orthogonal set, since orthogonality means they can be found one at a time (each by a single inner product). | {"url":"http://www.physicsforums.com/showthread.php?t=524809","timestamp":"2014-04-18T13:59:20Z","content_type":null,"content_length":"34719","record_id":"<urn:uuid:708efd11-50b5-4300-8704-0f990624f55f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
Jamaica, NY Precalculus Tutor
Find a Jamaica, NY Precalculus Tutor
...My coursework in college increased my understanding and knowledge of better ways to learn, explain, and understand complicated topics. My extra curricular activities in writing and public
relations also honed skills for editing, literature, communication, and composition. Being able to share my experiences and knowledge makes me happy and adds to my own experiences in life.
41 Subjects: including precalculus, English, reading, Spanish
I studied computer science and math in college and later practiced for the LSAT (got a 169 but ended up not applying for law school). I am actually a fan of standardized tests. Although daunting
sometimes, I think you can beat them simply by taking on a positive attitude and by practicing. I have ...
9 Subjects: including precalculus, algebra 1, algebra 2, trigonometry
...The thing that makes me a little different from most tutors is that I only do this on the side. Despite over ten years of experience tutoring, I've always kept tutoring as a part-time thing with
a limited number of students. I prefer it this way so I can get to know my pupils better.
26 Subjects: including precalculus, chemistry, physics, calculus
...I'm really passionate about spreading mathematical knowledge and, therefore, really explain why things are the way they are rather than just how to complete a problem. From my experience in
the classroom, this has produced greater results on exams, state exams/regents, and set a strong foundatio...
10 Subjects: including precalculus, calculus, geometry, trigonometry
...I have instructed students in Algebra, Geometry, Algebra II and Trigonometry, and Pre-Calculus. Experienced high school math teacher available to tutor SAT math. I have instructed classified
students at all levels and in most content area subjects.
14 Subjects: including precalculus, reading, accounting, algebra 1
| {"url":"http://www.purplemath.com/Jamaica_NY_precalculus_tutors.php","timestamp":"2014-04-18T03:54:47Z","content_type":null,"content_length":"24313","record_id":"<urn:uuid:0672c4a9-8f51-4f94-b1c8-f7d573bb4f24>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Question: Hooke's Law Sample Problems (Physics Forums Homework Help Archive)
Updated: 2008-03-14 20:49:45 (3)
Hooke's Law Sample Problems
I need a sample problem (and solution, please) for hooke's law to help me understand. I understand the equation, I just don't understand what the variables mean exactly and how the equation works and
what each variable stands for.
Answers:
Hooke's Law Sample Problems
OK. Hooke's law applies to the idealized case of a spring: the further you stretch the spring, the greater the force opposing the stretching. In other words, it assumes that the force increases linearly with distance.
F = -kx
where k is the spring constant, F is the force generated by the spring, and x is the displacement from equilibrium (where F = 0). Any basic sample problem will require the equation rearranged; or substitution of another variable into the two changeable variables, x and F; or balancing the equation against another force (say, a mass hanging on a spring, so that F = mg).
You could also ask to determine the velocity and KE of the mass at any time or displacement x. Or you could find the general solution to the differential equation of a harmonic oscillator, which is what you've got with a mass on a spring, and find sinusoidal motion decaying exponentially with the damping constant. So it depends on what depth you need.
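(For reference, the mass-on-a-spring equation alluded to above, with damping constant c, and its standard underdamped general solution, valid when c^2 < 4mk:)

```latex
m\ddot{x} + c\dot{x} + kx = 0,
\qquad
x(t) = e^{-\frac{c}{2m}t}\bigl(A\cos\omega t + B\sin\omega t\bigr),
\qquad
\omega = \sqrt{\tfrac{k}{m} - \bigl(\tfrac{c}{2m}\bigr)^{2}}.
```

Setting c = 0 recovers the pure sinusoidal motion of the undamped oscillator, with angular frequency sqrt(k/m).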
Hooke's Law Sample Problems
Hooke's law is this: F = kx, where F is the force applied to stretch or compress the spring, x is the distance the spring is stretched or compressed, and k is the "spring constant". It basically says that the response of a spring is proportional to the force.
Your text may have F = -kx. The difference here is that F now is the force exerted BY the spring rather than the force exerted ON the spring ("equal and opposite").
Here are several "Hooke's law" problems.
A spring with spring constant 0.4 dynes/cm has a force of 40 dynes applied to it (stretching it). How much does the spring stretch?
A force of 600 Newtons will compress a spring 0.5 meters. What is the spring constant of the spring?
A spring has spring constant 0.1 Newtons per meter. What force is necessary to stretch the spring by 2 meters?
A force of 40 Newtons will stretch a spring 0.1 meter. How far will a force of 80 Newtons stretch it?
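(A quick way to check your answers to the four problems above; a minimal sketch assuming the corrected units, with x = F/k throughout:)

```python
def stretch(F, k):   return F / k   # x = F/k
def constant(F, x):  return F / x   # k = F/x
def force(k, x):     return k * x   # F = k*x

print(stretch(F=40, k=0.4))      # problem 1: 100 cm  (k in dynes/cm)
print(constant(F=600, x=0.5))    # problem 2: 1200 N/m
print(force(k=0.1, x=2))         # problem 3: 0.2 N   (k in N/m)
k = constant(F=40, x=0.1)        # problem 4: k = 400 N/m,
print(stretch(F=80, k=k))        #            so 80 N stretches it 0.2 m
```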
| {"url":"http://www.allquests.com/question/894927/Hookes-Law-Sample-Problems.html","timestamp":"2014-04-17T16:16:43Z","content_type":null,"content_length":"13508","record_id":"<urn:uuid:2db61c79-aa32-4a25-8b7a-969499ad74c8>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Critical random graphs and the structure of a minimum spanning tree, Random Structures Algorithms
- In preparation , 2009
"... We consider the Erdős–Rényi random graph G(n, p) inside the critical window, that is when p = 1/n + λn −4/3, for some fixed λ ∈ R. Then, as a metric space with the graph distance rescaled by n
−1/3, the sequence of connected components G(n, p) converges towards a sequence of continuous compact metri ..."
Cited by 12 (5 self)
Add to MetaCart
We consider the Erdős–Rényi random graph G(n, p) inside the critical window, that is when p = 1/n + λn −4/3, for some fixed λ ∈ R. Then, as a metric space with the graph distance rescaled by n −1/3,
the sequence of connected components G(n, p) converges towards a sequence of continuous compact metric spaces. The result relies on a bijection between graphs and certain marked random walks, and the
theory of continuum random trees. Our result gives access to the answers to a great many questions about distances in critical random graphs. In particular, we deduce that the diameter of G(n, p)
rescaled by n −1/3 converges in distribution to an absolutely continuous random variable with finite mean. Keywords: Random graphs, Gromov-Hausdorff distance, scaling limits, continuum random tree,
diameter. 2000 Mathematics subject classification: 05C80, 60C05.
"... c t i v it y e p o r t 2009 Table of contents 1. Team.................................................................................... 1 ..."
Add to MetaCart
Activity Report 2009. Table of contents: 1. Team … 1
"... c t i v it y e p o r t 2008 Table of contents 1. Team.................................................................................... 1 ..."
Add to MetaCart
Activity Report 2008. Table of contents: 1. Team … 1
"... c t i v it y e p o r t 2009 Table of contents ..."
"... We study the fully-dynamicall pairsshortestpath problem forgraphswith arbitrary non-negative edge weights. It is known for digraphs that an update of the distance matrix costs O(n 2.75 polylog
(n)) worst-case time [Thorup, STOC ’05] and O(n 2 log 3 (n)) amortized time [Demetrescu and Italiano, J.ACM ..."
Add to MetaCart
We study the fully-dynamic all-pairs shortest path problem for graphs with arbitrary non-negative edge weights. It is known for digraphs that an update of the distance matrix costs O(n^{2.75} polylog(n)) worst-case time [Thorup, STOC ’05] and O(n^2 log^3(n)) amortized time [Demetrescu and Italiano, J. ACM ’04], where n is the number of vertices. We present the first average-case analysis of the undirected problem. For a random update we show that the expected time per update is bounded by O(n^{4/3+ε}) for all ε > 0. If the graph is outside the critical window, we prove even smaller bounds. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=10658719","timestamp":"2014-04-18T07:17:17Z","content_type":null,"content_length":"22193","record_id":"<urn:uuid:4da5412b-04e6-471d-993d-b3800d4fe0fd>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
A $\sigma$-model is a particular kind of physical theory of certain fields. The basic data describing a specific $\sigma$-model is some kind of “space” $X$, in a category of “spaces” which includes
smooth manifolds. We call $X$ the target space, and we define the “configuration space of fields” $Conf_\Sigma$ over a manifold $\Sigma$ to be the mapping space/mapping stack $Map(\Sigma, X)$. That
is, a “configuration of fields” over a manifold $\Sigma$ is like an $X$-valued function on $\Sigma$.
We assign a dimension $n \in \mathbb{N}$ to our $\sigma$-model, take $dim \Sigma \leq n$ and assume that target space $X$ is equipped with a “circle n-bundle with connection”.
For $n = 1$ this is an ordinary circle bundle with connection and models a configuration of the electromagnetic field on $X$. To distinguish this “field” on $X$ from the fields on $\Sigma$ we speak
of a background gauge field. (This remains fixed background data unless and until we pass to second quantization.) A field configuration $\Sigma \to X$ on $\Sigma$ models a trajectory of a charged
particle subject to the forces exerted by this background field.
For $n = 2$, a circle $n$-bundle with connection is a circle 2-group principal 2-bundle or equivalently a bundle gerbe with connection. This models a “higher electromagnetic field”, called a
Kalb-Ramond field. Now $\Sigma$ is taken to be 2-dimensional and a map $\Sigma \to X$ models the trajectory of a string on $X$, subject to forces exerted on it by this higher order field.
This pattern continues. In the next dimension a membrane with 3-dimensional worldvolume is charged under a circle 3-bundle with connection, for instance something called the supergravity C-field.
While one can speak of higher bundles in full generality and full analogy to ordinary principal bundles, it is useful to observe that any circle $n$-bundle is characterized by a classifying map $\
alpha : X \to \mathbf{B}^n U(1)$ in our category of spaces, so we can just think about classifying maps instead. Here $U(1)$ is the circle group, and $\mathbf{B}^n$ denotes its $n$th delooping ; thus
such a map is also a sort of cocycle in “smooth $n$th cohomology of $X$ with coefficients in $U(1)$”. The additional data of a connection refines this to a cocycle in the differential cohomology of
Such connection data $\nabla$ on a circle $n$-bundle defines – and is defined by – a notion of higher parallel transport over $n$-dimensional trajectories: for closed $n$-dimensional $\Sigma$ it defines a map $hol : (\gamma : \Sigma \to X) \mapsto \exp(i \int_\Sigma \gamma^*\nabla) \in U(1)$ that sends trajectories to elements in $U(1)$: the holonomy of $\nabla$ over $\Sigma$, given by integration of local data over $\Sigma$. The local data being integrated is called the Lagrangian of the $\sigma$-model. Its integral is called the action functional.
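(A standard worked instance of this, for orientation: in the case $n = 1$ of a particle of mass $m$ on a Riemannian target $(X,g)$, with the background connection locally of the form $\nabla = d + i A$, the action functional and the holonomy term it contributes read)

```latex
S[\gamma]
  \;=\;
  \frac{m}{2}\int_{S^1} g_{\mu\nu}(\gamma(t))\,\dot\gamma^{\mu}\dot\gamma^{\nu}\,dt
  \;+\;
  \oint_{S^1} \gamma^{*}A,
\qquad
\mathrm{hol}_\nabla(\gamma)
  \;=\;
  \exp\!\Bigl(i \oint_{S^1} \gamma^{*}A\Bigr)\;\in\;U(1).
```

(Globally, when no single 1-form $A$ exists, the holonomy is still well defined, which is the point of using a circle bundle with connection rather than a bare 1-form.)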
In the quantum $\sigma$-model one considers in turn the integral of the action functional over all of configuration space: the “path integral”. In the classical $\sigma$-model one considers only the
critical locus of the action functional (where the rough idea is that the path integral to some approximation localizes around the critical locus). Points in this critical locus are said to be
configurations that satisfy the “Euler-Lagrange equations of motion”. These are supposed to be the physically realized trajectories among all of them, in the classical approximation.
Finally, just like an ordinary circle group-principal bundle has an associated vector bundle once we fix a representation of $U(1)$ to be the fibers, any “circle n-bundle” has an associated “n-vector
bundle” once we fix a “∞-representation” $\rho : \mathbf{B}^n U(1) \to n Vect$ on “n-vector spaces”. Just as for the ordinary $U(1)$, here we usually pick the canonical 1-dimensional such
“representation”. Finally, we define bundles $V_\Sigma : Conf_\Sigma \to \mathcal{C}$ of “internal states” by transgression of these associated bundles.
The passage from principal ∞-bundles to associated ∞-bundles is necessary for the description of the quantum $\sigma$-model: it assigns in positive codimension spaces of sections of these associated
bundles. For a 1-categorical description of the resulting QFT ordinary vector bundles (assigned in codimension 1) would suffice, but the $\sigma$-model should determine much more: an extended quantum
field theory. This requires sections of higher vector bundles. For instance for $n = 2$ some boundary conditions of the $\sigma$-model are given by sections of the background 2-vector bundle: these
are the twisted vector bundles known as the Chan-Paton bundles on the boundary-D-branes of the string. (…)
We now try to fill this with life by spelling out some standard examples. Further below we look at precise formalizations of the situation.
Terminology and history
In physics one tends to speak of a model if one specifies a particular quantum field theory for describing a particular situation, for instance by specifying a Lagrangian or local action functional
on some configuration space. This is traditionally not meant in the mathematical sense of model of some theory. But in light of progress of mathematically formalizing quantum field theory (see FQFT
and AQFT), it can with hindsight be interpreted in this way:
a $\sigma$-model is supposed to be a type of model for the theory called quantum field theory. This sounds like a tautology, but much effort in mathematical physics is devoted to eventually making
this a precise statement. In special cases and toy examples this has been achieved, but for the examples that seem to be directly relevant for the phenomenological description of the observed world,
lots of clues still seem to be missing.
As to the ”$\sigma$” in ”$\sigma$-model”: back in the 1960s people were interested in a hypothetical particle called the $\sigma$-particle. Murray Gell-Mann came up with a theory of them. It was
called ‘the $\sigma$-model’. It was an old-fashioned field theory where the field took values in a vector space. Then someone came up with a modified version of the σ-model where the field took
values in some other manifold and this was called ‘the nonlinear σ-model’.
While the parameter space (the domain space of the fields) of the original $\sigma$-models was supposed to be our spacetime and the target space was some abstract space, with the advent of string
theory the nonlinear $\sigma$-models gained importance as quantum field theories whose target space is spacetime $X$ and whose parameter space is some low dimensional space, usually denoted $\Sigma$.
A field configuration $\Sigma \to X$ is then interpreted as being the trajectory of an extended fundamental particle – a fundamental brane – in $X$, and the $\sigma$-model describes the quantum
mechanics of that brane propagating in $X$.
In particular the quantum mechanics of a relativistic particle propagating on $X$ is described by a $\sigma$-model on the real line $\Sigma = \mathbb{R}$ – the worldline of the particle.
In string theory one considers 2-dimensional $\Sigma$ and thinks of maps $\Sigma \to X$ as being the worldsheets of the trajectory of a string propagating in spacetime.
In the context of 11-dimensional supergravity there is a $\sigma$-model with 3-dimensional $\Sigma$, describing the propagation of a membrane in spacetime.
Exposition of classical sigma-models
We survey, starting from the very basics, classical field theory aspects of $\sigma$-models that describe dynamics of particles, strings and branes on geometric target spaces.
The content of this section is at
Exposition of higher gauge theories as $\sigma$-models
We discuss how gauge theories and their higher analogs are naturally regarded as $\sigma$-models.
The content of this section is at
Exposition of quantum $\sigma$-models
Above we have discussed some standard classical sigma-models and higher gauge theories as sigma-models, also mostly classically. Here we talk about the quantization of these models (or some of them)
to QFTs: quantum $\sigma$-models .
The content of this section is at
See there for discussion of string topology, Gromov-Witten theory, Chern-Simons theory.
Exposition of a general abstract formulation
We give a leisurely exposition of a general abstract formulation $\sigma$-models, aimed at readers with a background in category theory but trying to assume no other prerequisites.
What is called an $n$-dimensional $\sigma$-model is first of all an instance of an $n$-dimensional quantum field theory (to be explained). The distinctive feature of those quantum field theories that
are $\sigma$-models is that
1. these arise from a simpler kind of field theory – called a classical field theory – by a process called quantization
2. moreover, this simpler kind of field theory is encoded by geometric data in a nice way: it describes physical configuration spaces that are mapping spaces into a geometric space equipped with some differential geometric structure.
We give expositions of these items step-by-step:
We draw from (FHLT, section 3).
The content of this section is at
Exposition of second quantization of $\sigma$-models
We discuss second quantization in the context of $\sigma$-models.
The content of this section is at
Non-topological $\sigma$-models
• The canonical textbook example of a quantum mechanical system is of this form for $n=1$: A line bundle with connection $E \to X$ on a (pseudo-)Riemannian manifold $X$ induces the 1-dimensional
quantum field theory which is the quantum mechanics of a point particle which propagates on $X$, subject to the forces of gravity (given by the pseudo-Riemannian metric on $X$) and
electromagnetism (given by the line bundle with connection). The Hamilton operator encoding this quantum dynamics in this case is the Laplace-operator of $T X$ twisted by the line bundle $E$.
For $X$ a spacetime this is called the relativistic particle.
For $\Sigma$ or $X$ a supermanifold this is the superparticle.
• Generalizing in the above example the line bundle $E$ to an abelian bundle gerbe with a connection yields a background for a 2-dimensional $\sigma$-model which may be thought of as describing the propagation of a string. The best-studied version of this is the case where $X = G$ is a Lie group, in which case this $\sigma$-model is known as the Wess-Zumino-Witten model.
Topological $\sigma$-models
• Dijkgraaf-Witten theory is the (2+1)-dimensional $\sigma$-model induced from an abelian 2-gerbe on $\mathbf{B} G$, for $G$ a finite group.
• Chern-Simons theory is supposed to be analogously the $\sigma$-model induced from an abelian 2-gerbe with connection on $\mathbf{B}G$, but now for $G$ a Lie group.
• the Poisson sigma-model is a model whose target is a Poisson Lie algebroid.
• in AKSZ theory this is generalized to a large class of sigma models with symplectic Lie n-algebroids as target.
• Rozansky–Witten theory is essentially the $\sigma$-model for $X$ a smooth projective variety.
• generally ∞-Chern-Simons theory is a $\sigma$-model with a smooth ∞-groupoid of ∞-connections.
A standard reference on 2-dimensional string $\sigma$-models is
• Pierre Deligne, Dan Freed, Classical field theory , chapter 5, page 211
• Krzysztof Gawedzki, Lectures on conformal field theory , part 3, lecture 3
• Pierre Deligne, Pavel Etingof, Dan Freed, L. Jeffrey, David Kazhdan, John Morgan, D.R. Morrison and Edward Witten, eds., Quantum Fields and Strings: A course for mathematicians, 2 vols., Amer. Math. Soc., Providence 1999. (web version)
Further discussion of sigma-models in the context of string theory is for instance in
• C. Callan, L. Thorlacius, Sigma models and string theory, Particles, Strings and Supernovae, Volumes I and II. Proceedings of the Theoretical Advanced Study Institute in Elementary Particle
Physics, held June 6 - July 2, 1988, at Brown University, Providence, Rhode Island. Edited by A. Jevicki and C.-I. Tan. Published by World Scientific, New York, 1989, p.795 (pdf)
• Arkady Tseytlin, Sigma model approach to string theory effective actions with tachyons, J. Math.Phys.42:2854-2871 (2001) (arXiv:hep-th/0011033)
First indications on how to formalize $\sigma$-models in a higher categorical context were given in
A grand picture developing this approach further is sketched in
A discussion of 2- or (2+1)-dimensional $\Sigma$-models whose target is an derived stack/infinity-stack is in
More discussion of the latter is at geometric infinity-function theory.
A discussion of $\sigma$-models of higher gauge theory type is at
Concrete applications of $\sigma$-models with target stacks (typically smooth ones, hence smooth groupoids) in string theory and supergravity are discussed for instance in | {"url":"http://www.ncatlab.org/nlab/show/sigma-model","timestamp":"2014-04-20T05:42:47Z","content_type":null,"content_length":"143421","record_id":"<urn:uuid:01b9898a-8b03-4912-9db1-576b2857af99>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00627-ip-10-147-4-33.ec2.internal.warc.gz"} |
Classical Gravity Question: F = G m1 * m2 / r2 goes to infinity as r->0?
Please explain to me where I am wrong.
I have a doughnut, a very large massive doughnut made of lead... Ok so it's not edible.
For calculating the force of gravity exerted by this doughnut I can replace it with a point located at its center. Correct? So as I approach this point the force of gravity will increase. As my
distance to this point goes to 0, F = G m1 * m2 / r2 will go to infinity. Intuitively this is wrong. Why is this so?
You can only replace the doughnut with a point at its center for
measurements of gravity at a distance from the doughnut. The
simplification works very well-- though not perfectly-- for spheres,
less well for doughnuts, and even less well for more complicated
shapes. If you approach a sphere so closely that you are
inside the sphere, the force not only stops increasing, it begins
to fall. Same with the doughnut.
No, this is not correct. The correct way is to divide the donut into infinitesimal elements and to calculate the force due to each element. If you do that correctly, you will find out that the
force in the center of the donut is precisely 0. Due to the symmetry, the forces balance each other.
So as I approach this point the force of gravity will increase.
No, see above.
As my distance to this point goes to 0, F = G m1 * m2 / r2 will go to infinity. Intuitively this is wrong. Why is this so?
Because you use intuition instead of math.
You may want to check out Feynman's Lectures on Physics Volume 1. It has several pages describing (mostly in words with a few equations as well) what Jeff Root just said. The equations assume
that you know some integral calculus, but even if you don't, the accompanying description alone is worth reading. As I said, it is only a few pages and can be easily read in the aisle of your
local big chain bookstore.
The problem is the shape. One can use the center of gravity as the source at a distance, but as you get closer to the center of gravity, forces counteract each other; in the center there will be a net force of 0, not infinity.
The max force would be at the surface of the outside of the doughnut.
However, the gravitational potential would probably be a max at the center.
Another way to look at it is: If all the mass of that doughnut was all concentrated in one point, then F would be getting quite large at small distances... it's possibly black hole time.
Thank you, members of cosmoquest forum, you are a part of my life I value.
It's time for you to break out your calculus. I suspect that this is not a trivial calculation, and it may not even be possible in closed form--I'm too short for time to even try to find out--but
the principle is that you would need to integrate dg resulting from each dm within the toroid to find the gravitational acceleration at any given point. When the point is sufficiently far from
the toroid, with "sufficiently far" dependent on how accurately you need to know the gravitational acceleration at any given point, you can just use the basic g=GMr/|r|^3, and assume that M (the
mass of your toroid) is spherical.
Have fun, and tell us how you fared.
(note that I'm using g as shorthand for F/m, where m is the mass of a test particle. Bold type denotes vector quantities)
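(Here is what that brute-force integration looks like in practice; a sketch only, with torus dimensions, density and grid resolution chosen arbitrarily rather than taken from the thread:)

```python
import numpy as np

# Sum the gravitational acceleration g = sum G*dm*(r_m - p)/|r_m - p|^3
# from a solid torus discretized into point masses dm = rho * dV.
G = 6.674e-11
R, r, rho = 100e3, 20e3, 11_340.0          # major/minor radius (m), lead density

n_u, n_v, n_w = 120, 40, 10                # toroidal, poloidal, radial grid
u = np.linspace(0, 2*np.pi, n_u, endpoint=False)
v = np.linspace(0, 2*np.pi, n_v, endpoint=False)
w = (np.arange(n_w) + 0.5) / n_w * r
U, V, W = np.meshgrid(u, v, w, indexing="ij")

# Solid-torus parametrization; the Jacobian gives dV = w*(R + w*cos v) du dv dw.
X = (R + W*np.cos(V)) * np.cos(U)
Y = (R + W*np.cos(V)) * np.sin(U)
Z = W * np.sin(V)
dV = W * (R + W*np.cos(V)) * (2*np.pi/n_u) * (2*np.pi/n_v) * (r/n_w)

def g_at(p):
    d = np.stack([X, Y, Z], axis=-1) - p           # vectors toward each dm
    dist = np.linalg.norm(d, axis=-1)
    return (G * rho * (dV / dist**3)[..., None] * d).sum(axis=(0, 1, 2))

print(g_at(np.array([0.0, 0.0, 0.0])))  # ~[0, 0, 0]: forces cancel at the center
```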
As I'm sure others have pointed out already (I have not read their posts yet), this formula only works when you are outside of the object's surface. So if your doughnut has a major radius of 100 km and a minor radius of 50 km, and your distance is 20 km from the centre, then this is outside the applicability of that formula.
At R = 0 the gravity would actually not go to infinity but go to zero, but, again, you can't use that formula for that calculation. Use the wrong formula for a given situation and you'll get a wrong answer. Garbage in, garbage out.
???? Gravitational potential ... shouldn't this be a max at the center? For a sphere it is the max in the center, right?
Isn't that what this diagram shows: [diagram not preserved in this archive]
Thanks for the responses
I am trying to model the attractive force of gravity in a computer program... Why? I'm bored. So I'm watching Stephen Hawking's 'Into the Universe' and there is a simulation that shows a bunch of marbles laid out on the floor of the cafeteria at Cambridge University. They were all equally spread out and nothing would happen because all of the forces would cancel out (this assumed an infinitely sized floor with an infinite number of marbles). If you removed some of the marbles the forces would be stronger and weaker in other areas and the marbles would begin to move. This was an explanation of the early Universe and the creation of galaxies/stars...
I was going to go into a long detailed explanation of what I have attempted, but instead I will pose a single question:
I have two masses, m1 and m2, separated by some distance x. What are their equations of motion? You do not have to account for collisions; the 'particles' can pass right through each other. I would think that the masses separated by x would accelerate towards each other, and when they meet would start to decelerate until they are x distance apart again, and this would repeat. For the more adventurous: what are the equations of motion for the 3-'particle' problem?
Thanks in advance.
To me, the diagram looks like the lowest point is at the center.
Keplerian ellipses, depending upon their initial velocity. If the velocities are too great, they might "escape" each other.
For the more adventurous what is the equations of motion for the 3 'particle' problem.
An unsolved mathematical problem, in general!
By convention, the gravitational potential is defined as zero infinitely far away from any mass. As a result it is negative elsewhere.
So, sounds like the potential is -something at the center.
Indeed. You will have to model this with a simulation; taking small enough (?) time steps should produce something reasonably accurate.
You should just need F=ma and F=Gm1m2/r^2 - you can then calculate the total force on each particle due to all the others and therefore its acceleration, at each time step, for every particle.
Tedious and repetitive. Which is what we have computers for.
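(A minimal sketch of exactly that loop for the 1-D two-mass case; the masses, step size and softening length eps are made-up values, and eps keeps the force finite near r = 0, the blow-up warned about elsewhere in the thread:)

```python
import numpy as np

# F = G*m1*m2/r^2 -> accelerations via F = m*a -> explicit Euler updates.
# The softening length eps bounds the force as the bodies pass through
# each other, so the pair just oscillates instead of blowing up.
G = 6.674e-11
m1, m2 = 1e8, 1e8                 # kg (illustrative)
x = np.array([0.0, 10.0])         # 1-D positions, m
v = np.array([0.0, 0.0])          # velocities, m/s
dt, eps = 0.5, 1.0

for step in range(40_000):
    r = x[1] - x[0]
    f = G * m1 * m2 * r / (r*r + eps*eps)**1.5   # softened force on body 0
    v += np.array([f / m1, -f / m2]) * dt
    x += v * dt
    if step % 8000 == 0:
        print(f"t = {step*dt:7.0f} s   separation = {x[1] - x[0]:+8.3f} m")
```

Explicit Euler slowly pumps energy into the motion; a leapfrog (velocity-Verlet) step is the usual cheap fix, and the N-body version just sums this pairwise force over all pairs at each step.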
See here
I would think that the masses separated by x would accelerate towards each other and when they meet would start to decelerate until they are x distance apart again and this would repeat.
This is false, they end up colliding (see link cited above).
... there is a simulation that shows a bunch of marbles layed out on
the floor of the cafeteria at Cambridge University. They were all equally
spread out and nothing would happen because all of the forces would
cancel out (this assumed an infinitely sized floor with an infinite number
of marbles).
That's what I tried to argue, but Ken G (one of BAUT's real experts)
wouldn't go along with it. My argument was Newtonian and his was
probably relativistic, so that likely makes the difference...
If you treat them as points, and the points get very close together,
then you *will* get absurd interactions. If the points are exactly on
top of one another, you will either get infinite gravitational attraction,
or a division by zero error. So you'll need to test for it.
Yes. The general terms for it are "harmonic motion" or "oscillation".
It looks exactly the same as looking at two masses orbiting each
other edge-on to the plane of the orbits. Keplerian circular and
elliptical orbits are examples of harmonic motion.
So, sounds like the potential is -something at the center.
My interpretation was potential energy. Oops.
For your experiment you have to think of what type of particle you are talking about.
If they can pass right through each other then they are bosons, and as bosons they will be traveling at c and never slow down. If they are fermions then they can't pass through each other, so you'd get a collision. Congratulations, you've designed a particle accelerator. If you have them miss each other by just a little bit, what you've done is put them in orbit around each other.
His particles are almost certainly either stars or planets, since he's
working on a gravity simulator. Could be galaxies, though.
| {"url":"http://cosmoquest.org/forum/showthread.php?114627-Classical-Gravity-Question-F-G-m1-*-m2-r2-goes-to-infinity-as-r-gt-0","timestamp":"2014-04-17T12:29:17Z","content_type":null,"content_length":"152841","record_id":"<urn:uuid:b189c812-d0e8-4f7f-be2e-5168edfe3583>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
Variance of the sum of two Poisson random variables
Hi, I have this problem, the solution of which is unclear to me. It's as follows.
Let the Poisson random variable U be the number of calls for technical assistance received by a computer firm during the firm's normal 9-hour day. Suppose the average number of calls per hour is
7.0 and that each call costs the company $50. Let V be a Poisson random variable representing the number of calls for assistance received during the remaining 15 hours of the day. Suppose the
average number of calls per hour for this time period is 4.0 and that each call costs the firm $60. Find the expected cost and the variance of the cost associated with calls received during a
24-hour day.
Now, the expectation and the variance of a Poisson random variable are the same, right? And as these are independent random variables, the expected cost of the sum should be the sum of the
individual expectations. i.e.
$\mathrm{E}[U + V] = \mathrm{E}[U] + \mathrm{E}[V] = \$3150 + \$3600 = \$6750$
This squares with the solution manual. However, it gives the variance as $373500.
But if these are independent random variables, why isn't the variance the sum of the individual variances, and in this case the same as the expectation? i.e. $6750
This has got to be a typo right? Or am I missing something. My experience with presumed typos in other maths books is it's usually the latter ;-) so I thought I'd check with you cats, MD.
p.s. shouldn't the units of the variance be $\$^{2}$ too?
Re: Variance of the sum of two poisson random variables
The expectation and the variance of a Poisson random variable are the same. But the Poisson-distributed random variables here are the numbers of calls, not the costs. Both the expectation and variance of the number of calls are dimensionless.
If you multiply a random variable by a constant factor, the variance gets scaled by the square of this factor. Therefore, if u is the number of calls during daytime and U the cost, Var(U) = (50$)^2 * Var(u) = (50$)^2 * E(u) = (50$) * E(U) = 157500 $^2.
Similarly, Var(V) = (60$)^2 * E(v) = (60$) * E(V) = 216000 $^2, and Var(U+V) = Var(U) + Var(V) = 373500 $^2.
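(The numbers are easy to sanity-check by simulation; a quick Monte Carlo sketch:)

```python
import numpy as np

# Simulate one day's cost: $50 per daytime call (Poisson, mean 9*7),
# $60 per off-hours call (Poisson, mean 15*4), over many simulated days.
rng = np.random.default_rng(0)
n = 1_000_000
cost = 50 * rng.poisson(9 * 7.0, n) + 60 * rng.poisson(15 * 4.0, n)
print(cost.mean())  # ~ 6750
print(cost.var())   # ~ 373500, not 6750: the dollar factors enter squared
```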
| {"url":"http://mathhelpforum.com/advanced-statistics/199936-variance-sum-two-poisson-random-variables.html","timestamp":"2014-04-16T14:29:33Z","content_type":null,"content_length":"34063","record_id":"<urn:uuid:27638c3d-bdf5-41bc-b65c-8dc1d3441f28>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
Faculty and Their Interests
Department of Mathematics
Name Interests
Avner Ash Ph.D. Harvard University
Number Theory, Algebraic Geometry
Jenny Baglivo Ph.D. Syracuse University
Statistics, Applied Mathematics
Martin Bridgeman Ph.D. Princeton University
Geometry, Topology
Solomon Friedberg Ph.D. University of Chicago
Number Theory, Representation Theory
Benjamin Howard Ph.D. Stanford University
Number Theory, Arithmetic Geometry
Tao Li Ph.D. California Institute of Technology
Topology, Geometry of Low-Dimensional Manifolds
G. Robert Meyerhoff Ph.D. Princeton University
Geometry, Topology
Mark Reeder Ph.D. The Ohio State University
Lie Groups, Representation Theory
Associate Professors
Name Interests
Daniel W. Chambers Ph.D. University of Maryland
Probability, Stochastic Processes, Statistics
C-K Cheung Ph.D. University of California at Berkeley
Complex Differential Geometry, Several Complex Variables
Robert Gross Ph.D. Massachusetts Institute of Technology
Algebra, Number Theory, History of Mathematics
William J. Keane Ph.D. University of Notre Dame
Abelian Group Theory
Rennie Mirollo Ph.D. Harvard University
Dynamical Systems
Nancy Rallis Ph.D. Indiana University
Algebraic Topology, Fixed Point Theory, Probability and Statistics
Assistant Professors
Name Interests
John Baldwin Ph.D. Columbia University
Low-dimensional Topology, Contact Geometry
Ian Biringer Ph.D. University of Chicago
Dawei Chen Ph.D. Harvard University
Algebraic Geometry
Maksym Fedorchuk Ph.D. Harvard University
Algebraic Geometry
David Geraghty Ph.D. Harvard University
Number Theory
Joshua Greene Ph.D. Princeton University
Low-dimensional Topology
Elisenda Grigsby Ph.D. University of California, Berkeley
Low-dimensional Topology
Dubi Kelmer Ph.D. Tel Aviv University
Number Theory, Spectral Theory
David Treumann Ph.D. Princeton University
Algebraic Geometry, Representation Theory
Adjunct Assistant Professors
Name Interests
Marie Clote D.E.A. Université Paris VII
Robert Reed Ph.D. University of Wisconsin
Mathematical Logic
Visiting Assistant Professors
Name Interests
Radu Cebanu Ph.D. Université du Québec à Montréal
Geometric Topology
Anja Bankovic Ph.D. University of Illinois
Ellen Julia Goldstein Ph.D. Tufts University
Algebraic Groups, Representation Theory
Li-Mei Lim Ph.D. Brown University
Number Theory
Joseph A. Johns Ph.D. University of Chicago
Symplectic Topology
Aaron Fraenkel (McMillan) Ph.D. University of California, Berkeley
Differential Geometry and Geometric Representation Theory
Anand Patel Ph.D. Harvard University
Algebraic Geometry
Lei Zhang Ph.D. University of Minnesota - Twin Cities
Automorphic Forms
Part-Time Faculty
Name Interests
Paul Garvey Ph.D. Old Dominion University
Risk and Mathematical Decision Theory
Retired Faculty
Name Interests
Gerald G. Bilodeau Ph.D. Harvard University
Robert J. Bond Ph.D. Brown University
Margaret Kenney Ph.D. Boston University
Gerard E. Keough Ph.D. Indiana University
Joseph Krebs M.A. Boston College
Charles K. Landraitis Ph.D. Dartmouth University
Richard A. Jenson Ph.D. University of Illinois
Ned Rosen Ph.D. University of Michigan
John H. Smith Ph.D. Massachusetts Institute of Technology
Paul R. Thie Ph.D. University of Notre Dame | {"url":"http://www.bc.edu/schools/cas/math/newsinfo/faculty.html","timestamp":"2014-04-16T16:08:05Z","content_type":null,"content_length":"19473","record_id":"<urn:uuid:17289083-d58f-443f-acdd-234246b8afd1>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00540-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to securely outsource cryptographic computations
, 2009
"... Verifiable Computation enables a computationally weak client to “outsource ” the computation of a function F on various inputs x1,...,xk to one or more workers. The workers return the result of
the function evaluation, e.g., yi = F(xi), as well as a proof that the computation of F was carried out co ..."
Cited by 96 (10 self)
Add to MetaCart
Verifiable Computation enables a computationally weak client to “outsource ” the computation of a function F on various inputs x1,...,xk to one or more workers. The workers return the result of the
function evaluation, e.g., yi = F(xi), as well as a proof that the computation of F was carried out correctly on the given value xi. The verification of the proof should require substantially less
computational effort than computing F(xi) from scratch. We present a protocol that allows the worker to return a computationally-sound, non-interactive proof that can be verified in O(m) time, where
m is the bit-length of the output of F. The protocol requires a one-time pre-processing stage by the client which takes O(|C|) time, where C is the smallest Boolean circuit computing F. Our scheme
also provides input and output privacy for the client, meaning that the workers do not learn any information about the xi or yi values. 1
- In Proc. Eurocrypt ’05 , 2005
"... At the mouth of two witnesses... shall the matter be establishedDeuteronomy Chapter 19. ..."
- In Proceedings of the 31st annual conference on Advances in cryptology, CRYPTO’11 , 2011
"... We study the problem of computing on large datasets that are stored on an untrusted server. We follow the approach of amortized verifiable computation introduced by Gennaro, Gentry, and Parno in
CRYPTO 2010. We present the first practical verifiable computation scheme for high degree polynomial func ..."
Cited by 19 (2 self)
Add to MetaCart
We study the problem of computing on large datasets that are stored on an untrusted server. We follow the approach of amortized verifiable computation introduced by Gennaro, Gentry, and Parno in
CRYPTO 2010. We present the first practical verifiable computation scheme for high degree polynomial functions. Such functions can be used, for example, to make predictions based on polynomials
fitted to a large number of sample points in an experiment. In addition to the many non-cryptographic applications of delegating high degree polynomials, we use our verifiable computation scheme to
obtain new solutions for verifiable keyword search, and proofs of retrievability. Our constructions are based on the DDH assumption and its variants, and achieve adaptive security, which was left as
an open problem by Gennaro et al (albeit for general functionalities). Our second result is a primitive which we call a verifiable database (VDB). Here, a weak client outsources a large table to an
untrusted server, and makes retrieval and update queries. For each query, the server provides a response and a proof that the response was computed correctly. The goal is to minimize the resources
required by the client. This is made particularly challenging if the number of update queries is unbounded. We present a VDB scheme based on the hardness of the subgroup
- In Proc. Crypto ’06 , 2006
"... Abstract. Let H1, H2 be two hash functions. We wish to construct a new hash function H that is collision resistant if at least one of H1 or H2 is collision resistant. Concatenating the output of
H1 and H2 clearly works, but at the cost of doubling the hash output size. We ask whether a better constr ..."
Cited by 15 (0 self)
Add to MetaCart
Abstract. Let H1, H2 be two hash functions. We wish to construct a new hash function H that is collision resistant if at least one of H1 or H2 is collision resistant. Concatenating the output of H1
and H2 clearly works, but at the cost of doubling the hash output size. We ask whether a better construction exists, namely, can we hedge our bets without doubling the size of the output? We take a
step towards answering this question in the negative — we show that any secure construction that evaluates each hash function once cannot output fewer bits than simply concatenating the given
functions. 1
- CRYPTO , 2006
"... Abstract. Let A and B denote cryptographic primitives. A (k, m)robust A-to-B combiner is a construction, which takes m implementations of primitive A as input, and yields an implementation of
primitive B, which is guaranteed to be secure as long as at least k input implementations are secure. The ma ..."
Cited by 13 (2 self)
Add to MetaCart
Abstract. Let A and B denote cryptographic primitives. A (k, m)robust A-to-B combiner is a construction, which takes m implementations of primitive A as input, and yields an implementation of
primitive B, which is guaranteed to be secure as long as at least k input implementations are secure. The main motivation for such constructions is the tolerance against wrong assumptions on which
the security of implementations is based. For example, a (1,2)-robust A-to-B combiner yields a secure implementation of B even if an assumption underlying one of the input implementations of A turns
out to be wrong. In this work we study robust combiners for private information retrieval (PIR), oblivious transfer (OT), and bit commitment (BC). We propose a (1,2)-robust PIR-to-PIR combiner, and
describe various optimizations based on properties of existing PIR protocols. The existence of simple PIR-to-PIR combiners is somewhat surprising, since OT, a very closely related primitive, seems
difficult to combine (Harnik et al., Eurocrypt’05). Furthermore, we present (1,2)-robust PIR-to-OT and PIR-to-BC combiners. To the best of our knowledge these are the first constructions of A-to-B
combiners with A ≠ B. Such combiners, in addition to being interesting in their own right, offer insights into relationships between cryptographic primitives. In particular, our PIR-to-OT combiner
together with the impossibility result for OT-combiners of Harnik et al. rule out certain types of reductions of PIR to OT. Finally, we suggest a more fine-grained approach to construction of robust
combiners, which may lead to more efficient and practical combiners in many scenarios.
"... We put forth the problem of delegating the evaluation of a pseudorandom function (PRF) to an untrusted proxy. A delegatable PRF, or DPRF for short, is a new primitive that enables a proxy to
evaluate a PRF on a strict subset of its domain using a trapdoor derived from the DPRF secret-key. PRF delega ..."
Cited by 8 (0 self)
Add to MetaCart
We put forth the problem of delegating the evaluation of a pseudorandom function (PRF) to an untrusted proxy. A delegatable PRF, or DPRF for short, is a new primitive that enables a proxy to evaluate
a PRF on a strict subset of its domain using a trapdoor derived from the DPRF secret-key. PRF delegation is policy-based: the trapdoor is constructed with respect to a certain policy that determines
the subset of input values which the proxy is allowed to compute. Interesting DPRFs should achieve low-bandwidth delegation: Enabling the proxy to compute the PRF values that conform to the policy
should be more efficient than simply providing the proxy with the sequence of all such values precomputed. The main challenge in constructing DPRFs is in maintaining the pseudorandomness of unknown
values in the face of an attacker that adaptively controls proxy servers. A DPRF may be optionally equipped with an additional property we call policy privacy, where any two delegation predicates
remain indistinguishable in the view of a DPRF-querying proxy: achieving this raises new design challenges as policy privacy and efficiency are seemingly conflicting goals. For the important class of
policies described as (1-dimensional) ranges, we devise two DPRF constructions and rigorously prove their security. Built upon the well-known tree-based GGM PRF family [15], our constructions are
generic and feature only logarithmic delegation size in the number of values conforming to the policy predicate. At only a constant-factor efficiency reduction, we show that our second construction
is also policy private. As we finally describe, their new security and efficiency properties render our delegated PRF schemes particularly useful in numerous security applications, including RFID,
symmetric searchable encryption, and broadcast encryption. 1
"... Outsourced databases provide a solution for data owners who want to delegate the task of answering database queries to third-party service providers. However, distrustful users may desire a
means of verifying the integrity of responses to their database queries. Simultaneously, for privacy or secur ..."
Cited by 7 (0 self)
Add to MetaCart
Outsourced databases provide a solution for data owners who want to delegate the task of answering database queries to third-party service providers. However, distrustful users may desire a means of
verifying the integrity of responses to their database queries. Simultaneously, for privacy or security reasons, the data owner may want to keep the database hidden from service providers. This
security property is particularly relevant for aggregate databases, where data is sensitive, and results should only be revealed for queries that are aggregate in nature. In such a scenario, using
simple signature schemes for verification does not suffice. We present a solution in which service providers can collaboratively compute aggregate queries without gaining knowledge of intermediate
results, and users can verify the results of their queries, relying only on their trust of the data owner. Our protocols are secure under reasonable cryptographic assumptions, and are robust to
collusion between k dishonest service providers.
"... Abstract. A(k; n)-robust combiner for a primitive F takes as input n candidate implementations of F and constructs an implementation of F, which is secure assuming that at least k of the input
candidates are secure. Such constructions provide robustness against insecure implementations and wrong ass ..."
Cited by 5 (3 self)
Add to MetaCart
Abstract. A(k; n)-robust combiner for a primitive F takes as input n candidate implementations of F and constructs an implementation of F, which is secure assuming that at least k of the input
candidates are secure. Such constructions provide robustness against insecure implementations and wrong assumptions underlying the candidate schemes. In a recent work Harnik et al. (Eurocrypt 2005)
have proposed a (2; 3)-robust combiner for oblivious transfer (OT), and have shown that (1; 2)-robust OT-combiners of a certain type are impossible. In this paper we propose new, generalized notions
of combiners for two-party primitives, which capture the fact that in many two-party protocols the security of one of the parties is unconditional, or is based on an assumption independent of the
assumption underlying the security of the other party. This fine-grained approach results in OT-combiners strictly stronger than the constructions known before. In particular, we propose an
OT-combiner which guarantees secure OT even when only one candidate is secure for both parties, and every remaining candidate is flawed for one of the parties. Furthermore, we present an efficient
uniform OT-combiner, i.e., a single combiner which is secure simultaneously for a wide range of candidates’ failures. Finally, our definition allows for a very simple impossibility result, which
shows that the proposed OT-combiners achieve optimal robustness.
"... Abstract—With the abundance of location-aware portable devices such as cellphones and PDAs, a new emerging application is to use this pervasive computing platform to learn about the whereabouts
of one’s friends and relatives. However, issues of trust, security and privacy have hindered the popularit ..."
Cited by 3 (0 self)
Add to MetaCart
Abstract—With the abundance of location-aware portable devices such as cellphones and PDAs, a new emerging application is to use this pervasive computing platform to learn about the whereabouts of
one’s friends and relatives. However, issues of trust, security and privacy have hindered the popularity and safety of the systems developed for this purpose. We identify and address the key
challenges of enabling private spatial queries in social networks using an untrusted server model without compromising users’ privacy. We propose Private Buddy Search (PBS), a framework to enable
private evaluation of spatial queries predominantly used in social networks, without compromising sensitive information about its users. Utilizing server side encrypted index structures and client
side query processing, PBS enjoys both scalability and privacy. Our extensive experimental evaluation shows that PBS supports very efficient user operations such as location updates, as well as
spatial queries such as range and k-nearest neighbor search. I.
"... Abstract. Gennaro et al. (Crypto 2010) introduced the notion of noninteractive verifiable computation, which allows a computationally weak client to outsource the computation of a function f on
a series of inputs x (1) ,... to a more powerful but untrusted server. Following a preprocessing phase (th ..."
Cited by 2 (0 self)
Add to MetaCart
Abstract. Gennaro et al. (Crypto 2010) introduced the notion of noninteractive verifiable computation, which allows a computationally weak client to outsource the computation of a function f on a series of inputs x^(1), ... to a more powerful but untrusted server. Following a preprocessing phase (that is carried out only once), the client sends some representation of its current input x^(i) to the server; the server returns an answer that allows the client to recover the correct result f(x^(i)), accompanied by a proof of correctness that ensures the client does not accept an incorrect result. The crucial property is that the work done by the client in preparing its input and verifying the server’s proof is less than the time required for the client to compute f on its own. We extend this notion to the multi-client setting, where n computationally weak clients wish to outsource to an untrusted server the computation of a function f over a series of joint inputs (x_1^(1), ..., x_n^(1)), ... without interacting with each other. We present a construction for this setting by combining the scheme of Gennaro et al. with a primitive called proxy oblivious transfer. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=4355483","timestamp":"2014-04-19T21:03:41Z","content_type":null,"content_length":"41479","record_id":"<urn:uuid:54ef75a3-3af6-4a9f-9b02-d08e7812c12d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Burke, VA Prealgebra Tutor
Find a Burke, VA Prealgebra Tutor
...I am an experienced middle school math teacher with extensive experience teaching algebra. I hold an MEd in secondary math education and am pursuing a PhD in math education at George Mason
University. I am an experienced middle school and high school math teacher.
19 Subjects: including prealgebra, calculus, statistics, algebra 2
I recently graduated with a master's degree in chemistry, all the while tutoring extensively in math and science courses throughout my studies. I am well versed in efficient studying techniques,
and am confident that I will be able to make the most use of both your time and mine! I have taken the ...
17 Subjects: including prealgebra, chemistry, calculus, physics
...I have to routinely start thousands of processes remotely and manipulate output data through the shell. Additionally, as a computer science major at my college I've had to learn day-to-day
UNIX commands to progress through my studies. Python is my favorite scripting language, and I've tutored several students in its use.
14 Subjects: including prealgebra, biology, C++, computer science
I have more than 10 years' experience teaching math in private, public, and charter school sectors. More than 80% of my students pass the state's standardized test each year. I have excellent
communication skills which help me to relate mathematical content to my students, and make the concepts seem easy and more doable.
5 Subjects: including prealgebra, algebra 1, elementary math, linear algebra
...I worked as a committee member and chairman of several international conferences, such as IEEE. I have more than 17 years of experience with teaching mathematics in the United States, Japan
and China. I have a unique teaching philosophy for mathematics.
12 Subjects: including prealgebra, calculus, geometry, algebra 1
Related Burke, VA Tutors
Burke, VA Accounting Tutors
Burke, VA ACT Tutors
Burke, VA Algebra Tutors
Burke, VA Algebra 2 Tutors
Burke, VA Calculus Tutors
Burke, VA Geometry Tutors
Burke, VA Math Tutors
Burke, VA Prealgebra Tutors
Burke, VA Precalculus Tutors
Burke, VA SAT Tutors
Burke, VA SAT Math Tutors
Burke, VA Science Tutors
Burke, VA Statistics Tutors
Burke, VA Trigonometry Tutors | {"url":"http://www.purplemath.com/Burke_VA_prealgebra_tutors.php","timestamp":"2014-04-18T03:40:33Z","content_type":null,"content_length":"24031","record_id":"<urn:uuid:df0214db-039d-4fd5-952e-87e898292261>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00171-ip-10-147-4-33.ec2.internal.warc.gz"} |
Explain the Algorithm of Insertion Sort? - Docsity Answers: questions and answers (q&a) from college students
Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort.
"Things for you to form are put with arrays connected with time-span N. May be in comparison Selecting can be performed inside key ram Simple working sets of rules : E(N2) Shellsort: to(N2)
Sophisticated working methods: I(NlogN) Normally: Ω(NlogN)"
"Insertion sort is a simple sorting algorithm, a comparison sort in which the sorted array (or list) is built one entry at a time. It is much less efficient on large lists than more advanced
algorithms such as quick sort, heap sort, or merge sort, but it has various advantages : (1) Simple to implement. (2) Efficient on (quite) small data sets. (3) Efficient on data sets which are
already substantially sorted: it runs in O(n + d) time, where d is the number of inversions. (4) stable(does not change the relative order of elements with equal keys) (5) In- place (only requires a
constant amount O(1) of extra memory space) (6) It is an online algorithm , in that it can sort a list as it receives it. A simple procedure for Insertion Sort is : insertionSort(array A) for i = 1
to length[A]-1 do begin value = A[i] j = i-1 while j >= 0 and A[j] > value do begin A[j + 1] = A[j] j = j-1 end A[j+1] = value end. Source:
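For concreteness, here is a minimal runnable C++ version of the pseudocode above (a sketch added for illustration, not part of the original answer; variable names mirror the pseudocode):

#include <vector>

// Grow a sorted prefix A[0..i-1] and insert A[i] into it,
// shifting larger elements one slot to the right.
void insertionSort(std::vector<int>& A) {
    for (std::size_t i = 1; i < A.size(); ++i) {
        int value = A[i];              // the key being inserted
        std::size_t j = i;
        while (j > 0 && A[j - 1] > value) {
            A[j] = A[j - 1];           // shift larger element right
            --j;
        }
        A[j] = value;                  // drop the key into its final slot
    }
}

Called as insertionSort(v) on a std::vector<int> v, this runs in O(n + d) time, where d is the number of inversions, matching the adaptive behavior described below.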
" On the other hand, insertion type delivers many strengths: Basic rendering Efficient intended for (quite) modest information packages Adaptative (my spouse and i.e., useful) for information units
which might be already drastically fixed: some time complexity is actually E(in + deb), in which debbie may be the volume of inversions More effective in practice as compared to other uncomplicated
quadratic (we.e., E(n2)) algorithms like selection form or perhaps bubble sorting; the most beneficial case (almost sorted suggestions) is E(n) Static; when i.at the., isn't going to customize the
comparative obtain connected with things having the same secrets Throughout-spot; my partner and i.at the., simply uses a continual volume I(just one) of additional storage On-line; my spouse and
i.elizabeth., can easily type a list since it gets that"
"Should the initial things are actually taken care of, an unsorted thing might be put inside the sorted placed in right position. This is known as introduction sorting. A formula look at the aspects
individually, placing every in their ideal position the type of currently thought to be (retention these individuals categorized). Installation sorting is an example of the small protocol; the item
creates this fixed sequence a single variety at any given time. Be considered the most convenient illustration showing the actual step-by-step introduction approach, wherever we all build up a
complex structure in and items frist by constructing that with d − 1 things after which doing the required changes to solve points inside introducing the last item. This provided sequences are
generally saved in arrays. Most of us in addition relate your numbers as recommendations. In conjunction with each and every essential can be additional information, referred to as artificial
satellite information. [Remember that ""satellite tv on pc information"" doesn't automatically are derived from satellite!] "
"Interpolation sorting is among the quite a few calculations that individuals will take care of within this blog. I select this kind of algorithmic rule to start with due to the fact I'm sure that it
is a fairly straightforward (much better start with a simple one ) "
"Introduction Sorting is another O(n2) categorization formula (much like Bubble Form had been) nevertheless may be bit faster. At this point, if at all possible, as soon as wanting to improve through
deciding on a fresh algorithm - toddler change from 1 sluggish algorithmic program to a new, merely When i number though I am describing selecting strategies - I will at the same time reveal a
variety of them. Anyways... apparently , installation type computes to be swifter in comparison with Percolate Type from the general event. "
"Criteria: Interpolation Sort It truely does work the way you might kind help connected with credit cards: All of us commence with jail left hand [sorted range] as well as the playing cards faced
down on the table [unsorted range]. Subsequently get rid of i cards [important] at once on the table [uncategorized raiment], in addition to put that to the right location from the left [grouped
assortment]. To discover the proper post with the cards, we evaluate this together with each of the credit cards currently within the side, from straight away to eventually left" | {"url":"http://en.docsity.com/answers/13526/explain-the-algorithm-of-insertion-sort","timestamp":"2014-04-16T10:13:23Z","content_type":null,"content_length":"199352","record_id":"<urn:uuid:a8c239e9-496c-4bbc-bea0-c500d7549148>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
SymMath Applications
Particle in a Box
Energies and Wavefunctions for Several One-Dimensional Potentials — Ricardo Metz, University of Massachusetts
This Mathcad module allows calculation of energies and wavefunctions for several one-dimensional potentials: particle in a finite box, particle in a box with a barrier, harmonic oscillator, Morse oscillator, and a double-minimum potential; comparison of the wavefunctions and their energies; and how barriers affect wavefunction tunneling, by looking at two double-minimum potentials. A particle-in-a-box basis set and the variational method are used.

Particle-in-a-Box — Theresa Julia Zielinski, Monmouth University; David M. Hanson, State University of New York at Stony Brook
This Mathcad document explores the time-dependent quantum mechanics for a particle-in-a-box using guided inquiry and the animation tools of the software.

Potential Barriers and Tunneling — Mark David Ellison, Wittenberg University
A document to provide students with the opportunity to develop their understanding of the behavior of particles in the presence of finite barriers. Color enhancement of the wave function and probability density plots clearly delineates the different regions and makes clear what has happened. Scanning tunneling microscopy is also examined, followed by applications of the tunneling concept to chemical reactions.

Variational Methods Applied to the Particle in the Box — W. Tandy Grubbs, Stetson
A Mathcad instructional document that allows students to explore the variational method. The variational method is used here to estimate the energy levels of the particle in a one-dimensional box, where exact expressions are available for the energies and wave functions. By practicing the variational method on a known system, students explore the factors that govern the accuracy of the estimated energy and thereby gain an appreciation of and confidence in the variational method that is difficult to obtain in any other fashion. Exercises are included that allow students to try the variational method using a wide range of trial functions.
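Where these modules mention the variational method, the underlying principle — standard quantum mechanics, stated here for reference rather than taken from the modules themselves — is the bound

$E[\phi] = \dfrac{\langle \phi \,|\, \hat{H} \,|\, \phi \rangle}{\langle \phi \,|\, \phi \rangle} \ \ge\ E_0,$

so minimizing $E[\phi]$ over the adjustable parameters of a trial function $\phi$ yields an upper estimate of the true ground-state energy $E_0$.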
Visualizing Particle-in-a-Box Wavefunctions using Mathcad — Edmund L. Tisko, University of Nebraska at
Using the built-in differential equation solvers and the graphical capabilities of Mathcad, students can visualize the wavefunctions of the particle-in-a-box potential. The document examines bound states and tunneling using the particle in a box, step potential, and double-well potential. | {"url":"http://www.chemeddl.org/alfresco/service/org/chemeddl/symmath/apps?toc_id=20&guest=true","timestamp":"2014-04-19T19:47:38Z","content_type":null,"content_length":"6746","record_id":"<urn:uuid:13a67152-fb35-4a9f-9b02-d08e7812c12d>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
Computer Science 345 > Dr. Johnson > Notes > slides01-1.pdf | StudyBlue
| {"url":"http://www.studyblue.com/notes/note/n/slides01-1pdf/file/341102","timestamp":"2014-04-20T20:56:24Z","content_type":null,"content_length":"35712","record_id":"<urn:uuid:a4b3a0a0-b269-452f-8925-a4e898292261>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Do More, Code Less with ArrayFire GPU Matrix Library
Posted by Mark Harris | Tagged ArrayFire, C++, Fortran, Libraries
ArrayFire is a fast and easy-to-use GPU matrix library developed by Accelereyes. ArrayFire wraps GPU memory into a simple “array” object, enabling developers to process vectors, matrices, and volumes
on the GPU using high-level routines, without having to get involved with device kernel code.
ArrayFire Feature Highlights
• ArrayFire provides a high-level array notation and an extensive set of functions for easily manipulating N-dimensional GPU data.
• ArrayFire provides all basic arithmetic operations (element-wise arithmetic, trigonometric, logical operations, etc.), higher-level primitives (reductions, matrix multiply, set operations,
sorting, etc.), and even domain-specific functions (image and signal processing, linear algebra, etc.).
• ArrayFire can be used as a self-contained library, or integrated into and supplement existing CUDA code. The array object can wrap data from CUDA device pointers and existing CPU memory.
• ArrayFire contains built-in graphics functions for data visualization. The graphics library in ArrayFire provides easy rendering of 2D and 3D data, and leverages CUDA OpenGL interoperation, so
visualization is fast and efficient. Various visualization algorithms make it easy to explore complex data.
• ArrayFire offers a unique “gfor” construct that can drastically speed up conventional “for” loops over data. The gfor loop essentially auto-vectorizes the code inside, and executes all iterations
of the loop simultaneously.
• ArrayFire supports C, C++, and Fortran on top of the CUDA platform.
• ArrayFire is built on top of a custom just-in-time (JIT) compiler for efficient GPU memory usage. The JIT back-end in ArrayFire automatically combines many operations behind the scenes, and
executes them in batches to minimize GPU kernel launches.
• Accelereyes strives to include only the best performing code in ArrayFire. This means that ArrayFire uses existing implementations of functions when they are faster—such as Thrust for
sorting, CULA for linear algebra, and CUFFT for fft.
K-means Clustering: An ArrayFire Example
The K-means clustering algorithm is a popular method of cluster analysis commonly used in data mining and machine learning applications. The goal of the algorithm is to partition n data points into k
clusters in which each observation belongs to the cluster with the nearest mean. K-means can also be used in image processing applications as a method for image segmentation. Here is some pseudocode
for the K-means clustering algorithm.
Choose k data points to act as initial cluster centers
Until the cluster centers do not change:
Assign each data point (pixel) to the nearest (color-distance) cluster
Update the cluster centers with the average of the pixels in the current cluster
Implementing k-means in raw CUDA C/C++ can be a daunting task; even using Thrust it takes over 300 lines of code. With ArrayFire, a k-means implementation can be written in about 10 times fewer lines
of code than an equivalent Thrust implementation, allowing the programmer to spend time and effort on high-level algorithms rather than low-level implementation details. The following code snippet is
from the K-Means example (included with ArrayFire), which demonstrates easy k-means image segmentation.
// kmeans(input,input,output)
// data: input, 1D or 2D (range [0-1])
// k: input, # desired means (k > 1)
// means: output, vector of means
void kmeans(array& data, int k, array& means) {
array datavec = flat(data); // convert 2D -> 1D
float minimum = min(datavec); // get minimum value
datavec = datavec - minimum; // re-center range
int nbins = max(datavec) + 1; // number of color bins
array means_vec = array(seq(k)) * nbins / (k + 1); // initial centroids
array hist_counts = zero(1, nbins); // initialize histogram bins
array hist = histogram(datavec, nbins); // compute image histogram
array hist_idx = where(hist); // get non-zero histogram bins
int num_uniq = hist_idx.elements(); // number of non-zero bins
    while (1) {                                             // convergence loop
        array prev_means = means_vec;                       // running means total
        gfor(array i, num_uniq) {                           // update all bins in parallel
            array diffs = abs(hist_idx(i) - means_vec);     // current classifications
            array val, idx;
            min(val, idx, diffs);                           // get index of minimum value
            hist_counts(hist_idx(i)) = idx;                 // update bins
        }
        for (int i = 0; i < k; ++i) {
            array m = where(hist_counts == i);              // find all occurrences
            means_vec(i) = sum(m * hist(m)) / sum(hist(m)); // recalculate means
        }
        if (norm(means_vec - prev_means) < 1) break;        // stop when converged
    }
    means = means_vec + minimum;                            // re-center range, output means
}
This heavily commented code sample uses several of the features mentioned previously; notice how expressive and powerful the ArrayFire syntax can be. The complete demo source code also demonstrates
how to re-color an image based on the computed means (a “k-means shift”). The following text and image are the output of running the k-means example.
machine_learning $ ./kmeans
** ArrayFire K-Means Demo **
k = 3
min 29
max 209
means =
A relative of the k-means clustering algorithm is mean-shift clustering, in which local clusters form based on the local data, and not based on a specific k. ArrayFire also includes a meanshift()
function for simple image filtering.
Beautiful ArrayFire Graphics
The “Shallow Water Equations” (SWE) example included with ArrayFire demonstrates how to use ArrayFire to simulate and visualize the shallow water equations on the GPU. It makes heavy use of
convolution and element-wise arithmetic, and showcases 2D and 3D graphics rendering, as you can see in the following screenshot. The upper left surface plot and bottom left image plot show the
current wave formation, and the 3D points on the right plot gradient vs. magnitude.
Easy Integration with CUDA
ArrayFire can easily share data with existing CUDA device memory, allowing plug-in functionality to existing CUDA applications. The following code snippet demonstrates how to use ArrayFire with a
CUDA device memory pointer for image edge detection.
// cuda_img_ptr is an existing cuda device memory pointer
// 3x3 sobel weights
const float h_sobel[] = {
-2.0, -1.0, 0.0,
-1.0, 0.0, 1.0,
0.0, 1.0, 2.0
};
// load sobel convolution kernel
array sobel_k = array(3, 3, h_sobel);
// wrap cuda memory in an ArrayFire array object
array image = array(m, n, cuda_img_ptr, afDevice);
// run filter to get edges
array edges = convolve(image, sobel_k);
// convert array memory back to a raw cuda device pointer
cuda_img_ptr = edges.device();
// continue using cuda_img_ptr
Multi-GPU Support
You can use ArrayFire to easily parallelize computation among multiple CUDA devices. The deviceset() function is used to select a device on which subsequent operations will be performed. The
following example shows how to run an FFT across many devices in parallel.
// divide up work across all GPUs
array *y = new array[num_gpus];
for (int i = 0; i < num_gpus; ++i) {
deviceset(i); // change GPU
array x = randu(5,5); // add data to selected GPU
y[i] = fft(x); // put work in queue
}
// all GPUs are now computing simultaneously, until done
See the multi-gpu gemv code included with ArrayFire for a more complete distributed-computing example. Using deviceset() in combination with gfor offers massive parallelism with minimal code
Limitations of ArrayFire
While ArrayFire has a solid set of routines for 1D/2D/3D data, it does not offer true N-D support. ArrayFire is currently limited to 4-dimensional data, and most functions only support up to 3D
Currently, not every function supports gfor. While many functions can be used inside gfor loops, there are some that cannot. Also, gfor currently requires each iteration to be independent, which
precludes certain types of computation. Over time, however, Accelereyes hopes to enable gfor support for all ArrayFire computations.
Gear Up for Speed Up
ArrayFire is a fast matrix library for GPU computing with an easy-to-use API. Its “array”-based function set makes GPU programming simple and accessible. ArrayFire is cross-platform
(Linux,Windows,OSX) and offers hundreds of routines in areas of matrix arithmetic, signal processing, linear algebra, statistics, image processing, and can easily integrate into existing CUDA
applications. Read more about AccelerEyes and try out ArrayFire for free!
Developers, come take part in the Gear Up for Speed Up! campaign hosted by NVIDIA and AccelerEyes! We are confident you will get at least 4x performance improvement on your application in just 2
weeks, simply by using ArrayFire to accelerate critical portions of your code.
NVIDIA and AccelerEyes are also hosting a free “Gear up for Speedup” Webinar Thursday, November 1st at 9:00 AM PDT. Don’t miss it!
Chris McClanahan is a software engineer at Accelereyes. | {"url":"http://devblogs.nvidia.com/parallelforall/do-more-code-less-arrayfire-gpu-matrix-library/","timestamp":"2014-04-16T10:03:05Z","content_type":null,"content_length":"58398","record_id":"<urn:uuid:ed02204f-2317-401a-80d8-d607140b6dc4>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00223-ip-10-147-4-33.ec2.internal.warc.gz"} |
East Milton, MA Math Tutor
Find an East Milton, MA Math Tutor
...I have more than five years experience as a career ownership and development coach. The career level of my clients includes those at the high school, college/university, entry, experienced,
and top executive levels. The foundation of my experience is more than 20 years in business, as an employee and consultant; as well as holding senior positions in management, including
41 Subjects: including algebra 1, chemistry, English, reading
...I love teaching, but tutoring really allows that one-on-one time that helps students build the desire for lifelong learning. It is a joy to be part of that process.I am a full time High School
Chemistry Teacher. We exceed the MA state standards and usually delve into organic Chemistry at the end of the year.
14 Subjects: including algebra 1, SAT math, Spanish, chemistry
...Now I have a family and follow daily the news and statistics of baseball. Proof reading is a skill that must not be overlooked. In the hundreds of essays and reports I have written over the
years, I have spent much time re-writing papers because the grammar often doesn't sound the same when you look at it a second or third time.
8 Subjects: including algebra 1, vocabulary, English, SAT math
...My teaching approach to the ISEE is very student-specific. I start by analyzing my student's specific strengths and weaknesses on a practice test so that our time together is spent in the most
productive and efficient way possible. As we progress, what we spend time on each step of the way is in response to my student's evolving needs.
33 Subjects: including algebra 2, SAT math, English, algebra 1
...My name is Rebecca and I am currently a freshman at Northeastern University studying International Business. I've always loved math and am able to tutor students up to Algebra 2. I have
previous experience tutoring students of all ages throughout my four years of high school.
11 Subjects: including algebra 2, reading, linear algebra, probability
Related East Milton, MA Tutors
East Milton, MA Accounting Tutors
East Milton, MA ACT Tutors
East Milton, MA Algebra Tutors
East Milton, MA Algebra 2 Tutors
East Milton, MA Calculus Tutors
East Milton, MA Geometry Tutors
East Milton, MA Math Tutors
East Milton, MA Prealgebra Tutors
East Milton, MA Precalculus Tutors
East Milton, MA SAT Tutors
East Milton, MA SAT Math Tutors
East Milton, MA Science Tutors
East Milton, MA Statistics Tutors
East Milton, MA Trigonometry Tutors
Nearby Cities With Math Tutor
East Braintree, MA Math Tutors
Grove Hall, MA Math Tutors
Houghs Neck, MA Math Tutors
Marina Bay, MA Math Tutors
Milton Village Math Tutors
Norfolk Downs, MA Math Tutors
North Quincy, MA Math Tutors
Quincy Center, MA Math Tutors
Readville Math Tutors
Reservoir, MS Math Tutors
South Quincy, MA Math Tutors
Squantum, MA Math Tutors
West Quincy, MA Math Tutors
Weymouth Lndg, MA Math Tutors
Wollaston, MA Math Tutors | {"url":"http://www.purplemath.com/east_milton_ma_math_tutors.php","timestamp":"2014-04-16T10:13:19Z","content_type":null,"content_length":"24251","record_id":"<urn:uuid:e5daf4a1-7b2a-4673-a627-51b7b3719e1f>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
Java’s integral types in pvs
- Nijmegen Institute of Computing and Information Sciences , 2003
"... This paper presents a historical overview of the work on Java program verification at the University of Nijmegen (the Netherlands) over the past six years (1997--2003). It describes the
development and use of the LOOP tool that is central in this work. Also, it gives a perspective on the field. ..."
Cited by 47 (5 self)
Add to MetaCart
This paper presents a historical overview of the work on Java program verification at the University of Nijmegen (the Netherlands) over the past six years (1997--2003). It describes the development
and use of the LOOP tool that is central in this work. Also, it gives a perspective on the field.
, 2004
"... We present an approach to integrating the refinement relation between infinite integer types (used in specification languages) and finite integer types (used in programming languages) into
software verification calculi. Since integer types in programming languages have finite ranges, in general they ..."
Cited by 16 (3 self)
Add to MetaCart
We present an approach to integrating the refinement relation between infinite integer types (used in specification languages) and finite integer types (used in programming languages) into software
verification calculi. Since integer types in programming languages have finite ranges, in general they are not a correct data refinement of the mathematical integers usually used in specification
languages. Ensuring the correctness of such a refinement requires generating and verifying additional proof obligations. We tackle this problem considering Java and UML/OCL as example. We present a
sequent calculus for Java integer arithmetic with integrated generation of refinement proof obligations. Thus, there is no explicit...
, 2003
"... Data[Semantics int] dt int exists : Axiom Exists (x: (pod data type?[Semantics int])): True dt int : (pod data type?[Semantics int]) End Cxx Int The identifiers with sshort refer to the
corresponding items from the semantics of signed short. First we declare the size of the value representation, ..."
Cited by 12 (6 self)
Add to MetaCart
Data[Semantics int] dt int exists : Axiom Exists (x: (pod data type?[Semantics int])): True dt int : (pod data type?[Semantics int]) End Cxx Int The identifiers with sshort refer to the corresponding
items from the semantics of signed short. First we declare the size of the value representation, this becomes important for the unsigned integer types, see below. We define the value type Semantics
int as a predicate subtype of the PVS integer type int. The axioms int longer and int contains sshort formalise the requirement that "[short int] provides at least as much storage as [int]" (3.9.1
- Proc. ZB 2005: Formal Specification and Development in B, volume 3455 of LNCS , 2005
"... Abstract. We describe a method for combining formal program development with a disciplined and documented way of introducing realistic compromises, for example necessitated by resource bounds.
Idealistic specifications are identified with the limits of sequences of more “realistic” specifications, a ..."
Cited by 8 (3 self)
Add to MetaCart
Abstract. We describe a method for combining formal program development with a disciplined and documented way of introducing realistic compromises, for example necessitated by resource bounds.
Idealistic specifications are identified with the limits of sequences of more “realistic” specifications, and such sequences can then be refined in their entirety. Compromises amount to focusing the
attention on a particular element of the sequence instead of the sequence as a whole. This method addresses the problem that initial formal specifications can be abstract or complete but rarely both.
Various potential application areas are sketched, some illustrated with examples. Key research issues are found in identifying metric spaces and properties that make them usable for refinement using | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1082761","timestamp":"2014-04-18T06:05:33Z","content_type":null,"content_length":"20436","record_id":"<urn:uuid:a1327c35-c638-4392-99b5-fe8439262b4c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00362-ip-10-147-4-33.ec2.internal.warc.gz"} |
A concert ticket that originally cost $18.50 is on sale for $14.00. What is the percent of decrease, rounded to the nearest tenth?
A concert ticket that originally cost $18.50 is on sale for $14.00. What is the percent of decrease, rounded to the nearest tenth? A. 22.6% B. 24.3% C. 28.8% D. 32.1%
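A worked solution (added for clarity; not part of the original thread): the decrease is 18.50 − 14.00 = 4.50, so the percent decrease is 4.50 / 18.50 ≈ 0.2432, i.e. about 24.3% — choice B.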
| {"url":"http://www.weegy.com/?ConversationId=7965A0F5","timestamp":"2014-04-18T00:23:57Z","content_type":null,"content_length":"43843","record_id":"<urn:uuid:dcd0bfe5-0ddb-425b-b625-eb8ca50d24d0>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
The Best Square
This is an open investigation. It can be taken to various levels of complexity; note that the construction of clear, algorithmic procedures is difficult but fascinating.
Imagine that you have been asked to write a computer program to determine how accurately people can draw, freehand, a square of side $50$cm on an interactive whiteboard.
How would you mathematically judge the accuracy of such a drawing? Create a well-defined process by which the computer would be able to compute your measure of accuracy. You can assume that the
whiteboard stores the freehand square internally as a set of pixels described by pairs of Cartesian coordinates.
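One simple measure — offered here only as an illustrative sketch, not as the intended answer, and assuming the board supplies the boundary pixels as an ordered closed polygon — compares the enclosed area to the perimeter. The quantity 16A/P² equals 1 for a perfect square; a circle gives 4/π ≈ 1.27 instead, so the same score also bears on the square-or-circle extension below.

#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Squareness score for an ordered, closed boundary polygon:
// shoelace formula for the area, summed segment lengths for the
// perimeter; 16*A/P^2 is 1 for a perfect square, ~1.27 for a circle.
double squareness(const std::vector<Pt>& p) {
    double twiceArea = 0.0, perim = 0.0;
    for (std::size_t i = 0; i < p.size(); ++i) {
        const Pt& a = p[i];
        const Pt& b = p[(i + 1) % p.size()];   // wrap to close the curve
        twiceArea += a.x * b.y - b.x * a.y;    // shoelace term
        perim     += std::hypot(b.x - a.x, b.y - a.y);
    }
    double area = std::fabs(twiceArea) / 2.0;
    return 16.0 * area / (perim * perim);      // 1.0 == perfect square
}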
Extensions: Consider the implementation of a similar process to judge the perfection of a free-hand circle; How might you tackle the problem of deciding if an image is a square or a circle?
Did you know ... ?
Image recognition is big business and very difficult: it involves cutting edge mathematics and computing. Imagine the intricacies involved in programming a computer to recognise typeface, handwriting
or even human faces. Some progress is now being made into recognising images of faces from the binary encoding of photographs. We wonder how rapidly this development will proceed and what the
resulting technological and sociological implications will be. | {"url":"http://nrich.maths.org/7068/index?nomenu=1","timestamp":"2014-04-17T15:30:49Z","content_type":null,"content_length":"4331","record_id":"<urn:uuid:aa2f3be1-8ca1-4b24-b228-727a996a4194>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00330-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: LR(k)-Parser wanted
Newsgroups: comp.compilers
From: grosch@cocolab.sub.com (Josef Grosch)
Keywords: parse
Organization: CoCoLab, Karlsruhe, Germany
References: 95-04-133
Date: Tue, 25 Apr 1995 11:12:39 GMT
Bernd Holzmueller (holzmuel@kafka.informatik.uni-stuttgart.de) wrote:
: Has anybody developed a parser-generator which generates parsers for the
: grammar class of LR(1), LR(k) for a fixed k > 1 or LR(k) for arbitrary k?
The parser generator 'lark' of the Cocktail Toolbox can generate parsers for
LR(1) grammars. There are several options:
- full LR(1): is only practical for medium size grammars, otherwise huge
amounts of memory and runtime are needed.
- partial LR(1): uses LALR(1) by default and LR(1) at states where necessary.
I have implemented my own version of state splitting. This works well
in simple cases and tends to degenerate to full LR(1) in case of many
LR conflicts. Perhaps I should try the algorithm of Pager, however,
I am not sure whether this is worth the effort.
Also, I started to work on an LR(k) version of 'lark'. It should be called
partial LR(k), because again LALR(1) is used by default and LR(k) only where
necessary. It tries LR(2), LR(3), etc. up to a given limit. Before trying
LR(k) an approximation of LR(k) is tried which is much more efficient to
compute. Again, conventional LR(k) for a even few states is expensive.
The analysis phase is more or less completed. A grammar for COBOL with more
than 100 LR(1) conflicts can be analyzed giving the result that half of
the conflicts can be solved using LR(2). The generation of parsers that
automatically use more than one lookahead token has to be finished.
For more information please contact
Josef Grosch
Mail: grosch@cocolab.sub.com
| {"url":"http://compilers.iecc.com/comparch/article/95-04-179","timestamp":"2014-04-19T02:09:57Z","content_type":null,"content_length":"5845","record_id":"<urn:uuid:c7cc59ba-e294-4081-856f-9f2871fe57d5>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help
Posted by Joanne on Sunday, September 20, 2009 at 7:09pm.
how can i solve this problem? The number of tickets sold each day for a upcoming peformance of Handel's Messiah is given by N(x)=0.4x^2+8x+11, where x us the number of days since the concert was
first announced. When will daily ticket sales peak and how many tickets will be sold that day.
• Math - Ms. Sue, Sunday, September 20, 2009 at 7:13pm
Your School Subject is Math, not "college."
• Math - Reiny, Sunday, September 20, 2009 at 7:49pm
since you labeled it "college" I will assume you know Calculus.
N'(x) = .8x + 8
= 0 for a max/min of N
.8x + 8 = 0
x will be a negative, so your problem makes no sense.
your function is an upwards parabola so it has a minimum, not a maximum.
perhaps you have a typo and the first term is -.4x^2
then the above result becomes
-.8x + 8 = 0
x = 10
and N(10) = -.4(100) + 80 + 11
= 51
51 tickets sold on day 10
There are more people singing Hallelujah.
• Math - jim, Sunday, September 20, 2009 at 7:49pm
Apart from the subject, that equation isn't going to peak for x > 0. I think you may have an error in the question.
Related Questions
College Algebra 2 - The number of tickets sold each day for an upcoming ...
college algebra - The number of tickets sold each day for an upcoming ...
Math/Algebra - The number of tickets sold each day for an upcoming performance ...
Algebra 2 - The number of tickets sold each day for an upcoming performance of ...
algebra - The number of tickets sold each day for an upcoming performance of ...
algebra - The number of tickets sold each day for upcoming perfomance of Handel'...
algebra - The number of tickets soldeach day for an upcoming performance of ...
math - The number of tickets sold each day for an upcoming performance of Hanel'...
Math - The number of tickets sold each day for a performance is given by N(x)=-0...
Math - The number of tickets sold each day for an upcoming performance is given ... | {"url":"http://www.jiskha.com/display.cgi?id=1253488157","timestamp":"2014-04-16T10:46:27Z","content_type":null,"content_length":"9400","record_id":"<urn:uuid:136cd1ce-3c1a-4ca4-af7e-e9e6b8dc71e7>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
21.9 Median and Percentiles
The median and percentile functions described in this section operate on sorted data. For convenience we use quantiles, measured on a scale of 0 to 1, instead of percentiles (which use a scale of 0
to 100).
Function: double gsl_stats_median_from_sorted_data (const double sorted_data[], size_t stride, size_t n)
This function returns the median value of sorted_data, a dataset of length n with stride stride. The elements of the array must be in ascending numerical order. There are no checks to see whether
the data are sorted, so the function gsl_sort should always be used first.
When the dataset has an odd number of elements the median is the value of element (n-1)/2. When the dataset has an even number of elements the median is the mean of the two nearest middle values,
elements (n-1)/2 and n/2. Since the algorithm for computing the median involves interpolation this function always returns a floating-point number, even for integer data types.
Function: double gsl_stats_quantile_from_sorted_data (const double sorted_data[], size_t stride, size_t n, double f)
This function returns a quantile value of sorted_data, a double-precision array of length n with stride stride. The elements of the array must be in ascending numerical order. The quantile is
determined by the f, a fraction between 0 and 1. For example, to compute the value of the 75th percentile f should have the value 0.75.
There are no checks to see whether the data are sorted, so the function gsl_sort should always be used first.
The quantile is found by interpolation, using the formula
quantile = (1 - \delta) x_i + \delta x_{i+1}
where i is floor((n - 1)f) and \delta is (n-1)f - i.
Thus the minimum value of the array (data[0*stride]) is given by f equal to zero, the maximum value (data[(n-1)*stride]) is given by f equal to one and the median value is given by f equal to
0.5. Since the algorithm for computing quantiles involves interpolation this function always returns a floating-point number, even for integer data types.
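A minimal usage sketch (not part of the manual text; the data values are made up for illustration): sort first, then query.

#include <stdio.h>
#include <gsl/gsl_sort.h>
#include <gsl/gsl_statistics.h>

int main(void) {
    double data[] = { 17.2, 18.1, 16.5, 18.3, 12.6 };
    size_t n = sizeof(data) / sizeof(data[0]);

    gsl_sort(data, 1, n);   /* both functions require ascending order */

    double median = gsl_stats_median_from_sorted_data(data, 1, n);
    double upperq = gsl_stats_quantile_from_sorted_data(data, 1, n, 0.75);

    printf("median = %g, 75th percentile = %g\n", median, upperq);
    return 0;
}
| {"url":"http://www.gnu.org/software/gsl/manual/html_node/Median-and-Percentiles.html","timestamp":"2014-04-18T07:27:47Z","content_type":null,"content_length":"6960","record_id":"<urn:uuid:54eefd0d-664f-4f13-a1e3-88de5d1ccf45>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00414-ip-10-147-4-33.ec2.internal.warc.gz"}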
The Conchoid Family of Curves - National Curve Bank
The Conchoid Family of Curves
Deposit #61
[Animation: the area of the loop, for the Conchoid of Nicomedes]
The Definition:
Let S be any curve and let A be a fixed point. If a straight line is drawn through A to meet the curve at Q, and if P and P' are points on this line such that PQ =
P'Q = a constant term, the locus of the points P and P' is called a conchoid of the curve with respect to the fixed point A.
The Conchoid family of curves is easily entered and modified on a graphing calculator.
Be sure to use (1/cos) for the secant term.
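In polar coordinates, taking the fixed point A at the pole and the fixed line x = a as the curve S, the Conchoid of Nicomedes is r = a sec θ + b, with the constant b = PQ taking either sign for the two branches — hence the 1/cos term above. (Equation supplied for reference; it is the standard polar form, not quoted from this page.)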
│ "The invention of the conchoid ( mussel-shell shape ) is ascribed to Nicomedes by Pappus and other classical authors; it was a favourite with the mathematicians of the seventeenth century as a │
│ specimen for the new method of analytical geometry and calculus. It could be used (as was the purpose of the invention) to solve the two problems of doubling the cube and of trisecting an angle; │
│ and hence for every cubic or quartic problem. For this reason Newton suggested that it should be treated as a standard curve." │
│ │
│ E. H. Lockwood │
Historical Sketch
Nicomedes (ca. 225 BC) is credited with being the first to investigate the conchoid, a name from the Greek meaning "shell-like" or "shell form." He sought to find two mean proportions between given
lengths in order to solve the famous Delian problem.
The Delian problem is also known as duplication of the cube; in other words, finding the edge of a cube having a volume exactly twice that of a given cube. This is one of the three famous
constructions dating from antiquity. If, contrary to Euclidean assumptions, we permit ourselves to mark a straight edge, the conchoid may, in fact be applied to these famous problems.
With adjustments in various constant terms, the conchoid is modified into a circle, spiral, limaçon, or cardioid. Moreover, we find equations for the cochleoid and conchal as well as focal conchoids of conic sections in the literature. But the most famous are the . . . . . .
• Conchoid of Dürer
• Conchoid of de Sluze
• Limaçon of Pascal, father of Blaise Pascal
• Conchoid of Külp
• and of course, the oldest, the Conchoid of Nicomedes.
To underscore the historical importance of the conchoid, we have selected Figure #139 of John Colson's translation of Maria Gaetana Agnesi's Instituzioni analitiche, now generally recognized as the first book of mathematics written by a woman. Colson held the prestigious Lucasian chair at Cambridge; his translation of Agnesi's widely acclaimed treatise on calculus, written one-half century earlier (1748), appeared in 1801. Unfortunately, his translation of the name of the curve now known in English-speaking countries as the Witch of Agnesi was a slight, but lasting, error. However, his Conchoid of Nicomedes was accurate and an indication of the enduring fame of this popular curve. Colson's EXAMPLE IV speaks for itself.
Reproduced with permission from the Rare Books Division, Dept. of Rare Books and Special Collections, Princeton University Library.
Please observe that Agnesi first sketched a circle to generate both her curve (the "Witch") and the conchoid before proceeding to give a proof.
Useful Links and Books
http://www-history.mcs.st-and.ac.uk/history/Curves/Conchoid.html
http://mathworld.wolfram.com/ConchoidCurve.html
Eves, Howard, An Introduction to the History of Mathematics, 6th ed., Saunders College Publishing, 1990.
Gray, Alfred, Modern Differential Geometry of Curves and Surfaces with MATHEMATICA®, 2nd ed., CRC Press, 1998, p. 898.
Lockwood, E. H., A Book of Curves, Cambridge University Press, 1961.
Shikin, Eugene V., Handbook and Atlas of Curves, CRC Press, 1995.
Yates, Robert, Curves and Their Properties, The National Council of Teachers of Mathematics, 1952.
MATHEMATICA® code and animation contributed by Gustavo Gordillo, 2006.
Wolfram Demonstrations Project
One-Dimensional Heat Conduction with Temperature-Dependent Conductivity
Consider the following nonlinear boundary value problem , with , , and . This equation with the boundary conditions (BCs) describes the steady-state behavior of the temperature of a slab with a
temperature-dependent heat conductivity given by .
Introducing nondimensional temperature and position using and , the governing equation and BCs become , with , , and . Without loss of generality, set . Then this problem admits an analytical
solution given by .
We use Galerkin's method to find an approximate solution in the form . The unknown coefficients of the trial solution are determined using the residual and setting the integrals for . You can vary
the order of the trial solution, . The Demonstration plots the analytical solution (in gray) as well as the approximate solution (in dashed cyan). The polynomial expression of the trial solution is
displayed on the same plot. In addition, the difference between the exact and approximate solutions, , is plotted. This error is seen to decrease as the order of the trial solution, , is increased.
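For reference, a generic sketch of Galerkin's conditions (the page's formulas did not survive extraction, so the exact residual depends on the missing governing equation): with a trial solution $\tilde\theta(\xi) = \sum_{i=1}^{N} c_i\,\phi_i(\xi)$ and residual $R(\xi; c_1, \dots, c_N)$ obtained by substituting $\tilde\theta$ into the governing equation, the unknown coefficients satisfy

$\int_0^1 R(\xi; c_1, \dots, c_N)\,\phi_j(\xi)\,d\xi = 0, \qquad j = 1, \dots, N,$

which gives $N$ equations for the $N$ unknown coefficients.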
[1] W. F. Ramirez,
Computational Methods for Process Simulation
, 2nd ed., Oxford: Butterworth–Heinemann, 1997. | {"url":"http://www.demonstrations.wolfram.com/OneDimensionalHeatConductionWithTemperatureDependentConducti/","timestamp":"2014-04-20T23:27:04Z","content_type":null,"content_length":"46179","record_id":"<urn:uuid:13a67152-fb35-4a39-8e2e-464333ff2002>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts from January 2012 on Mathematics, Learning and Web 2.0
Top Posts and pages is proving popular – this is updated regularly.
Mathematics for Students has had several updates recently, a series of Explore pages direct students to resources for exploring a topic.
The Notes and Videos pages have both been updated, check Just Math Tutorials for example. The Useful Links page has been updated several times, including a new page directing students to commonly
required constructions on Math Open Reference.
Mathematics Calculators and Tools has several online calculators.
Mathematics Games has a collection of favourite games – many from Nrich. Several games to help students practise tables have been added recently.
There are many resources for starting and ending lessons described on Mathematics – Starters and Plenaries.
Plans and Elevations – Wisweb Applets
This week I need some resources to demonstrate plans and elevations. There are several Wisweb applets from the Freudenthal Institute which are excellent for this topic. These work well on the
interactive whiteboard for demonstrating to students, they are also ideal for students to explore themselves.
Cube houses shows several models with their elevations, select drawing then 3d-model to give a model you can rotate to generate different views.
Readers familiar with the excellent Improving Learning in Mathematics materials may recognise the applets for building houses; these are described in SS6 – Representing 3D shapes which has suggested
lesson activities and describes the applets (see pages 4 and 5).
Building Houses allows you to create buildings and see the plan, front and side elevations as you build. (If that link does not work – try this).
You can add (build) or remove (break down) bricks and control the size of the square base.
Building houses with side views challenges students to construct 3D models given the plans and elevations; the task is made more challenging by specifying that as few cubes as possible should be
Note that in order to achieve the minimum number of cubes, ‘floating’ cubes are needed.
Note that these resources have been added to the ‘explore‘ series of pages on the companion blog for students.
Update - these resources worked well with my students – they particularly enjoyed the challenge of trying to build models using the minimum number of cubes!
Readers interested in the Improving Learning in Mathematics materials further may find IWB Resources useful as it has a flipchart and notebook file for each of the activities. This site is a result
of the NCETM research project: Enabling enhanced mathematics teaching with some interactive whiteboards (September 2006- September 2008) and is supported by the IWB research team at Keele University
and the www.iwbmaths.co.uk team. See also the link on the reading page to Malcolm Swan’s Improving Learning in Mathematics – Challenges and Strategies.
Numbers in love with themselves!
Watch the video from Numberphile to learn why 153 is in love with itself!
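(For the curious: 153 is "in love with itself" because it equals the sum of the cubes of its own digits, 1^3 + 5^3 + 3^3 = 1 + 125 + 27 = 153; such numbers are often called narcissistic. A quick Python check:)

def loves_itself(n):
    # true when n equals the sum of the cubes of its digits
    return n == sum(int(d)**3 for d in str(n))

print([n for n in range(2, 1000) if loves_itself(n)])   # [153, 370, 371, 407]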
Numberphile launched their first video appropriately on 11th November 2011.
The Numberphile site, which has a series of videos from Brady Haran, has been added to the Videos collection in this Evernote notebook.
Another recent addition to this collection is PatrickJMT’s – Just Maths Tutorials – an extensive collection for students.
Note that there are suggested collections of videos to direct students to on the videos page of Mathematics For Students.
See also this related post on Mathematics Videos.
Happy 2012!
Happy 2012! The above image is from Jesse Vig’s geoGreeting site where you can enter a message and obtain a link which you could send in an email. It is also possible to send as an E-card. Jesse Vig
noticed whilst working on a Google Maps project that a number of buildings looked like letters of the alphabet when viewed from above, and his website was born!
A rather novel way to obtain images of numbers!
So to complete the new year greeting we need of course some number properties of 2012.
We can turn to the Mathematical Association's number a day blog.
Or we could use Tanya Khovanova’s site: Number Gossip where we learn that 2012 is evil!
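(In Number Gossip's terminology an "evil" number is one whose binary expansion contains an even number of 1s; a one-line check, assuming Python:)

print(bin(2012), bin(2012).count('1'))   # 0b11111011100 has eight 1s, so 2012 is evil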
WolframAlpha can of course supply some number properties of 2012 or provide a calendar for the year …or even send us best wishes for the new year! Wishing everyone a great 2012! | {"url":"http://colleenyoung.wordpress.com/2012/01/","timestamp":"2014-04-19T07:16:35Z","content_type":null,"content_length":"87501","record_id":"<urn:uuid:08a9bf17-427c-4dd1-94ab-6710671e229d>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tommy has 4 different shirts, which he can wear with 3 different trousers. In how many ways can Tommy wear these shirts and trousers?
There are 10 English books and 12 French books on a shelf. In how many ways can one English book and one French book be chosen?
If each of 7 men shakes hands with 14 women at a party, find the number of handshakes made by the men with the women.
Francis had to go from Rosedale to Milan through Sheldon. There are 5 routes from Rosedale to Sheldon and 4 routes from Sheldon to Milan. How many different routes are available for Francis to reach Milan?
Jessica wants to attend dance and music classes after school. There are 4 dance teachers and 2 music teachers to choose from. How many different choices are available for Jessica to choose the two teachers?
If there are 9 greeting cards and 4 gift items for Paula to choose from, find the number of ways of choosing a card with a gift item.
A dentist examines 23 patients per day for 4 days. Find the total number of patients examined by him.
Nina wants to buy a packet of popcorn and a packet of potato chips. Both the popcorn and the chips are available in 3 different sizes: small, medium, and large. In how many ways can Nina buy the two packets?
There are 4 different types of plates, with 6 colors in each type, on a dining table. In how many different ways can you choose a plate?
There are 4 majors available in a college. Each major has 83 students. In how many ways can you select a student in the college?
A supermarket sells milk bottles in 2 different sizes from 4 different brands. How many different choices does a customer have?
Bedspreads made with 2 types of fabric, with 6 colors in each type, are available. In how many different ways can a bedspread be chosen?
Tanya visited 17 music stores. Each music store has a collection of 450 music albums. In how many ways can a music album be selected?
There are 6 jewelry boxes in a wardrobe. Each jewelry box has 4 jewelry sets. In how many ways can you choose a jewelry set?
Each of the 11 supermarkets in a county sells 8 different types of candles. In how many ways can you choose a candle?
There are 4 entry doors and 4 exit doors in a movie theater. Find the number of ways in which a person can enter and leave the theater.
A basket contains 46 different fruits. If there are 198 such baskets, find the probability of choosing a banana from a basket.
A dentist examines 25 patients per day for 6 days. Find the total number of patients examined by him.
Stephanie wants to buy a packet of popcorn and a packet of potato chips. Both the popcorn and the chips are available in 3 different sizes: small, medium, and large. In how many ways can Stephanie buy the two packets?
There are 4 majors available in a college. Each major has 73 students. In how many ways can you select a student in the college?
Bedspreads made with 3 types of fabric, with 2 colors in each type, are available. In how many different ways can a bedspread be chosen?
There are 8 jewelry boxes in a wardrobe. Each jewelry box has 8 jewelry sets. In how many ways can you choose a jewelry set?
A supermarket sells milk bottles in 2 different sizes from 2 different brands. How many different choices does a customer have?
Each of the 23 supermarkets in a county sells 20 different types of candles. In how many ways can you choose a candle?
Diane visited 8 music stores. Each music store has a collection of 153 music albums. In how many ways can a music album be selected?
There are 2 entry doors and 3 exit doors in a movie theater. Find the number of ways in which a person can enter and leave the theater.
Using a tree diagram, find the number of ways in which 3 prizes (first, second, and third) can be distributed to the 3 students, John, Mike, and Joseph, who participated in an essay writing competition.
At a Chinese outlet, the menu card says noodles are available in two types, vegetarian and chicken. Both types of noodles are available as wet and dry. How many types of noodles are available to choose from?
If there are 6 greeting cards and 3 gift items for Paula to choose from, find the number of ways of choosing a card with a gift item.
Using the tree diagram, find the probability of choosing a yellow-colored tie at random.
There is a collection of cassettes and compact discs in a music store, as shown in the tree diagram. What is the probability of selecting a French audio cassette?
Andy has 4 different shirts, which he can wear with 3 different trousers. In how many ways can Andy wear these shirts and trousers?
There are 4 different types of plates, with 5 colors in each type, on a dining table. In how many different ways can you choose a plate?
A basket contains 35 different fruits. If there are 340 such baskets, find the probability of choosing a banana from a basket.
Charles had to go from Rosedale to Taloga through Lake City. There are 5 routes from Rosedale to Lake City and 5 routes from Lake City to Taloga. How many different routes are available for Charles to reach Taloga?
There are 20 English books and 7 French books on a shelf. In how many ways can one English book and one French book be chosen?
If each of 8 men shakes hands with 15 women at a party, find the number of handshakes made by the men with the women.
Laura wants to attend dance and music classes after school. There are 8 dance teachers and 7 music teachers to choose from. How many different choices are available for Laura to choose the two teachers?
If you choose a medal at random, then what is the probability that you choose a gold medal? | {"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgeagxkjjfk&.html","timestamp":"2014-04-16T07:20:33Z","content_type":null,"content_length":"81805","record_id":"<urn:uuid:62982582-f010-4a06-a2b3-e90d26af0dfa>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00261-ip-10-147-4-33.ec2.internal.warc.gz"} |
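The counting questions above all turn on the multiplication (fundamental counting) principle: if one choice can be made in m ways and an independent second choice in n ways, the pair of choices can be made in m * n ways. A minimal Python sketch for the shirts-and-trousers question (the item names are invented for illustration):

from itertools import product

shirts = ['S1', 'S2', 'S3', 'S4']   # 4 different shirts
trousers = ['T1', 'T2', 'T3']       # 3 different trousers

# each (shirt, trousers) pair is one distinct way to dress
outfits = list(product(shirts, trousers))
print(len(outfits))                 # 12 = 4 * 3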
No Bayesians in foxholes, or Putting “data” as a keyword in an applied statistics paper is something like putting “physics” as a keyword in a physics paper
I came across this article by the late Leo Breiman from 1997, “No Bayesians in foxholes.” It’s fun sometimes to go back and see what people were saying nearly a decade ago. This one is particularly
interesting because it presents a strongly anti-Bayesian position which used to be common in statistics (see, for example, various JRSS-B discussions during the 70s and 80s) but you don’t really hear
about anymore. Breiman wrote:
The Current Index of Statistics lists all statistics articles published since 1960 by author, title, and key words. The CIS includes articles from a multitude of journals in various
fields—medical statistics, reliability, environmental, econometrics, and business management, as well as all of the statistics journals. Searching under anything that contained the word “data” in
1995–1996 produced almost 700 listings. Only eight of these mentioned Bayes or Bayesian, either in the title or key words. Of these eight, only three appeared to apply a Bayesian analysis to data
sets, and in these, there were only two or three parameters to be estimated.
Actually, our toxicology paper appeared in the Journal of the American Statistical Association in 1996—how could Breiman have missed that one (our model had 90 parameters, and the paper had a
detailed discussion of why the prior distribution was needed in order to get reasonable results)? Was he restricting himself to papers with “data” in their keywords? Putting “data” as a keyword in an
applied statistics paper is something like putting “physics” as a keyword in a physics paper!
OK, OK . . .
My point here isn’t to pick on Breiman, who isn’t around to defend himself (when we were both at Berkeley, I tried to talk with him about Bayesian methods, but we never found the time for the
conversation, something I strongly regret in retrospect), but rather to reiterate a point I’ve made elsewhere, which is how our attitudes toward methods are so strongly shaped by our direct
experiences. Continuing my quoting from the Breiman article:
I [Breiman] spent 13 years as a full-time consultant and continue to consult in many fields today—air-pollution prediction, analysis of highway traffic, the classification of radar returns,
speech recognition, and stockmarket prediction, among others. Never once, either in my work with others or in anyone else’s published work in the fields in which I consulted, did I encounter the
application of Bayesian methodology to real data.
. . .
All it would take to convince me [about Bayesian methods] are some major success stories in complex, high-dimensional problems where the Bayesian approach wins big compared to any frequentist
approach. . . . A success story is a tough problem on which numbers of people have worked where a Bayesian approach has done demonstrably better than any other approach.
Now that these success stories are out there (and are reachable with almighty Google—which puts the Current Index of Statistics to shame—or by flipping through various textbooks), I suppose Breiman
would have been convinced. What’s funny is that he couldn’t just say that he had made great contributions to statistics, and others had made important contributions to applied problems using Bayesian
methods. He had to say that “when big, real, tough problems need to be solved, there are no Bayesians.”
I think that a more pluralistic attitude is more common in statistics today, partly through the example of people like Brad Efron who’ve had success with both Bayesian and non-Bayesian methods, and
partly through the pragmatic attitudes of computer scientists, who neither believe the extreme Bayesians who told them that they must use subjective Bayesian probability (or else—gasp—have incoherent
inferences) nor the anti-Bayesians who talked about “tough problems” without engaging with research outside their subfields.
My impression is that there’s a lot more openness now, and a willingness in evaluating methods to go beyond the two poles of pure subjectivism (like those Bayesians at the 1991 Valencia meeting who
were opposed in principle to checking model fit) and barren significance testing (like those papers that used to appear in the statistical journals with tables and tables of simulations of coverage
probabilities). It’s refreshing to see the errors of even the experts of a decade ago—perhaps this will give us courage to make our own rash statements which can in their turn be overtaken by
3 Comments
1. David Draper and I wrote the short essay that prompted Breiman's remarks. With hindsight, we were rather stuffy and formal in our writing – Breiman obviously had some fun with his retort. We
tried to get them to publish our response to Breiman but failed.
It would be fun to do again (although, alas, not with Breiman this time)
2. That's an interesting observation you make: "What's funny is that he couldn't just say that he had made great contributions to statistics, and others had made important contributions to applied
problems using Bayesian methods. He had to say that 'when big, real, tough problems need to be solved, there are no Bayesians.'"
Why do you think that is?
3. Seth,
Perhaps somebody who knew Breiman well can answer this one. . . my guess is that, first, he was reacting to the state of Bayesian statistics from the 1970-1980s, when Bayes saw many theoretical
developments (e.g., Efron and Morris, 1973) and much discussion in the statistical world (e.g., Lindley and Smith, 1972), but where the practical developments in data analysis were out of his
view (for example, by Novick, Rubin, and others in psychometrics, and by Sheiner, Beal, and others in pharmacology). So from his perspective, Bayesian statistics was full of theory but not much practical application.
That said, I think he didn't try very hard to look for big, real, tough problems that were solved by Bayesian methods. (For example, he could have just given me a call to see if his Current Index
search had missed anything.) I think he'd become overcommitted to his position and wasn't looking for disconfirming evidence. Also, unfortunately, he was in a social setting (the UC Berkeley
statistics department) which at that time encouraged outrageous anti-Bayesian attitudes. As I said in my entry above, this sort of thing just looks silly in retrospect, but at the time it was an
accepted position to hold in certain circles. | {"url":"http://andrewgelman.com/2006/12/20/no_bayesians_in/","timestamp":"2014-04-20T20:55:07Z","content_type":null,"content_length":"30681","record_id":"<urn:uuid:1bf2a331-01b0-47ba-9962-697dd6054416>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00566-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ball Bearings
The wheels of a car, or a bicycle, run smoothly because they are separated from the axle of the wheel by a ring of ball bearings as illustrated below. Of course, the wheel turns smoothly because the
ball bearings fit exactly between the hub of the wheel and the axle with no room to move about except, of course, to rotate. It is this rotation that keeps the friction to a minimum, and so makes the
wheel turn smoothly.
Suppose that $a$ is the radius of the axle, $b$ is the radius of each ball-bearing, and $c$ is the radius of the hub (see the figure). What are the ratios ${a\over b}$, ${b\over c}$ and ${c\over a}$
when there are exactly three ball-bearings? What are these ratios when there are exactly four ball-bearings? Try to explain why the number of ball bearings determines the ratio ${c\over a}$ exactly.
Can you find a formula for ${c\over a}$ in terms of $n$ when there are exactly $n$ ball-bearings? | {"url":"http://nrich.maths.org/295/index?nomenu=1","timestamp":"2014-04-16T22:19:45Z","content_type":null,"content_length":"4475","record_id":"<urn:uuid:52fc1408-15e5-4a92-9efa-aa6de9b84b73>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00141-ip-10-147-4-33.ec2.internal.warc.gz"} |
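(One route into the last question, sketched under the standard tangency assumptions: each ball's centre lies at distance $a+b$ from the axle's centre and adjacent centres are separated by the angle $2\pi/n$, so the condition that neighbouring balls touch gives $\sin(\pi/n)={b\over a+b}$; combined with $c=a+2b$ this yields ${c\over a}={1+\sin(\pi/n)\over 1-\sin(\pi/n)}$. A quick numerical check in Python:)

from math import sin, pi

def hub_to_axle_ratio(n):
    # c/a = (1 + s)/(1 - s) with s = sin(pi/n), from b/(a + b) = sin(pi/n) and c = a + 2b
    s = sin(pi / n)
    return (1 + s) / (1 - s)

for n in (3, 4, 12):
    print(n, hub_to_axle_ratio(n))   # n = 4 gives 3 + 2*sqrt(2), about 5.83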
st: Re: instrumental variables estimation problems
From: Kit Baum <baum@bc.edu>
To: statalist@hsphsun2.harvard.edu
Subject: st: Re: instrumental variables estimation problems
Date: Tue, 8 Dec 2009 06:49:29 -0500
sysuse auto,clear
ivregress 2sls price (mpg = weight length)
// these are the proper 2SLS resids: y - orig X * 2SLS beta
predict double eps, res
estat overid
// Sargan stat by hand
reg eps weight length
// compute the uncentered r^2 from this regression
predict double yhat, xb
gen double syhat2 = sum(yhat^2)
gen double seps2 = sum(eps^2)
scalar ur2 = syhat2[_N]/seps2[_N]
// Sargan: N * uncentered r^2
di e(N)*ur2
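// this value should match the Sargan statistic reported by -estat overid- above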
On Dec 8, 2009, at 2:33 AM, Tunga wrote:
> * Regarding Question 1: In fact, my question is slightly wrong but I still
> think that it has nothing to do with the ivreg or ivreg2 command. Let me
> state my question again.
> The model is the following. depend = b1 + b2 endo + u. I have two
> instruments for endo and I want to test if they are exogenous. The idea of
> the test is that you regress the residuals from this model (using iv
> estimates of b1 and b2) on the instruments to see if the instruments have
> explanatory power. Hence the steps are as follows:
> Step 1. Obtain the IV estimates of b1 and b2. Here I use ivregress 2sls
> depend (endo = instone insttwo).
> Step 2. Obtain the residuals. Here the point is that I wish to get the
> residuals using these two iv estimates and the variable 'endo' and NOT the
> 'predicted endo' from the first stage of the 2SLS. Is it ok if I just use
> the command "predict resid, residuals"? Or is this command producing
> residuals using the IV estimates of b1 and b2 and the variable 'predicted
> endo'? This question is not about the updated ivreg command. It is about
> getting the correct residuals. (Note: there can be a direct, ready-made test
> for instrument exogeneity but I don't want to follow them. The test I am
> following here is intuitive and therefore I wish to follow the steps I lay
> down here.)
Kit Baum | Boston College Economics & DIW Berlin | http://ideas.repec.org/e/pba1.html
An Introduction to Stata Programming | http://www.stata-press.com/books/isp.html
An Introduction to Modern Econometrics Using Stata | http://www.stata-press.com/books/imeus.html
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-12/msg00296.html","timestamp":"2014-04-16T07:19:23Z","content_type":null,"content_length":"7947","record_id":"<urn:uuid:8d2c5593-3ef9-454e-86af-d6a73160fd9f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00179-ip-10-147-4-33.ec2.internal.warc.gz"} |