Numbers of Sides
The Pythagorean Theorem
A 4,000-Year History
by Eli Maor
Princeton, 286 pp., $24.95
The Pythagorean Theorem is perhaps the one mathematical fact an Average Joe might be able to name. It is ancient. Evidence of the Pythagorean Theorem can be found on Babylonian clay tablets from 1800
B.C.; versions exist in manuscripts from India circa 600 B.C. and from the Han Dynasty. The first rigorous proof is ascribed to Greeks of the school of Pythagoras, in the mid-6th century B.C.
Eli Maor says that the Pythagorean Theorem is "arguably the most frequently used theorem in all of mathematics" and makes that the premise, or MacGuffin, for touring a swath of mathematical history.
He aims at the general reader, wishing to provide both an intellectual adventure, complete with proofs, and a genial ramble. (An appended chronology notes that soon after "Einstein publishes his
general theory of relativity ... Stanley Jashemski, age nineteen, of Youngstown, Ohio, proposes possibly the shortest known proof of PT.")
He begins, regrettably, with a sin of anachronism--miscasting the original meaning of the Theorem into modern terms. Euclid's famous treatise on geometry presents us with a fact about area: If we
draw a square on each side of a right triangle, the area of the square on the hypotenuse is the total of the areas of the squares on the other sides. Nowadays we are inclined to express this as an
equation--a^2 + b^2 = c^2--in which a, b, and c are numbers representing the lengths of the triangle's sides. Maor treats these as interchangeable formulations, and from the modern point of view they are.
But Pythagoras and Euclid would find the modern version unintelligible, for reasons interesting and deep. They distinguished numbers, which are "multitudes" (that can be counted), from lengths,
areas, and volumes, which are continuously varying "magnitudes." Multitudes differ essentially from magnitudes. And magnitudes themselves come in different kinds. We may meaningfully compare one line
segment to another line segment (is it greater?) but not to a different kind of magnitude, such as a circle or a cube.
It makes sense to total the magnitudes of two squares, but not to total a square with a line. It makes sense to multiply numbers, obtaining another number as a result--three groups of four things
amount to 12 things altogether. But it seems merely confused to speak of multiplying one line segment by another, of multiplying by something that is not a multiplicity.
Presented with these careful distinctions, and the rigorous and brilliant Greek science that respected them, a reader might suffer a profitable moment of uncertainty and discomfort, wondering how he
could have thought in any other way--uncertain, at least for that moment, how there could be any coherent sense in (or any use for) some mongrel notion of "number" and practice of "algebra" that
embraced the counting numbers and magnitudes of all kinds. Yet the massive triumphs of mathematical physics, for one thing, assure us that there can be.
We can't solve that problem here--to begin with, a rigorous mathematical account of the modern notion of number is highly technical--but it is illuminating to consider a simple strategy that holds
out hope of dissolving it: In ordinary speech we don't say that the length of a line is "three"--we say that it's "three feet" or "three furlongs" or some such thing. We choose a unit and measure the
line as some multiple of the unit--at least, when it comes out exactly.
That suggests a way to unify the distinct quantitative ideas of multitude and magnitude case-by-case: Given a right triangle, for example, choose a unit of which all three sides are exact multiples.
That assigns a number to each side and those numbers will satisfy a^2 + b^2 = c^2.
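For instance, a right triangle with sides of 3, 4, and 5 feet admits the foot as such a unit, and indeed 3^2 + 4^2 = 9 + 16 = 25 = 5^2.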
This strategy fails, for an astonishing reason: The innocent assumption that we can always find such a unit is false. For example, there is no unit of which both the sides and the diagonal of a
square are exact multiples. The Pythagoreans not only discovered that but proved it. Here shines one particular brilliance of Greek mathematics: that its results are established by proof. And so far
as we know, the notion of mathematical proof--of developing an entire body of knowledge by rigorous deduction from a set of first principles--has emerged only once in human history.
For the Pythagoras cult, this had a tragic aspect. Scholars dispute the precise beliefs of Pythagoras and his followers, but agree that they included a mystical conviction that numbers
(multitudes) are, in some sense, the fundamental constituents of the world.
This seems to have received powerful support from the discovery, attributed to Pythagoras, that basic musical intervals are "rational." Stop a tensed violin string somewhere in the middle and
consider the difference between the pitches produced by plucking its two parts. If the ratio between the lengths of those parts is two to one, the pitch difference is an octave; if two to three, it's
a perfect fifth; and so on.
Note that the ratio of two lengths is the same as the ratio of two counting numbers precisely when there is a unit of which both lengths are exact multiples. An irrefutable proof that the sides and
diagonal of a square are, in this sense, "irrational"--and that irrationality is an essential feature of the mathematical world--can only have been a metaphysical blow.
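The argument itself is brief: if the side and diagonal of a square were both exact multiples of a common unit, the ratio of diagonal to side would be a fraction p/q of counting numbers in lowest terms with p^2 = 2q^2; then p^2 is even, so p is even, say p = 2r, whence q^2 = 2r^2 makes q even as well, contradicting lowest terms.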
The first section of Maor's book stretches from the Babylonians to Archimedes, the greatest ancient mathematician, and one of the greatest ever. Insofar as it has a single theme, this section asks
which societies knew of the Pythagorean Theorem, and in what form and in what way they knew it. The next thousand or so years get brief treatment as an interlude, an era of "translators and
commentators"--illustrated by episodes from Chinese, Hindu, and Arabic, as well as Western, mathematics.
The final section begins in the mid-16th century with François Viète--often regarded as the first modern mathematician--and tells two related stories. One is the introduction of infinite methods and
infinities into mathematics--controversial but successful innovations that had to wait 300 years for a rigorous basis. The other develops the "non-Euclidean" geometry that plays a central role in
modern physics as the mathematical setting for Einstein's theory of general relativity.
The Pythagorean Theorem holds for figures drawn on a flat surface--that is, for the objects of Euclidean geometry. In other settings--figures drawn on the surface of a sphere, for example--it fails.
The Theorem is a characteristic of flatness, hence its ubiquity: The calculations of trigonometry, of the lengths of lines (straight or curved), etc., are all intimately tied to it. The converse
insight, that the geometry of a surface can be captured by describing the ways in which it deviates from the Pythagorean Theorem, makes it possible to represent and reason about unvisualizable
geometries such as the "curved space-time" of Einstein's theory.
Maor ventilates these stories with frequent digressions. One (rather dull) chapter called "The Pythagorean Theorem in Art, Poetry, and Prose" provides a laundry list. There is the patter song from
The Pirates of Penzance in which the modern major general boasts of his acquaintance "with many cheerful facts about the square of the hypotenuse." There are encomia to Pythagoras from Johannes
Kepler and (descending from the sublime) Jacob Bronowski. There is the famous story of Thomas Hobbes's exclamation, on first seeing the Pythagorean Theorem: "By God, this is impossible!"
Another chapter presents excerpts from the curious life work of Elisha Scott Loomis, who undertook to gather all known proofs of the Pythagorean Theorem--of which he found 371, including one by
President James Garfield (before his election). Also included are brain teasers, mathematical curiosities, and a short essay on the possibility of composing a message that would be understood by
intelligent life in distant galaxies.
Bits of potted history serve as glue. Not all of this is reliable, as when Plato's contribution to geometry is described as "his recognition of its importance to learning in general, to logical
thinking, and, ultimately, to a healthy democracy." It is not likely that Plato was fond of diseased democracy, but safe to say that promoting democracy was not one of his concerns. Plato's interest
in geometry was metaphysical: The relation between ideal geometric figures (grasped by reason) and the imperfect copies that we draw or otherwise encounter through our senses prefigures the relation
between a Platonic form, such as Goodness, and its imperfect realizations in the world of ordinary experience.
It is hard to predict who would be charmed by The Pythagorean Theorem, but all will recognize the author's enthusiasm for his subject and his respect for the reader: Three cheers for including those proofs.
David Guaspari is a writer in Ithaca, New York.
Eigen vector
From Encyclopedia of Mathematics
of an operator $A$ mapping a vector space $L$ over a field $K$ into itself
A non-zero vector $x \in L$ that is mapped by $A$ to a vector proportional to it: $Ax = \lambda x$, $\lambda \in K$.
The coefficient $\lambda$ is called the eigen value of $A$ corresponding to the eigen vector $x$; for a fixed eigen value, the eigen vectors together with the zero vector form a subspace, the eigen space of $A$.
In fact, the existence of an eigen vector for operators on infinite-dimensional spaces is a fairly rare occurrence, although operators of special classes which are important in applications (such as
integral and differential operators) often have large families of eigen vectors.
Generalizations of the concepts of an eigen vector and an eigen space are those of a root vector and a root subspace. In the case of normal operators on a Hilbert space (in particular, self-adjoint
or unitary operators), every root vector is an eigen vector and the eigen spaces corresponding to different eigen values are mutually orthogonal.
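A minimal finite-dimensional sketch of this last point in Python, using a real symmetric (hence normal) matrix:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # symmetric, eigen values 1 and 3
vals, vecs = np.linalg.eigh(A)    # columns of vecs are eigen vectors

for i, lam in enumerate(vals):
    v = vecs[:, i]
    assert np.allclose(A @ v, lam * v)    # A v = lambda v

print(vals)                        # [1. 3.]
print(vecs[:, 0] @ vecs[:, 1])     # ~0: eigen spaces for 1 and 3 are orthogonal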
[1] K. Yosida, "Functional analysis" , Springer (1980) pp. Chapt. 8, §1
[2] L.A. [L.A. Lyusternik] Lusternik, "Elements of functional analysis" , Hindushtan Publ. Comp. (1974) (Translated from Russian)
[3] L.V. Kantorovich, G.P. Akilov, "Functional analysis" , Pergamon (1982) pp. Chapt. 13, §3 (Translated from Russian)
It is also quite common to write eigenvector, eigenspace, etc., i.e. not two words but one.
Eigen vectors are sometimes also called characteristic vectors, eigen elements, eigen functions, or proper vectors; root vectors are usually called principal vectors in the Western literature. [a1]
and [a2] are good general Western references.
Various notions of generalized eigen vectors (or improper eigen functions) exist in the literature; e.g. see [a3] and [a4] for generalized eigen vectors and eigen function expansions in the context
of rigged Hilbert spaces (Gel'fand triplets; see also Rigged Hilbert space).
[a1] N. Dunford, J.T. Schwartz, "Linear operators. Spectral theory" , 2 , Interscience (1963) pp. Chapt. 10, §3
[a2] A.E. Taylor, D.C. Lay, "Introduction to functional analysis" , Wiley (1980) pp. Chapt. 5
[a3] I.M. Gel'fand, N.Ya. Vilenkin, "Generalized functions. Applications of harmonic analysis" , 4 , Acad. Press (1964) pp. Chapt. 1, §4 (Translated from Russian)
[a4] Yu.M. [Yu.M. Berezanskii] Berezanskiy, "Expansion in eigenfunctions of selfadjoint operators" , Amer. Math. Soc. (1968) pp. Chapt. 5, §2 (Translated from Russian)
[a5] S. Lang, "Linear algebra" , Addison-Wesley (1973)
How to Cite This Entry:
Eigenvector. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Eigenvector&oldid=13516
Sudbury Algebra Tutor
Find a Sudbury Algebra Tutor
...I've tutored nearly all the students I've worked with for many years, and I've also frequently tutored their brothers and sisters - also for many years. I enjoy helping my students to
understand and realize that they can not only do the work - they can do it well and they can understand what they're doing. My references will gladly provide details about their own experiences.
11 Subjects: including algebra 1, algebra 2, geometry, precalculus
...In that time I've worked with countless students from elementary to college level to help them to become better students. Each student's needs are different, and I create individualized
instructional plans for all of my tutoring students. But here is a list of some of the skills I have helped s...
26 Subjects: including algebra 2, algebra 1, Spanish, reading
...I can help. I cannot promise you you will learn to love these subjects as much as I do, but I promise you there is a better approach to learning them, and I can help you find it. Whether you
need help grasping the fundamental concepts or you just have trouble with test taking anxiety, I will he...
12 Subjects: including algebra 2, algebra 1, chemistry, calculus
...I've also had several years of experience tutoring high school students and non-science majors algebra. Algebra is one of the fundamental tools used in theoretical physics. During my physics
education it was necessary to become proficient in algebra.
6 Subjects: including algebra 1, algebra 2, physics, chemistry
...I attended U.C. Santa Barbara, ranked 33rd in the World's Top 200 Universities (2013), and graduated with a degree in Communication. I've been tutoring for the past 9 years and have worked
with students at many different levels.
27 Subjects: including algebra 1, algebra 2, English, reading
Nearby Cities With algebra Tutor
Acton, MA algebra Tutors
Concord, MA algebra Tutors
Hudson, MA algebra Tutors
Lincoln Center, MA algebra Tutors
Lincoln, MA algebra Tutors
Maynard, MA algebra Tutors
Needham Jct, MA algebra Tutors
Newton Center algebra Tutors
Newton Centre, MA algebra Tutors
Wayland, MA algebra Tutors
Wellesley algebra Tutors
Wellesley Hills algebra Tutors
Westboro, MA algebra Tutors
Westborough algebra Tutors
Weston, MA algebra Tutors
Example Problem on Torque
Rotational Motion
EXAMPLE PROBLEM ON TORQUE: The Swinging Door
In a hurry to catch a cab, you rush through a frictionless swinging door and onto the sidewalk. The force you exerted on the door was 50 N, applied perpendicular to the plane of the door. The door is
1.0m wide. Assuming that you pushed the door at its edge, what was the torque on the swinging door (taking the hinge as the pivot point)?
1. Where is the pivot point?
2. What was the force applied?
3. How far from the pivot point was the force applied?
4. What was the angle between the door and the direction of force?
The pivot point is at the hinges of the door, opposite to where you were pushing the door. The force you used was 50 N, at a distance 1.0 m from the pivot point. You hit the door perpendicular to its plane, so the angle between the door and the direction of force was 90 degrees.
[Figure 1: Diagram of Example Problem 1]
Since the magnitude of the torque is
|torque| = |r x F| = r F sin(theta),
the torque on the door was
torque = (1.0 m)(50 N) sin(90 degrees) = 50 N m.
Note that this is only the magnitude of the torque; to complete the answer, we need to find the direction of torque. Using the right hand rule, we see that the direction of torque is out of the screen.
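The same arithmetic, as a minimal Python sketch:

import math

# Torque magnitude for a force F applied at distance r from the pivot,
# at angle theta to the lever arm: |torque| = r * F * sin(theta)
def torque(r_m, force_n, theta_deg):
    return r_m * force_n * math.sin(math.radians(theta_deg))

print(torque(1.0, 50.0, 90.0))  # 50.0 N*m, matching the worked example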
Explanation of Torque
Continue to: Torque and Angular Acceleration
Return to: Rotational Motion Menu
Return to: Physics Tutorial Menu
Winthrop, MA Algebra Tutor
Find a Winthrop, MA Algebra Tutor
...I had tutored Chinese to students aged 4 to 16 years old. I collected a lot of Chinese children's songs, stories and Chinese cartoons which make for good supplemental materials for Chinese
learning. You will have the opportunity to choose the method your feel comfortable with: either conversati...
5 Subjects: including algebra 1, algebra 2, physics, precalculus
...Computer Programming is the creating of computer programs! Everything that we do with computers, from Word, to the internet, every page of the internet, games, etc. Everything on a computer or
even a handheld device (cell phone, etc.) has a program on it.
19 Subjects: including algebra 2, algebra 1, physics, SAT math
...In private and public settings, I have spent the last 16 years having a wonderful time teaching English, Spanish, law & government, study skills, test prep, reading, writing, and math to
students ages four to adult. I have especially enjoyed using my skills to help students who have struggled be...
26 Subjects: including algebra 2, algebra 1, Spanish, reading
...As a grad student I taught undergraduate-level Astronomy and Physics for 3 years, and have tutored and taught high-school Math, Physics (honors), and Chemistry for and additional 3. A favorite
activity of mine, though, is coaching to take standardized tests and to pull up your grades in math, sc...
13 Subjects: including algebra 1, algebra 2, Spanish, calculus
...Teaching and coaching for years, I have developed some strategies and methodologies to help students to understand the subject and grasp the materials efficiently. I believe that WyzAnt
students will benefit from working with me. I will do my best to improve their knowledge and to help them to become successful.
23 Subjects: including algebra 1, algebra 2, chemistry, English
God Plays Dice
What is discreet mathematics?
Urbandictionary.com tells us: "Discreet mathematics refers to the subtle study of mathematics. Discreet mathematics is characterised by furtive looks in maths textbooks disguised as pr0n and by
secret maths lectures held in abandoned warehouses at midnight..."
Of course, discrete mathematics is an entirely different beast.
edit, 6:31 pm
: A poster on a cryptography mailing list
pointed out that cryptography is "discreet" mathematics.
2 comments:
unapologetic said...
Meanwhile you can tell the mathematicians in kinky CraigsList personal ads by the converse error.
Gah, with a new computer I have to go back and reset all these autofilled options again...
Blake Stacey said...
The first rule of journal club is you do not talk about journal club!
The Gas Laws
17. The Gas Laws
The earth's atmosphere is a complex mixture of several "gases", either atomic or molecular in nature. Air consists primarily of N[2] (78%) and O[2] (21%), with small amounts of several other
substances, including Ar (0.9%).
The gaseous form of substances that are solids or liquids under normal conditions are often called vapors.
● A gas expands spontaneously to fill its container. The volume of a gas equals the volume of the container in which it is held.
● A gas is highly compressible. When pressure is applied to a gas, its volume readily decreases.
● Gases form homogeneous mixtures with each other regardless of the identities or relative proportions of the component gases.
● Compared to solids and liquids, the molecules of gases are relatively far apart. In air, the molecules take up only about 0.1% of the total volume - compared to the individual molecules of a
liquid that occupy about 70% of the total space.
Among the most easily measured properties of a gas are its pressure, temperature, and volume.
Gases exert a pressure, or a force, on any surface with which they are in contact. Pressure (P) is the force (F) that acts on a given area (A).
P = F / A
The force exerted by a column of air with a cross section of 1 m^2, reaching to the top of the atmosphere (mass about 10^4 kg), can be calculated as:
F = ma = (10^4 kg) (9.8 m/s^2) = 1 X 10^5 N
This force can be converted to pressure by:
P = F / A = 1 X 10^5 N / 1 m^2 =
1 X 10^5 Pa = 100 kPa
The SI unit of pressure, N/m^2, is given the name pascal, Pa.
Standard atmospheric pressure, the typical pressure at sea level, is the pressure sufficient to support a column of mercury 760 mm high. In SI units, this pressure equals 1.01325 X 10^5 Pa.
These conversion factors are used in gas calculations. They are all equal:
● 1 atmosphere, atm =
● 760 mm Hg =
● 760 torr =
● 1.01325 X 10^5 Pa =
● 101.325 kPa =
● 1013 millibars, mb =
● 14.7 lb/in^2, psi
In laboratories, a device called a manometer is often used to measure the pressure of enclosed gases. Although it operates on a principle similar to that of a mercury barometer, one end of the
manometer tube is usually open to the atmosphere instead of being sealed.
Consider two such manometers: manometer 1 indicates a gas pressure in the container higher than atmospheric pressure, while manometer 2 indicates a gas pressure in the container lower than atmospheric pressure.
Notice the difference in the two pressure calculations: the height difference of the mercury is added to atmospheric pressure in the first case and subtracted in the second.
Temperature is a measure of the average kinetic energy of the particles of matter (its "particle motion"). Temperature changes will cause the volume of matter to change. This volume change is most obvious in the gas phase of matter.
The volume of a gas means nothing unless the
conditions under which it was measured are known.
Tips for working with gas laws:
● All gas calculations must use Kelvin temperatures.
● The conditions 0 ^oC and 1 atm are referred to as standard temperature and pressure - STP.
● The volume occupied by one mole of a gas at STP, 22.4 liters, is referred to as molar volume.
● Read the problem to see what conditions change.
● Decide which gas law to use and write its equation.
● Reread the problem to see what question is asked.
● If needed, manipulate the gas law equation.
● Plug numbers and units into the equation.
● Pick up your calculator and punch buttons.
● Write the answer to the problem, don't forget significant figures, and circle it.
Boyle's Gas Law:
The Pressure-Volume Relationship was established by Robert Boyle
Boyle's Law states: the volume of a fixed quantity of gas maintained at constant temperature is inversely proportional to the pressure.
When two measurements are inversely proportional, one gets smaller as the other gets bigger.
Boyle's Law is expressed by the equation:
P[1]V[1] = P[2]V[2]
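For example, a gas occupying 4.0 L at 1.0 atm, compressed at constant temperature to 2.0 atm, shrinks to V[2] = P[1]V[1]/P[2] = (1.0 atm)(4.0 L)/(2.0 atm) = 2.0 L.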
Charles' Gas Law:
The Temperature-Volume Relationship was established by Jacques Charles
Charles' Law states: the volume of a fixed amount of gas maintained at constant pressure is directly proportional to its absolute temperature.
When two measurements are directly proportional, as one changes in size the other undergoes the same size change.
Charles' Law is expressed by the equation:
V[1] / T[1] = V[2] / T[2]
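For example, a gas occupying 2.0 L at 300 K, heated to 600 K at constant pressure, expands to V[2] = V[1]T[2]/T[1] = (2.0 L)(600 K)/(300 K) = 4.0 L.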
The Combined Gas Law:
Boyle's Law and Charles' Law can be used in combination when both pressure and temperature change. This relationship produces the Combined Gas Law Equation:
P[1]V[1] / T[1] = P[2]V[2] / T[2]
Dalton's Law of Partial Pressures, established by John Dalton, states: the total pressure of a mixture of gases equals the sum of the pressures that each would exert if it were present alone.
The pressure exerted by a particular component of a mixture of gases is called the partial pressure of that gas.
Dalton's Law of Partial Pressures is expressed by the equation:
P[total] = P[1] + P[2] + P[3] . . .
A collecting tube is filled with water and inverted in an open pan of water. Gas is then allowed to rise into the tube, displacing the water. By raising or lowering the collecting tube until the
water levels inside and outside the tube are the same, the pressure inside the tube is exactly that of the atmospheric pressure.
A gas collected "over water" is a mixture of the gas and water vapor. Dalton's law of partial pressures describes this situation as:
P[total] = P[gas] + P[H[2]O]
Charts giving the vapor pressure of water at any common temperature are readily available.
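For example, if a gas is collected over water at 25 ^oC, where the vapor pressure of water is about 23.8 torr, and the total pressure in the tube is 760 torr, then the pressure of the dry gas is 760 - 23.8 = 736.2 torr.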
The Quantity-Volume Relationship is named for Amedeo Avogadro
Equal volumes of gases at the same temperature and pressure contain equal numbers of molecules.
At 0 ^oC and 1 atm, 22.4 L of any gas contains 6.02 X 10^23 gas molecules.
Avogadro's Law states: The volume of a gas maintained at constant temperature and pressure is directly proportional to the number of moles of the gas.
Avogadro's Law is expressed by the equation:
V[1] / n[1] = V[2] / n[2]
The Ideal-Gas Equation
An ideal gas is a hypothetical gas whose molecules have no volume and no attraction to other molecules. While real gas molecules do have volume and are attracted to other molecules, at common
temperatures the difference is so small that it can be ignored.
The ideal-gas equation is: PV = nRT
● P is the pressure
● V is the volume
● n is the number of moles
● T is the absolute temperature in K
● R is called the gas constant.
The value and units of R depend on the units of P, V, n, and T. Pressure units are the ones that most often are different.
NOTE: here are two commonly used values for R:
● Pressure units in atm, R = 0.0821 L-atm/mol-K
● Pressure units in Pa, R = 8.314 J/mol-K
An ideal-gas equation modification
● The number of moles, n, can be expressed as:
mass (m) / molecular mass (M)
● The equation then becomes:
PV = (m/M)RT
Sample problems using the ideal gas equation:
1. How many moles of gas are found in a 500 dm^3 container if the conditions inside the container are 25 ^oC and 200 kPa?
2. What volume will 50 grams of chlorine gas occupy at STP?
3. What is the molecular weight of a gas if 150 grams of the gas occupy 250 dm^3 at 500 mm Hg and 30 ^oC?
(This problem requires the ideal gas equation modification.)
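Worked numerically, as a minimal Python sketch; it uses R = 8.314 kPa-dm^3/(mol-K), which is valid because 1 kPa-dm^3 = 1 J:

R = 8.314  # kPa*dm^3/(mol*K)

# 1. n = PV/RT at 200 kPa, 500 dm^3, 25 oC
print(200 * 500 / (R * (25 + 273.15)))            # ~40.3 mol

# 2. V = nRT/P for 50 g Cl2 (M = 70.9 g/mol) at STP
n = 50 / 70.9
print(n * R * 273.15 / 101.325)                   # ~15.8 dm^3

# 3. M = mRT/(PV) for 150 g in 250 dm^3 at 500 mm Hg, 30 oC
P = 500 / 760 * 101.325                           # mm Hg -> kPa
print(150 * R * (30 + 273.15) / (P * 250))        # ~22.7 g/mol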
Gas density can also be calculated using the ideal-gas equation.
Density is equal to mass divided by volume, d = m/v.
The ideal-gas equation can be arranged to give density in g/L:
d = m/V = PM/RT
This equation shows the density of a gas depends on its pressure, molar mass, and temperature. The higher the molar mass and pressure, the greater the gas density; the higher the temperature, the
less dense the gas.
Even though gases form homogeneous mixtures regardless of their identities, a less dense gas will lie above a more dense one if they are not physically mixed. The differences between the densities of
hot and cold gases is responsible for CO[2] being able to keep oxygen from reaching combustible materials (thus acting as a fire extinguisher) and for many weather phenomena, such as the formation of
large thunderhead clouds during thunderstorms.
Quotient of two Laplace integrals (2)
In one attempt to prove a probability theorem (of K.L. Chung and P. Erdős, 1951) using analytic argument, I try to prove the following: Let $\varphi(x)$ and $\psi(x)$ be two complex-valued continuous
functions on $[a,b]$, and let $f(x)$ be a complex-valued continuously differentiable function on $[a,b]$. Suppose that $|f(x)|$ has an absolute maximum at an interior point, say $\xi$, of the
interval, and $f'(\xi)=0$. Then $$\lim_{n\to\infty}\frac{\int_a^b\varphi(x)[f(x)]^n\,dx}{\int_a^b\psi(x)[f(x)]^n\,dx}=\frac{\varphi(\xi)}{\psi(\xi)}.$$
Remark 1: This is true for $f(x)\in C^2$, by Laplace's method.
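For the record, in the model case of a real, positive $f$ with a nondegenerate interior maximum, the standard Laplace estimate reads $$\int_a^b\varphi(x)[f(x)]^n\,dx\;\sim\;\varphi(\xi)\,[f(\xi)]^n\sqrt{\frac{2\pi}{-n\,(\log f)''(\xi)}}\qquad(n\to\infty),$$ provided $\varphi(\xi)\neq0$; the square-root factor is common to numerator and denominator, so the quotient tends to $\varphi(\xi)/\psi(\xi)$ whenever $\psi(\xi)\neq0$.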
Remark 2: Michael has given a counterexample without the assumption $f'(\xi)=0$. This is a good example. Please see the original version of the problem: Quotient of two Laplace integrals
This problem is still open.
Thank you.
ca.analysis-and-odes asymptotics
AutoCAD calculate area of irregular polygon? How to find the area of a polygon
posts: 13
Registered: 2012-7-6
Message 1 of 2
How do I use AutoCAD to calculate the area of an irregular polygon? How do I find the area of a polygon? It is a shape combined from multiple overlapping polygons.
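In AutoCAD itself, the usual route is the AREA command, or turning the combined shape into a REGION (with UNION) and reading MASSPROP. The underlying computation for a simple polygon with known vertices is the shoelace formula; a minimal Python sketch:

# Shoelace formula: area of a simple (non-self-intersecting) polygon
# whose vertices are listed in order around the boundary.
def polygon_area(pts):
    s = 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0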
posts: 474
Registered: 2013-5-6
Message 2 of 2
Solve. Please show steps. lnx-ln(x-1)=1
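A worked solution: combine the logarithms, ln x - ln(x-1) = ln(x/(x-1)) = 1, so x/(x-1) = e; then x = e(x-1), so x(e-1) = e and x = e/(e-1), roughly 1.582. This satisfies the domain requirement x > 1.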
Denton, TX Prealgebra Tutor
Find a Denton, TX Prealgebra Tutor
...I have tutored students in writing in content mastery and inclusion settings in high school and middle school. I used a great deal of statistical analysis while working in biomedical research
from 1983-1989. I was a licensed statistician in the state of Oklahoma in 1982.
24 Subjects: including prealgebra, English, reading, writing
...I am a member of Phi Theta Kappa Honor Society and received the certification "Five Star Chapter Development Plan Leader" at the 93rd Phi Theta Kappa International Convention in Seattle,
Washington [7-9 April 2011]. I am certified at Level 1 by the College Reading and Learning Association's Inte...
20 Subjects: including prealgebra, reading, writing, algebra 1
...I took separate classes in the study of algorithms, combinatorics (permutations/combination), graph theory, and order/lattices, passing each course with an A- or higher. A large portion of my
master's degree study in electrical engineering involved differential equations. While earning my BS degree I minored in mathematics.
48 Subjects: including prealgebra, chemistry, calculus, ASVAB
I have several years tutoring experience and have taught Computer Science in several training institutions as well as trained senior business executives in technology. My ability to translate
complex situations into simple fundamental processes, combined with my communication skills enable me to im...
28 Subjects: including prealgebra, statistics, algebra 1, algebra 2
...Over the course of my 30 year technical career, I was always teaching and educating because of my previous experiences as an educator. I have a very engaging style of teaching and I like to
use humor. I think you can be more successful, especially with young students if the learning activities are fun.
19 Subjects: including prealgebra, reading, writing, ESL/ESOL
Related Denton, TX Tutors
Denton, TX Accounting Tutors
Denton, TX ACT Tutors
Denton, TX Algebra Tutors
Denton, TX Algebra 2 Tutors
Denton, TX Calculus Tutors
Denton, TX Geometry Tutors
Denton, TX Math Tutors
Denton, TX Prealgebra Tutors
Denton, TX Precalculus Tutors
Denton, TX SAT Tutors
Denton, TX SAT Math Tutors
Denton, TX Science Tutors
Denton, TX Statistics Tutors
Denton, TX Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Carrollton, TX prealgebra Tutors
Corinth, TX prealgebra Tutors
Flower Mound prealgebra Tutors
Frisco, TX prealgebra Tutors
Garland, TX prealgebra Tutors
Keller, TX prealgebra Tutors
Lewisville, TX prealgebra Tutors
Mckinney prealgebra Tutors
N Richland Hills, TX prealgebra Tutors
N Richlnd Hls, TX prealgebra Tutors
North Richland Hills prealgebra Tutors
Northlake, TX prealgebra Tutors
Plano, TX prealgebra Tutors
Shady Shores, TX prealgebra Tutors
The Colony prealgebra Tutors
different values in one column
Is there a way to get all different values in one column in libreoffice-calc?
If I have a sheet looking like that:
column1 column2 column3
A B C
A B C
A B C
A D C
A B C
A B C
I'd like to know how I can find out that column2 has 2 different values, and that those 2 values are B and D.
2 Answers
Counting distinct values
AFAIK there's still no built-in formula to count distinct values in a range. But there are different formulas around that do the same. I've tested two formulas working fine
with your example data in LibreOffice 3.5:
• The first is (courtesy Bigyan Bhar):
=SUMPRODUCT(1/COUNTIF(Data;Data))
• The second, more complex one, is an array formula, so you need to hit CTRL+SHIFT+ENTER after you entered it (courtesy David Chapman):
=SUM(IF(FREQUENCY(IF(LEN(Data)>0;MATCH(Data;Data;0);"");IF(LEN(Data)>0;MATCH(Data;Data;0);""))>0;1))
each with "Data" replaced by the range to evaluate.
Listing distinct values
To list distinct values, just define a filter on the input range, excluding duplicates:
(There's currently a bug in libreoffice preventing the user from disabling the "Range contains column labels" checkbox, but it will be fixed in 3.5.2.)
thx a lot, the Listing distinct values part of your answer is exactly what I needed – OSdave Mar 22 '12 at 16:28
This is basically a LibreOffice-related question, so it might be better to post it to the LibreOffice forums. As far as having two different values in one cell is concerned (that is what I have understood), I think you can use two adjacent cells to store the values and then merge the two cells above them into a single heading.
all values are in separate cells. – OSdave Mar 21 '12 at 9:32
Summary: Dr. Scott Annin
Research Statement
January 2011
I have been involved in a number of different research projects during my undergraduate and
graduate studies, and now as a faculty member at Cal State Fullerton (CSUF). To various
extents, these studies have broadly intersected with ring theory, group theory, semigroup
theory, linear algebra, combinatorics, and mathematics education.
My Ph.D. dissertation [3] was in the area of noncommutative ring theory. One of my
specific interests in that area is the associated prime ideals of a module over a ring. These
ideals have long played an important role in commutative rings, where they enjoy a rich
and extensive theory and applications, most notably to the well-known theory of primary
decomposition in computational algebra. At the same time, there is a useful and interesting
dual theory concerning "attached" prime ideals. These ideals were first introduced in 1973
by I.G. Macdonald [15] in the context of representable modules. The theory was somewhat
restrictive, applying only to a special class of modules over a commutative ring. In my
dissertation, I generalized the theory to arbitrary modules over (possibly) noncommutative
rings. In fact, my dissertation contributed heavily to the generalization of the theory of
both associated and attached primes and their properties to noncommutative ring theory.
The issue of ascertaining how various ring-theoretic and module-theoretic concepts (like
associated and attached prime ideals) behave under various types of change of rings, such as
Differential equation problem (originally kinematics)
A point mass ‘A’ is kept at the origin. Another B is kept at the x axis at x = H. Another C is kept at distance H from origin and distance h from B. A, B and C thus form an isosceles triangle
with vertex A. Given that A's mass >> B's mass >> C's mass; Newton’s gravitational law governs the bodies. C will have motion because of both A and B while B will have motion because of A only.
Find the equation of the path of C in any coordinate system, given that B and C have zero initial velocities.
This problem after a bit of physics will reduce to a problem of differential equations. I applied r, $\vartheta$ coordinate system and got two differential equations in r, $\vartheta$ and both
their single and double time derivatives. A very complex eqn indeed!
Solve or suggest a way to solve this problem, please!
Numerically. It's easy to code Mathematica to solve coupled sets. If you want, post the equations along with the appropriate initial conditions, parameter values and I'll show you how to set up
the NDSolve function to solve it.
The initial velocity of both B and C is 0. A is fixed to ground.
$r_{b/c}=\sqrt{r_b^2\,+\,r_c^2\,-\,2r_br_c\cos \theta}\quad(1)$
$\frac{r_{b/c}}{\sin \theta}\,=\,\frac{r_b}{\sin \phi}\quad(2)$
These two equations define the angle $\phi$ and $r_{b/c}$ for us.
$-\frac{k_b}{r_{b/c}^2}\,\left(1-\,\frac{r_b^2}{\sin^2 \theta\, r_{b/c}^2}\right)\,=\,\ddot{r_c}\,-\,r_c\dot{\theta}^2\quad(3)$
$-\left(\frac{k_b}{r_{b/c}^2}\,\frac{r_b}{\sin \theta\, r_{b/c}}\,+\,\frac{k_a}{r_c^2}\right) \,= \,r_c\ddot{\theta}\,+\,2\dot{r_c}\dot{\theta}\quad (4)$
$r_{b/c}$ is defined by the first two equations. Equations 3 and 4 then need to be solved for $r_c$ and $\theta$.
When we get $r_c$ and $\theta$, we shall have got the eqn of path of C.
Method applied to reach till the equations 3 and 4:
I used a polar coordinate system with the point A as origin. I found the radial and angular components of the acceleration of C and compared them with the standard expressions for the same, as given at:
Polar coordinate system - Wikipedia, the free encyclopedia
I don't have any intermediate values, so I don't know how you are going to go about a numerical method. Anyways, if you can solve it by whatever means, I will be very happy and grateful.
Last edited by taureau20; September 27th 2008 at 02:10 AM. Reason: boundary values not given
Here's some observations:
1. Need to better define what all the parameters and variables are.
2. You have an extraneous symbol: $\phi$. Either it's a constant or some external function of t.
3. I don't see the significance of equation (2): $r_{b/c}$ is defined in (1) and that can be substituted into (3) and (4).
4. Make up values for the parameters and initial conditions:
$r_c(0)=1;\quad r'_c(0)=1$
$\theta(0)=\pi/2;\quad \theta'(0)=2$
and any other values that need to be initialized.
5. Once you have done this, then it's a breeze to solve it numerically in Mathematica and plot the results
Parametric values!
$r_c(t=0)=132.44 \times 10^{24}\ \text{metres}, \quad \dot {r_c}(t=0)=0$
$\theta(t=0)=1.297176448\times10^{-13}\ \text{degrees}, \quad \dot {\theta}(t=0)=0$
In equations 3 and 4, $k_b = G\times m_b$.
Similarly for $k_a$.
masses: m_a=1.6 * 10^55; m_b=2 * 10^30; m_c=6 * 10^24.
This way we quantify $k_a$ and $k_b$ in equations, 3 and 4.
Further, the expression for $r_b$ in the two equations will actually bring along a $t$ (time) term too, complicating matters for us. That is, $r_b=\left((132.44 \times 10^{24})^{1.5}\, -\, k_a t\right)^{2/3}$.
Do you think it can be solved still? *dejected*
The expression for $r_{b/c}$ (equation 1) also contains an $r_b$, and we will have to substitute the above equation into it.
After doing all this we have two equations (3 and 4, i.e.) which will thus have a $t$ term too, along with those visible.
Last edited by taureau20; September 28th 2008 at 12:47 AM.
Taureau, I'm optimistic it can be solved but one has to be Zen about it: you don't just whip it out. You know that. Rather you approach the solution asymptotically, gradually building upon it.
Also, during the initial phase, don't use all those complicated initial values. Just use simple values like 1, 2, 100, -5, etc. Yea, I know, that may result in quite different solutions, but then
you gradually ramp up those values towards your actual values.
The important thing is to "rough it in" even if it's wrong. You can clean it up later. I've done that below although I suspect you won't be impressed. It's a start though and does yield a result
(no syntax or logic errors). That's a big start. Now you can start "cleaning it up".
Keep in mind that as you ramp up the parameters, you may reach a point in which the software has problems doing the numerical calculations. That's a hazard you have to accept. You can always
attempt to ramp up the precision and numerical accuracy.
I dropped all the subscripts in the code below as they interfered with the coding: I used "rbc" for r_{bc}, "rb" for r_b, and th(t) for $\theta(t)$ and so on.
Note when I do get (pseudo) results for $r_c(t)$ and $\theta(t)$, I assume these are in polar coordinates. I then plotted the trajectory as $r_c(t)\cos(\theta(t)),r_c(t)\sin(\theta(t))$. Not sure
that's what you want.
I plotted the results for this run below. Don't laugh. I realize it's not even close. Remember: Zen.
g = 0.0001;
ma = 1000;   (* toy masses, not the physical values quoted above *)
mb = 1000;
b = 1;       (* stand-ins for rb, rc in the law-of-cosines term *)
c = 1;
ka = g ma;
kb = g mb;
rb[t_] := (100^(1/5) - ka t)^(2/3);   (* rough placeholder for B's position *)
rbc[t_] := Sqrt[rb[t]^2 + rc[t]^2 - 2 b c Cos[th[t]]]
eqn1 = rc''[t] - rc[t] th'[t]^2 ==
   -(kb/rbc[t]^2) (1 - rb[t]^2/(Sin[th[t]]^2 rbc[t]^2));
eqn2 = rc[t] th''[t] + 2 rc'[t] th'[t] ==
   -(kb/rbc[t]^2 rb[t]/(Sin[th[t]] rbc[t]) + ka/rc[t]^2);
sol = NDSolve[{eqn1, eqn2, rc[0] == 100, rc'[0] == 0, th[0] == 0.1,
    th'[0] == 0}, {rc, th}, {t, 0, 10}]
ParametricPlot[
 Evaluate[{rc[t] Cos[th[t]], rc[t] Sin[th[t]]} /. Flatten[sol]], {t, 0, 10}]
Dear shawsend,
You correctly anticipated that those are polar coordinates I was working with and the graph I want is properly drawn in $(x,y)$ where x takes the value $r \cos\theta$, and y takes the value $r \sin \theta$.
I much like the asymptotic approach you suggest (and will make it a point to follow it in future problems!).
I see that you have provided me with the code that you have employed in Mathematica to solve the equations. Much appreciated.
Albeit I do not have Mathematica on my computer, I will try and obtain the software and then work on it to get my graphic solution. Thanks for your cooperation, Shawsend, much appreciate it
I will try and stay in touch and notify you if my 'asymptotic' approach gives me any result. If not, I will ask for your help again!
Hey, just go to your local university or just pick one at random and show up in the math department and start asking people there (again randomly) that you have a differential equation you can't
solve and would like to run it on Mathematica. They love doing math there and would welcome the challenge.
Also, Mathematica (or other) in my opinion is indispensable in doing mathematics. You've got to find a way to get it and start learning how to use it.
I have installed Mathematica on my system. Now, when I use the NDSolve function or anything whose output is supposed to be an image displayed, it doesn't display that image (could be a plot,
etc.) rather it just says, -Graphics- and leaves it at that. Could you tell me how to get round it?
I write code in the Mathematica 6 kernel window; my OS is Vista.
Inverse matrices=]
Hi I have an assignment question as follows
A, B and C are n times n matrices such that AB=I and CA=I
show B= C (i have done this part)
b) i. A and B are n times n matrices that commute. Show A squared and B squared commute
ii. Give a generalisation of this result (without proof)
c. A and B are n times n matrices and n is invertible. Show
(A+B) A^-1(A-B)=(A-B)A^-1(A+B)
d. A and B are n times n invertible matrices that commute. Show that A^-1 and B^-1 also commute
it's fairly urgent-any help would be much appreciated
Since this is an assignment, you are expected to do it! Here are some hints.
A, B and C are n times n matrices such that AB=I and CA=I
show B= C (i have done this part)
b) i. A and B are n times n matrices that commute. Show A squared and B squared commute
ii. Give a generalisation of this result (without proof)
I presume you have done this- it's almost trivial.
c. A and B are n times n matrices and n is invertible. Show
(A+B) A^-1(A-B)=(A-B)A^-1(A+B)
I presume you mean "A is invertible".
Go ahead and multiply out left and right sides. You should get the same result. The only difference between this and elementary algebra is that you have to be careful not to commute A and B.
d. A and B are n times n invertible matrices that commute. Show that A^-1 and B^-1 also commute
it's fairly urgent-any help would be much appreciated
This is also close to being trivial.
Look at $(AB)^{-1}$ and $(BA)^{-1}$. Of course since A and B commute, those must be equal.
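A quick numerical sanity check of (c) and (d) in Python; B is chosen as a polynomial in A so that the two commute (this checks, but does not prove, the identities):

import numpy as np

n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + n * np.eye(n)   # invertible in practice
B = A @ A + 3 * A + np.eye(n)                     # polynomial in A, so AB = BA

Ainv, Binv = np.linalg.inv(A), np.linalg.inv(B)
print(np.allclose((A + B) @ Ainv @ (A - B), (A - B) @ Ainv @ (A + B)))  # (c)
print(np.allclose(Ainv @ Binv, Binv @ Ainv))                            # (d)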
List of Visiting Speakers: Dr. Leon H. Seitelman
Dr. Leon H. Seitelman
110 Cambridge Drive
Glastonbury, CT 06033
(formerly with the Department of Mathematics, University of Connecticut, Storrs, CT and United Technologies, East Hartford, CT)
Phone: 860-633-0140
E-mail: lseitelman@aol.com
Lee Seitelman worked at Pratt & Whitney for 30 years after receiving his Ph.D. in Applied Mathematics from Brown University in 1967. He provided support for problem solving and applied research in a
broad spectrum of engineering and manufacturing applications, particularly in computer-aided design and manufacturing (CAD/CAM), working in cooperation with engineers on a variety of projects in jet
engine design and development. His background, including degrees in engineering and pure mathematics, was a valuable resource for meeting the challenge of the interdisciplinary technical problems
that characterize industrial work. His professional interests include K-12 mathematics education renewal efforts at state and national levels, and he organized the SIAM Visiting Lecturer Program,
chairing it from 1993 - 2000.
What's a Mathematician Like You Doing in a Place Like That? (An introduction to industrial problem solving)
Industrial problem solving rarely if ever consists simply of the study of clearly defined, well-posed, straightforward mathematical problems. A large part of the applied mathematician's work in
industry is devoted to setting up, defining, redefining, and developing numerical solutions for problems that arise from a physical description of nature or an engineering process. We illustrate the
nature of these assignments by using examples from jet engine design and analysis; occupational experience is similar in other industries.
The technical problems to be solved are not perceived to be mathematical by their originators. Because those people may have limited experience and/or training in mathematics, it is important that
the industrial mathematician be comfortable working in an environment in which the iteration focuses on the engineering and scientific bases for problems, rather than on the mathematics required to
solve them. Careful course selection and interpersonal skills development are essential if a prospective industrial mathematician wishes to maximize the likelihood that he/she will contribute
effectively in such a team problem solving environment.
Natural Cubic Splines are Unnatural
(An introduction to CAD/CAM curve and surface fitting)
Suggestions from the Real World About Improving Math Education
Fracture Squares of Bousfield Localizations of Spectra
Suppose I have a spectrum $X$ and two homology theories $E$ and $F$. If I look at the Bousfield localizations, $L_E$, $L_F$, $L_{E\vee F}$ and $L_{E\wedge F}$, do I have a homotopy pullback square
whose top row is $L_{E\vee F}(X)\to L_E(X)$, and whose lower row is $L_F(X)\to L_{E\wedge F}(X)$? If not, is it known what conditions I need to place on $E$ and $F$ to make this all work out? Does
anyone know if I can iterate this process over some set of homology theories?
I went ahead and made this a reference request, because I imagine it could have a rather significant answer.
stable-homotopy at.algebraic-topology reference-request
I might add that there is the well known case where we do this with completion at primes and rationalization. I think... – Jon Beardsley Mar 12 '12 at 21:20
As well as situations with the Morava $K$ and $E$ theories. – Jon Beardsley Mar 12 '12 at 22:04
1 You seem to have your arrows backwards. And it's possible that the well-known case you are thinking of involves a composition $L_E\circ L_F$ rather than $L_{E\wedge F}$ or $L_{E\vee F}$. – Tom
Goodwillie Mar 12 '12 at 22:04
And yes... you're right about the composition, to build the $E(n)$ localizations. I guess... hmm, what am I saying. I guess it should be something like that. In that case, it should be like
wedging right? Since that's how we build our $E(n)$'s? – Jon Beardsley Mar 12 '12 at 22:26
1 On the other hand, you have that $K(n) \wedge K(m)$ is contractible for $n \neq m$, and the same identity holds for their Bousfield classes. The situation you're describing actually relies on
something special - namely, that for $n > m$ anything $K(m)$-local is $K(n)$-acyclic. – Tyler Lawson Mar 12 '12 at 22:40
2 Answers
I think the best available statement is as follows. Suppose that $E$ and $F$ have the property that whenever $F\wedge X=0$ we also have $F\wedge L_EX=0$. (This holds if $L_E$ is
smashing, for example when $E$ is the Johnson-Wilson spectrum $E(n)$.) Then there is a natural homotopy pullback square $$ \begin{array}{ccc} L_{E\vee F}X & \rightarrow & L_EX \\ \downarrow & & \downarrow \\ L_FX & \rightarrow & L_EL_FX \end{array} $$ Note that $L_{E\wedge F}X$ does not occur here. Probably the most important example is where $E=E(n-1)$ and $F=K
(n)$ so $E\vee F$ is Bousfield equivalent to $E(n)$ but $E\wedge F=0$ and also $L_FL_E=0$ (but $L_EL_F\neq 0$).
For another important example, we can take $E=S\mathbb{Q}$ and $F=S/p$ so $E\vee F$ is Bousfield equivalent to $S_{(p)}$. In this case $L_{E\vee F}X=X_{(p)}$ and $L_EX=X\mathbb{Q}$ and $L_FX=X^\wedge_p$ and $L_EL_FX=(X^\wedge_p)\mathbb{Q}$. This gives the $p$-local arithmetic fracture square. For the global arithmetic fracture square, take $F=S(\mathbb{Q}/\mathbb{Z})$ (which is Bousfield equivalent to $\bigvee_pS/p$) instead.
I think that these ideas are all due to Mike Hopkins, but I don't remember what is the best place to read about them. I think there is a good paper by Mark Hovey.
Ah yes Neil thankyou. I see the point now, where smash is a problem. I guess I'm trying to have some kind of descent property, so what you say may indeed work anyway. Thanks! – Jon
Beardsley Mar 13 '12 at 14:47
I think the relevant paper here might be Hovey's paper on the chromatic splitting conjecture? – Jon Beardsley Mar 13 '12 at 22:38
I am not really a MathOverflow reader, but I just came across this discussion. I first saw the fracture square that Neil describes (in the classic case of interest as above) in a
(handwritten) letter to me from Pete Bousfield dated January 22, 1987. It is in the midst of a paragraph that begins with " ... I'll make some little comments which may be well known to
you.", and describes how to (easily) construct distinct nice spectra X and Y whose K(n)-localizations agree for all n. (His letter was part of a correspondence we had around then about how
one could generalize his telescopic functor for n=1 to all n.)
Very possibly Pete knew the fracture square result in the late 1970's, when he was thinking about the Boolean algebra of localization functors and such. But it doesn't have a lot of meat
until one has some naturally arising smashing localizations, which needed developments in the 1980's.
Thanks @Nick! I too am thinking about the Boolean algebra of localization functors! But probably if Bousfield didn't do much more with it, neither will I. – Jon Beardsley Oct 29 '12 at
Results 1 - 10 of 15
, 1987
"... Contexts have been proposed as a means of performing strictness analysis on non-flat domains. Roughly speaking, a context describes how much a sub-expression will be evaluated by the surrounding
program. This paper shows how contexts can be represented using the notion of projection from domain theo ..."
Cited by 98 (4 self)
Contexts have been proposed as a means of performing strictness analysis on non-flat domains. Roughly speaking, a context describes how much a sub-expression will be evaluated by the surrounding
program. This paper shows how contexts can be represented using the notion of projection from domain theory. This is clearer than the previous explanation of contexts in terms of continuations. In
addition, this paper describes finite domains of contexts over the non-flat list domain. This means that recursive context equations can be solved using standard fixpoint techniques, instead of the
algebraic manipulation previously used. Praises of lazy functional languages have been widely sung, and so have some curses. One reason for praise is that laziness supports programming styles that
are inconvenient or impossible otherwise [Joh87, Hug84, Wad85a]. One reason for cursing is that laziness hinders efficient implementation. Still, acceptable efficiency for lazy languages is at last
being achieved...
- JOURNAL OF FUNCTIONAL AND LOGIC PROGRAMMING , 1998
"... ..."
, 1997
"... Context-sensitive rewriting is a simple restriction of rewriting which is formalized by imposing fixed restrictions on replacements. Such a restriction is given on a purely syntactic basis: it
is (explicitly or automatically) specified on the arguments of symbols of the signature and inductively ..."
Cited by 43 (30 self)
Context-sensitive rewriting is a simple restriction of rewriting which is formalized by imposing fixed restrictions on replacements. Such a restriction is given on a purely syntactic basis: it is
(explicitly or automatically) specified on the arguments of symbols of the signature and inductively extended to arbitrary positions of terms built from those symbols. Termination is not only
preserved but usually improved and several methods have been developed to formally prove it. In this paper, we investigate the definition, properties, and use of context-sensitive rewriting
strategies, i.e., particular, fixed sequences of context-sensitive rewriting steps. We study how to define them in order to obtain efficient computations and to ensure that context-sensitive
computations terminate whenever possible. We give conditions enabling the use of these strategies for root-normalization, normalization, and infinitary normalization. We show that this theory is
suitable for formalizing ...
- In Conference Record of the 15th Annual ACM Symposium on Principles of Programming Languages. ACM , 1988
"... Analysing time complexity of functional programs in a lazy language is problematic, because the time required to evaluate a function depends on how much of the result is "needed" in the
computation. Recent results in strictness analysis provide a formalisation of this notion of "need", and thus can ..."
Cited by 35 (0 self)
Analysing time complexity of functional programs in a lazy language is problematic, because the time required to evaluate a function depends on how much of the result is "needed" in the computation.
Recent results in strictness analysis provide a formalisation of this notion of "need", and thus can be adapted to analyse time complexity. The future of programming may be in this paradigm: to
create software, first write a specification that is clear, and then refine it to an implementation that is efficient. In particular, this paradigm is a prime motivation behind the study of
functional programming. Much has been written about the process of transforming one functional program into another. However, a key part of the process has been largely ignored, for very little has
been written about assessing the efficiency of the resulting programs. Traditionally, the major indicators of efficiency are time and space complexity. This paper focuses on the former. Functional
programming can be sp...
- In Proceedings of the 1990 ACM Conference on Lisp and Functional Programming , 1990
"... Projection analysis is a technique for finding out information about lazy functional programs. We show how the information obtained from this analysis can be used to speed up sequential
implementations, and introduce parallelism into parallel implementations. The underlying evaluation model is evalu ..."
Cited by 15 (6 self)
Projection analysis is a technique for finding out information about lazy functional programs. We show how the information obtained from this analysis can be used to speed up sequential
implementations, and introduce parallelism into parallel implementations. The underlying evaluation model is evaluation transformers, where the amount of evaluation that is allowed of an argument in
a function application depends on the amount of evaluation allowed of the application. We prove that the transformed programs preserve the semantics of the original programs. Compilation rules, which
encode the information from the analysis, are given for sequential and parallel machines. 1 Introduction A number of analyses have been developed which find out information about programs. The
methods that have been developed fall broadly into two classes, forwards analyses such as those based on the ideas of abstract interpretation (e.g. [9, 18, 19, 7, 17, 12, 4, 20]), and backward
analyses such as those based...
- LISP and Symbolic Computation , 1988
"... Implementations of lazy evaluation for non-strict functional languages usually involve the notion of a delayed representation of the value of an expression, which we call a thunk. We present
several techniques for implementing thunks and formalize a class of optimizations that reduce both the space ..."
Cited by 15 (0 self)
Implementations of lazy evaluation for non-strict functional languages usually involve the notion of a delayed representation of the value of an expression, which we call a thunk. We present several
techniques for implementing thunks and formalize a class of optimizations that reduce both the space and time overhead of these techniques. The optimizations depend on a compile-time inferencing
strategy called path analysis, a generalization of strictness analysis that uncovers order-of-evaluation information. Although the techniques in this paper are focused on the compilation of a
non-strict functional language for a conventional architecture, they are directly applicable to most of the virtual machines commonly used for implementing such languages. The same techniques also
apply to other forms of delayed evaluation such as futures and promises.
- Math. Struct. in Comp. Science , 1991
"... this paper, that results from this kind of analysis are, in a sense, polymorphic. This confirms an earlier conjecture [19], and shows how the technique can be applied to first-order polymorphic
functions. The paper is organised as follows. In the next section, we review projection-based strictness a ..."
Cited by 6 (1 self)
this paper, that results from this kind of analysis are, in a sense, polymorphic. This confirms an earlier conjecture [19], and shows how the technique can be applied to first-order polymorphic
functions. The paper is organised as follows. In the next section, we review projection-based strictness analysis very briefly. In Section 3 we introduce the types we will be working with: they are
the objects of a category. We show that parameterised types are functors, with certain cancellation properties. In Section 4 we define strong and weak polymorphism: polymorphic functions in
programming languages are strongly polymorphic, but we will need to use projections with a slightly weaker property. We prove that, under certain conditions, weakly polymorphic functions are
characterised by any non-trivial instance. We can therefore analyse one monomorphic instance of a polymorphic function using existing techniques, and apply the results to every instance. In Section 5
we choose a finite set of projections for each type, suitable for use in a practical compiler. We call these specially chosen projections contexts, and we show examples of factorising contexts for
compound types in order to facilitate application of the results of Section 4. We give a number of examples of polymorphic strictness analysis. Finally, in Section 6 we discuss related work and draw
some conclusions. 2. Projections for Strictness Analysis
, 1997
"... this paper, we provide a precise and formal characterizationof the loss of information that leads to this incompleteness. Specifically, we establish the following characterization theorem for
Mycroft's strictness analysis method and a natural generalization of his method to non-flat domains called e ..."
Cited by 5 (0 self)
this paper, we provide a precise and formal characterization of the loss of information that leads to this incompleteness. Specifically, we establish the following characterization theorem for
Mycroft's strictness analysis method and a natural generalization of his method to non-flat domains called ee-analysis: Mycroft's method will deduce a strictness property for program P iff the
property is independent of any constant appearing in any evaluation of P . To prove this, we specify a small set of equations called E-axioms, that capture the information loss in Mycroft's method
and develop a new proof technique called E-rewriting. E-rewriting extends the standard notion of rewriting to permit the use of reductions using E-axioms interspersed with standard reduction steps.
E-axioms are a syntactic characterization of information loss and E-rewriting provides an algorithm independent proof technique for characterizing the power of analysis methods. It can be used to
answer questions on completeness and incompleteness of Mycroft's method on certain natural classes of programs. Finally, the techniques developed in this paper provide a general principle for
establishing similar results for other analysis methods such as those based on abstract interpretation. As a demonstration of the generality of our technique, we give a characterization theorem for
another variation of Mycroft's method called dd-analysis. Categories and Subject Descriptors: D.3.1 [Programming Languages]: Formal Definitions and Theory; D.3.2 [Programming Languages]: Language
Classifications---applicative languages; D.3.4 [Programming Languages]: Processors---compilers ; optimization General Terms: Languages, Theory, Measurement Additional Key Words and Phrases: Program
analysis, abstract interpretation, str...
- Lecture Notes in Computer Science , 1998
"... . A new denotational semantics is introduced for realistic non-strict functional languages, which have a polymorphic type system and support higher order functions and user definable algebraic
data types. It maps each function definition to a demand propagator, which is a higher order function, t ..."
Cited by 4 (0 self)
. A new denotational semantics is introduced for realistic non-strict functional languages, which have a polymorphic type system and support higher order functions and user definable algebraic data
types. It maps each function definition to a demand propagator, which is a higher order function, that propagates context demands to function arguments. The relation of this "higher order demand
propagation semantics" to the standard semantics is explained and it is used to define a backward strictness analysis. The strictness information deduced by this analysis is very accurate, because
demands can actually be constructed during the analysis. These demands conform better to the analysed functions than abstract values, which are constructed alone with respect to types like in other
existing strictness analyses. The richness of the semantic domains of higher order demand propagation makes it possible to express generalised strictness information for higher order functions even
, 1994
"... Domain In this section we construct a domain of abstract constraints called ACon, which abstracts the domain #(C). In the construction of ACon, we use two domains called D and D V , also
introduced in this section, which consist of non-ground, downwards-closed types representing sets of terms in #( ..."
Cited by 4 (0 self)
Domain In this section we construct a domain of abstract constraints called ACon, which abstracts the domain #(C). In the construction of ACon, we use two domains called D and D_V, also introduced in this section, which consist of non-ground, downwards-closed types representing sets of terms in #(H_V) and some basic types, such as the set of integers. (H_V is ordered by t1 ≤ t2 if t1 is a substitution instance of t2.) The domain of types is given by D ::= any | nv | α | c(D, ..., D) | num | D D | μα.D. Program variables are not mentioned by types in D. In the syntax of D, c ranges over constructor symbols and μ is a fixpoint operator. Type variables are given by α ∈ TV, which are used only for fixpoint constructions. The base types any, nv (read, "non-var"), and num represent H_V, H_V ∖
V , and the set of integers, respectively. Example 3.1. fX = ?; Y = ?g is an element of ACon representing the downwardsclosed set of constraints where X is constrained arbitrarily (including not at | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1586807","timestamp":"2014-04-16T10:15:28Z","content_type":null,"content_length":"40228","record_id":"<urn:uuid:846bfcd1-2b27-4a69-bbe4-830da6798419>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00094-ip-10-147-4-33.ec2.internal.warc.gz"} |
quasi-conformal, area-preserving homeomorphisms of the disc
Restricting a quasi-conformal homeomorphism of the disc to the boundary gives a surjective homomorphism from $QC(D^2)$ (quasi-conformal homeos of $D^2$) to $QS(S^1)$ (quasi-symmetric homeos of the
circle). Surjectivity here follows, for example, from the existence of natural extensions of QS homeos like Douady-Earle.
The same restriction-to-the-boundary map from the group of area preserving smooth diffeos (symplectomorphisms if you will) to Diff$(S^1)$ is also surjective -- one way to see this is to construct a
by-hand extension defined in a collar neighborhood and then use Moser's theorem (here), and details of perhaps another approach have been written up here.
My question is: what are the possible "boundary values" of area preserving QC homeos of the disc? My guess is that the map from area preserving QC homeos to $QS(S^1)$ is not surjective... perhaps
someone with a good understanding of symplectic homeos has an idea of what the image is.
gt.geometric-topology sg.symplectic-geometry quasiconformal
add comment
1 Answer
Claim. A QS homeomorphism $f$ of the circle extends to a QC area preserving map of the disk if and only if $f$ is BL (bi-Lipschitz).
1. Suppose that $f: S^1\to S^1$ is a QS map which admits an area-preserving quasiconformal extension $F: D^2\to D^2$. Then it immediately follows from the definition of quasiconformality
that $F$ has to be (globally) bi-Lipschitz. In particular, $f$ was bi-Lipschitz to begin with.
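To spell out why step 1 is immediate (my reading of the argument): if $F$ is $K$-quasiconformal, hence differentiable a.e., and area preserving with Jacobian $J_F\equiv 1$, then $$\|DF(z)\|^2 \le K\,J_F(z) = K \quad \text{a.e.,}$$ so $F$ is $\sqrt{K}$-Lipschitz; since $F^{-1}$ is again area preserving and quasiconformal, it is Lipschitz as well, and $F$ is bi-Lipschitz.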
2. Suppose now that $f$ is a BL homeomorphism of the unit circle. I claim that it extends to a BL area preserving homeomorphism $F$ of the disk. By looking at the BL flow from the identity to $f$, we see that the problem reduces to the case of Lipschitz vector fields $v$ tangent to the circle which we have to extend to Lipschitz divergence-free vector fields $V$ on the disk. By removing a point on the circle, we reduce the question to the case when $v=u'$, where $u$ is a $C^{1,1}$ function on the circle (maybe minus a point). Now, extend $u$ to a harmonic function $U$ on the disk. Then the gradient $V$ of $U$ has zero divergence and is Lipschitz. Furthermore, $V$ will be Lipschitz at the boundary (except maybe for one point), so it will extend to the remaining point as well. Qed
I was assuming here that $f$ is orientation-preserving. However, by composing with a symmetry of the disk, the general case reduces to this one, provided that you allow QC maps to reverse orientation.
Not the answer you're looking for? Browse other questions tagged gt.geometric-topology sg.symplectic-geometry quasiconformal or ask your own question. | {"url":"http://mathoverflow.net/questions/127371/quasi-conformal-area-preserving-homomorphisms-of-the-disc?sort=newest","timestamp":"2014-04-18T21:05:27Z","content_type":null,"content_length":"52911","record_id":"<urn:uuid:f8abb15f-0c91-4fb2-95da-989197bd8f49>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00476-ip-10-147-4-33.ec2.internal.warc.gz"} |
Peoria, AZ Algebra 2 Tutor
Find a Peoria, AZ Algebra 2 Tutor
...I have a solid background in general arithmetic and algebra, which are important fundamentals to understanding precalculus. Physical science is interesting because it encompasses a variety of
topics such as chemistry, physics, and earth science to name a few. It's a general science rather than a speciality science, which makes it appealing to students and teachers.
26 Subjects: including algebra 2, reading, physics, geometry
...Languages I have used are: Visual Basic, C#, C++, and some Assembly code. I also have recently started learning how to use Linux systems. I am currently earning a degree in Linux Network
Administration as well as a degree in Programming and Systems Analysis.
30 Subjects: including algebra 2, chemistry, reading, geometry
...As I mentioned in my profile, I love numbers, and wish to help others appreciate how mathematics impacts our lives in countless ways every day. With an appreciation for numbers comes a desire
to grasp and master mathematical concepts, leading to success in the classroom and beyond. With a bache...
20 Subjects: including algebra 2, English, writing, calculus
...In my classes, I have many students on the Autism spectrum, including students with Asperger's syndrome, and have great success helping them achieve their educational, social, and behavioral
goals. I am a certified Cross-Categorical Special Education teacher in the state of Arizona which means I...
40 Subjects: including algebra 2, English, writing, reading
...Thank you for your consideration in working with me as a tutor; I look forward to hearing from you. Sincerely, Dave. I have a minor in computational math to go along with my computer science
degree. I really enjoy teaching the fundamentals of math like algebra because it is so important to understand the basics before moving on to more advanced subjects.
20 Subjects: including algebra 2, calculus, computer programming, C
Related Peoria, AZ Tutors
Peoria, AZ Accounting Tutors
Peoria, AZ ACT Tutors
Peoria, AZ Algebra Tutors
Peoria, AZ Algebra 2 Tutors
Peoria, AZ Calculus Tutors
Peoria, AZ Geometry Tutors
Peoria, AZ Math Tutors
Peoria, AZ Prealgebra Tutors
Peoria, AZ Precalculus Tutors
Peoria, AZ SAT Tutors
Peoria, AZ SAT Math Tutors
Peoria, AZ Science Tutors
Peoria, AZ Statistics Tutors
Peoria, AZ Trigonometry Tutors | {"url":"http://www.purplemath.com/peoria_az_algebra_2_tutors.php","timestamp":"2014-04-16T13:32:46Z","content_type":null,"content_length":"24075","record_id":"<urn:uuid:6e4c5752-98f6-4747-bbc8-2fc39851417b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
Marius Sophus Lie Summary
World of Mathematics on Marius Sophus Lie
Marius Sophus Lie was one of the first prominent Norwegian scientists and among the last of the great 19th-century mathematicians. His main contribution was his theory of groups. Lie groups and Lie algebras are fundamental tools in many parts of 20th century mathematics, from the theory of differential equations to the understanding of elementary particle physics. Although he was an isolated
academic who generally lacked regular contact with colleagues or interested students, Lie produced his finest work in collaboration with Felix Klein and later Friedrich Engel.
Lie was born on December 17, 1842, in Nordfjordeide, Norway, the youngest of six children. His father, Johann Herman Lie, was a Lutheran pastor. Lie's education was standard: he first studied in
Moss, moving on to Kristiania (present-day Oslo) to study at Nissen's Private Latin School from 1857 to 1859. For the next six years, Lie studied mathematics and science at Kristiania University,
graduating without distinction in 1865.
This section contains 907 words
(approx. 4 pages at 300 words per page) | {"url":"http://www.bookrags.com/biography/marius-sophus-lie-wom/","timestamp":"2014-04-18T07:13:00Z","content_type":null,"content_length":"32384","record_id":"<urn:uuid:e9c1a9a4-d8f9-4056-a044-0108cdb1570e>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00634-ip-10-147-4-33.ec2.internal.warc.gz"} |
Electrohydrodynamic linear stability of two immiscible fluids in channel flow
Electrochimica Acta 04/2006; 51(20):6585. DOI:10.1016/j.electacta.2006.02.002
ABSTRACT The electrohydrodynamic instability of the interface between two viscous fluids with different electrical properties in plane Poiseuille flow has recently found applications in mixing and
droplet formation in microfluidic devices. In this paper, we perform the stability analysis in the case where the fluids are assumed to be leaky dielectrics. The two-layer system is subjected to an
electric field normal to the interface between the two fluids. We make no assumption on the magnitude of the ratio of fluid to electric time scales, and thus solve the full conservation equation for
the interfacial charge. The electric field is found to be either stabilizing or destabilizing, and the influence of the various parameters of the problem on the interface stability is thoroughly investigated.
ABSTRACT: Based on a modified-Darcy—Maxwell model, two-dimensional, incompressible and heat transfer flow of two bounded layers, through electrified Maxwell fluids in porous media is performed.
The driving force for the instability under an electric field is an electrostatic force exerted on the free charges accumulated at the dividing interface. Normal mode analysis is considered to study the linear stability of the disturbed layers. The solutions of the linearized equations of motion with the boundary conditions lead to an implicit dispersion relation between the growth
rate and wave number. These equations are parameterized by the Weber number, Reynolds number, Marangoni number, dimensionless conductivities, and dimensionless electric potentials. The case of long-wave interfacial stability has been studied. The stability criteria are derived theoretically, and stability diagrams are obtained. In the limiting cases, some previously published results
can be considered as particular cases of our results. It is found that the Reynolds number plays a destabilizing role in the stability criteria, while a damping influence is observed with increasing Marangoni number and Maxwell relaxation time.
Communications in Theoretical Physics 06/2011; 55(6):1077. · 0.95 Impact Factor
ABSTRACT: This paper investigates analytically and experimentally electrohydrodynamic instability of the interface between two viscous fluids with different electrical properties under constant
flow rates in a microchannel. In the three-dimensional analytical model, the two-layer system is subjected to an electric field normal to the interface between the two fluids. There is no
assumption on the magnitude of the ratio of fluid to electric time scales, and thus the linear Poisson–Boltzmann equation is solved using the separation-of-variables method for the densities of bulk
charge and surface charge. The electric field and fluid dynamics are coupled only at the interface through the tangential and normal interfacial stress balance equations. In the experiments, two
immiscible fluids, aqueous NaHCO3 (the high electrical mobility fluid) and silicone oil (polydimethylsiloxane, the low electrical mobility fluid), are pumped into a microchannel made in a poly(methyl methacrylate) (PMMA) substrate. The normal electric field is applied using a high voltage power supply. The results showed that the external electric field and increasing width of the microchannel destabilize the interface between the immiscible fluids. At the same time, the viscosity of the high electrical mobility fluid and the flow rates of the fluids have a stabilizing effect. The experimental
results and the analytical results show a reasonable agreement.
International Journal of Heat and Mass Transfer 01/2012; 55(23-24):6994-7004. · 2.32 Impact Factor
ABSTRACT: This report is about microfluidic extraction systems based on droplets of an aqueous two-phase system. Mass transfer between the continuous phase and a dispersed droplet is demonstrated by microextraction of ruthenium red in a microfluidic device. Droplets are generated with an electrohydrodynamic method in the same device. By comparing brightness in the digital image of a solution
with known concentrations of ruthenium red and those of a droplet in the microextraction, ruthenium red concentration was measured along the microextraction channel, resulting in good agreement
with a simple diffusion model. The maximum partition coefficient was 9.58 in the experiment with the 70-mm-long-channel microextractor. The method is usable for terminating microextraction by
electrohydrodynamic manipulation of droplet movement direction. Droplets of different ruthenium red concentration, 0.12 and 0.24% (w/w) in this experiment, can be moved to desired place of
microfluidic system for further reaction through respectively branched outlets. In this study droplet-based microextraction is demonstrated and the mass transport is numerically analyzed by
solving the diffusion-dissolution model.
Journal of chromatography. A 06/2010; 1217(24):3723-8. · 4.19 Impact Factor
Dec 28, 2013 | {"url":"http://www.researchgate.net/publication/228855568_Electrohydrodynamic_linear_stability_of_two_immiscible_fluids_in_channel_flow","timestamp":"2014-04-21T12:31:46Z","content_type":null,"content_length":"244424","record_id":"<urn:uuid:c0f969b3-01e6-4e00-b597-1307d746a7bd>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00287-ip-10-147-4-33.ec2.internal.warc.gz"} |
Further adventures with higher moments
December 23, 2013
By Pat
Additional views of the stability of skewness and kurtosis of equity portfolios.
A post called “Four moments of portfolios” introduced the idea of looking at the stability of the mean, variance, skewness and kurtosis of portfolios through time.
That post gave birth to a presentation at the London Quant Group.
That talk gave birth to several questions — the present post explores one of them.
A critique of the plots like Figures 3 and 4 of “Four moments of portfolios” was that there is autocorrelation in the ranks from one day to the next because they are based on 250-day windows.
Figures 1 through 4 here change that — these look at the mean absolute difference of days that are 250 days apart. So the two ranks in the difference are formed in blocks that are adjacent but not overlapping.
Figure 1: Mean absolute value of the differences of ranks 250 days apart for the 200-name random portfolios.
Figure 1 is the same as Figure 4 of “Four moments of portfolios” except for the lag in the difference. When there are overlapping windows, the higher moments look more stable than the mean (but see
below); while in the non-overlapping case here, they have ever so slightly worse stability than the mean.
Figure 2: Mean absolute value of the differences of ranks 250 days apart for the 200-name random portfolios using returns Winsorized at 3 (robust) standard deviations.
Comparing Figure 2 with Figure 1 we see Winsorization to have a minimal effect for the higher moments when the windows are not overlapping. In the overlapping case Winsorization made the higher
moments look much less stable.
Figure 3: Mean absolute value of the differences of ranks 250 days apart for the 200-name random portfolios using returns with and without Winsorizing at 3 (robust) standard deviations for the mean
and variance.
In Figure 3 we see that Winsorization still makes the variance less stable for non-overlapping windows, but the difference is more muted. Figure 4 shows that for skewness and kurtosis in the
non-overlapping case the only effect of Winsorization seems to be to decrease variability slightly.
Figure 4: Mean absolute value of the differences of ranks 250 days apart for the 200-name random portfolios using returns with and without Winsorizing at 3 (robust) standard deviations for skewness
and kurtosis.
One of the questions during the presentation was why the outliers for the mean are all on the upside and the outliers for the variance all on the downside. Antonia Lim's intuition that autocorrelation causes it seems believable — there is no such pattern in Figure 3 here. However, I still don't understand the idea.
Total instability
We can see what to expect if there is no stability at all by randomly permuting the returns each day among the portfolios.
Figure 5: Mean absolute value of the differences of ranks on adjacent days for permuted returns.
Figure 5 confirms Robert Macrae’s suggestion that the line of usefulness is downward sloping.
Figure 6: Mean absolute value of the differences of ranks 250 days apart for permuted returns.
Comparing Figures 3 and 4 with Figure 6 suggests that the mean, skewness and kurtosis might have a sliver of stability.
Why is Antonia right about autocorrelation?
Why is Robert right about differing baselines in the overlapping window case (Figure 5)?
This new view makes skewness and kurtosis appear even less useful in equities than before. Any stability they have on this time scale is minimal. However, there might be more stability on a shorter
time scale.
and they like some wild river flow as we go further in
from “Further In” by Greg Brown
Appendix R
Computations and plots were done in R.
lagged differences
It was trivial to change from doing a difference between adjacent days and a difference between days 250 apart:
mrngmean250d250.w200.lo <- apply(rankmean250.w200.lo, 2,
    function(x) mean(abs(diff(x, lag=250))))
Usually diff is only used with its first argument but there are two more arguments that are sometimes of interest. In this case the lag argument was what was needed.
> diff(1:9)
[1] 1 1 1 1 1 1 1 1
> diff(1:9, lag=3)
[1] 3 3 3 3 3 3
permute returns
The rows of the original return matrix were permuted (independently of each other) with:
permrret.w200.lo <- t(apply(rret.w200.lo, 1, sample))
The sample function with only one argument does a random permutation. The transpose of the result of this apply call is necessary for a reason explained in Circle 8.1.47 of The R Inferno.
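For readers without the original data objects, here is a self-contained toy version of the whole recipe (all names and data below are made up; the 'moments' package supplies a skewness function, but any implementation would do):

set.seed(1)
ret <- matrix(rnorm(750 * 20), nrow=750, ncol=20) # 750 days, 20 "portfolios"
library(moments)
# rolling 250-day skewness for each portfolio
rollskew <- sapply(1:ncol(ret), function(j)
    sapply(250:nrow(ret), function(i) skewness(ret[(i-249):i, j])))
# cross-sectional ranks each day, then mean absolute rank change at lag 250
rk <- t(apply(rollskew, 1, rank))
instab <- apply(rk, 2, function(x) mean(abs(diff(x, lag=250))))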
, or | {"url":"http://www.r-bloggers.com/further-adventures-with-higher-moments/","timestamp":"2014-04-17T04:15:03Z","content_type":null,"content_length":"42629","record_id":"<urn:uuid:0377a514-2355-4a8d-9090-551630745ec7>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00516-ip-10-147-4-33.ec2.internal.warc.gz"} |
Statistics : The Exploration and Analysis of Data
ISBN: 9780495390879 | 0495390879
Edition: 6th
Format: Hardcover
Publisher: Cengage Learning
Pub. Date: 7/18/2007
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help | {"url":"http://www.knetbooks.com/statistics-exploration-analysis-data-6th/bk/9780495390879","timestamp":"2014-04-21T05:42:13Z","content_type":null,"content_length":"29952","record_id":"<urn:uuid:9216d87b-2b93-4205-b8e3-37b2b9968abc>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00515-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Q in mathematica ??
Date: Dec 6, 2012 4:57 AM
Author: Q in mathematica
Subject: Q in mathematica ??
Write Mathematica Blocks that can solve the problem.
Write a code that verifies Fermat' s Little Theorem which says that : If [Phi](n) is the Euler Phi of n, i.e. the number of positive integers less than or equal to n which are relatively prime to n, then a^[Phi](n)[Congruent]1mod n for all a relatively prime to n. | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7933065","timestamp":"2014-04-16T16:56:47Z","content_type":null,"content_length":"1280","record_id":"<urn:uuid:5b8dbf2b-ca55-41b7-8b03-c241a8d9903a>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00395-ip-10-147-4-33.ec2.internal.warc.gz"} |
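One possible answer to the Math Forum question above (a sketch using only Mathematica built-ins; the helper name is mine): it checks the stated congruence a^EulerPhi[n] == 1 (mod n) for every a coprime to n, for all n up to a bound. Note the statement as given is Euler's generalization of Fermat's little theorem.

verifyEuler[nmax_] := And @@ Flatten @ Table[
    If[GCD[a, n] == 1, PowerMod[a, EulerPhi[n], n] == 1, True],
    {n, 2, nmax}, {a, 1, n - 1}]

verifyEuler[200] (* returns True *)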
Sorting Algorithm
- SIAM Journal on Computing , 1989
"... .We assume a parallel RAM model which allows both concurrent reads and concurrent writes of a global memory. Our main result is an optimal randomized parallel algorithm for INTEGER SORT (i.e.,
for sorting n integers in the range [1; n]). Our algorithm costs only logarithmic time and is the first kno ..."
Cited by 64 (12 self)
We assume a parallel RAM model which allows both concurrent reads and concurrent writes of a global memory. Our main result is an optimal randomized parallel algorithm for INTEGER SORT (i.e., for
sorting n integers in the range [1; n]). Our algorithm costs only logarithmic time and is the first known that is optimal: the product of its time and processor bounds is upper bounded by a linear
function of the input size. We also give a deterministic sub-logarithmic time algorithm for prefix sum. In addition we present a sub-logarithmic time algorithm for obtaining a random permutation of n
elements in parallel. And finally, we present sub-logarithmic time algorithms for GENERAL SORT and INTEGER SORT. Our sublogarithmic GENERAL SORT algorithm is also optimal. Key words. Randomized
algorithms, parallel sorting, parallel random access machines, random permutations, radix sort, prefix sum, optimal algorithms. AMS(MOS) subject classifications. 68Q25. A preliminary version of
this paper ...
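For orientation, the sequential linear-time baseline for INTEGER SORT (keys in [1, n]) is counting sort; the paper's contribution is matching that O(n) work bound on a PRAM. A Python sketch of the baseline (not the paper's parallel algorithm):

def integer_sort(a):
    # counting sort for keys in [1, n], n = len(a); O(n) time and space
    n = len(a)
    count = [0] * (n + 1)
    for x in a:
        count[x] += 1
    out = []
    for v in range(1, n + 1):
        out.extend([v] * count[v])
    return out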
"... There is an upsurge in interest in the Markov model and also more general stationary ergodic stochastic distributions in theoretical computer science community recently (e.g. see
[Vitter,KrishnanSl], [Karlin,Philips,Raghavan92], [Raghavan9 for use of Markov models for on-line algorithms, e.g., cashi ..."
Cited by 17 (4 self)
There is an upsurge in interest in the Markov model and also more general stationary ergodic stochastic distributions in the theoretical computer science community recently (e.g. see [Vitter,Krishnan91], [Karlin,Philips,Raghavan92], [Raghavan92] for use of Markov models for on-line algorithms, e.g., caching and prefetching). Their results used the fact that compressible sources are predictable (and vice versa), and showed that on-line algorithms can improve their performance by prediction. Actual page access sequences are in fact somewhat compressible, so their predictive methods can be of benefit. This paper investigates the interesting idea of decreasing computation by using learning in the opposite way, namely to determine the difficulty of prediction. That is, we will approximately learn the input distribution, and then improve the performance of the computation when the input is not too predictable, rather than the reverse. To our knowledge,
, 1986
"... We give an optimal parallel algorithm for selection on the EREW PRAM. It requires a linear number of operations and O(log n log* n) time. ..."
Cited by 1 (0 self)
We give an optimal parallel algorithm for selection on the EREW PRAM. It requires a linear number of operations and O(log n log* n) time.
, 2003
"... 3.1.1 Randomized Algorithms The technique of randomizing an algorithm to improve its efficiency was first introduced in 1976 independently by Rabin and Solovay & Strassen. Since then, this idea
has been used to solve myriads of computational problems ..."
3.1.1 Randomized Algorithms The technique of randomizing an algorithm to improve its efficiency was first introduced in 1976 independently by Rabin and Solovay & Strassen. Since then, this idea has
been used to solve myriads of computational problems
"... There is an upsurge in interest in the Markov model and also more general stationary ergodic stochastic distributions in theoretical computer science community recently, (e.g. see
[Vitter,Krishnan,FOCS91], [Karlin,Philips,Raghavan,FOCS92] [Raghavan92]) for use of Markov models for on-line algorithms ..."
There is an upsurge in interest in the Markov model and also more general stationary ergodic stochastic distributions in the theoretical computer science community recently (e.g. see [Vitter,Krishnan,FOCS91], [Karlin,Philips,Raghavan,FOCS92], [Raghavan92] for use of Markov models for on-line algorithms, e.g., caching and prefetching). Their results used the fact that compressible sources are predictable (and vice versa), and show that on-line algorithms can improve their performance by prediction. Actual page access sequences are in fact somewhat compressible, so their
predictive methods can be of benefit. This paper investigates the interesting idea of decreasing computation by using learning in the opposite way, namely to determine the difficulty of prediction.
That is, we will approximately learn the input distribution, and then improve the performance of the computation when the input is not too predictable, rather than the reverse. To our knowledge, this
is the first case of a computational problem where we do not assume any particular fixed input distribution and yet computation is decreased when the input is less predictable, rather than the reverse.
We concentrate our investigation on a basic computational problem: sorting and a basic data structure problem: maintaining a priority queue. We present the first known case of sorting and priority
queue algorithms whose complexity depends on the binary entropy H ≤ 1 of the input keys, where we assume that input keys are generated from an unknown but arbitrary stationary ergodic source. That is, we assume that each of the input keys can be arbitrarily long, but have entropy H. Note that H
- ILLUSTRATION OF REIF MACROS , 2000
"... ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2480274","timestamp":"2014-04-21T08:05:08Z","content_type":null,"content_length":"24687","record_id":"<urn:uuid:6a916bb4-f4ae-466f-8be1-786edb6baca5>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
Other pairing functions
int pairing_is_symmetric(pairing_t pairing)
Returns true if G1 and G2 are the same group.
int pairing_length_in_bytes_G1(pairing_t pairing)
Returns the length in bytes needed to represent an element of G1.
int pairing_length_in_bytes_x_only_G1(pairing_t pairing)
Returns the length in bytes needed to represent the x-coordinate of an element of G1.
int pairing_length_in_bytes_compressed_G1(pairing_t pairing)
Returns the length in bytes needed to represent a compressed form of an element of G1. There is some overhead in decompressing.
int pairing_length_in_bytes_G2(pairing_t pairing)
Returns the length in bytes needed to represent an element of G2.
int pairing_length_in_bytes_compressed_G2(pairing_t pairing)
Returns the length in bytes needed to represent a compressed form of an element of G2. There is some overhead in decompressing.
int pairing_length_in_bytes_x_only_G2(pairing_t pairing)
Returns the length in bytes needed to represent the x-coordinate of an element of G2.
int pairing_length_in_bytes_GT(pairing_t pairing)
Returns the length in bytes needed to represent an element of GT.
int pairing_length_in_bytes_Zr(pairing_t pairing)
Returns the length in bytes needed to represent an element of Zr. | {"url":"http://crypto.stanford.edu/pbc/chunked/ch03s03.html","timestamp":"2014-04-19T06:53:01Z","content_type":null,"content_length":"5326","record_id":"<urn:uuid:a6d96f57-ce79-4c8a-a299-21244d083a45>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00446-ip-10-147-4-33.ec2.internal.warc.gz"} |
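A small usage sketch tying these together (assumes a pairing_t initialized elsewhere, e.g. with pairing_init_set_str; the helper name is mine):

#include <stdlib.h>
#include <pbc/pbc.h>

/* Allocate a buffer sized for one compressed G1 element; the caller can
 * then serialize into it (e.g. with element_to_bytes_compressed). */
unsigned char *alloc_compressed_g1_buf(pairing_t pairing, int *len)
{
    *len = pairing_length_in_bytes_compressed_G1(pairing);
    return malloc(*len);
}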
a Heartbeat developer comments on my blog post
Alan Robertson (a major contributor to the Heartbeat project) commented on my post failure probability and clusters. His comment deserves wider readership than a comment generally gets so I’m making
a post out of it. Here it is:
One of my favorite phrases is "complexity is the enemy of reliability". This is absolutely true, but not a complete picture, because you don't actually care much about reliability, you care about availability.
Complexity (which reduces MTBF) is only worth it if you can use it to drastically cut MTTR – which in turn raises availability significantly. If your MTTR was 0, then you wouldn't care if you ever had a failure. Of course, it's never zero.
But, with normal clustering software, you can significantly improve your availability, AND your maintainability.
Your post makes some assumptions which are more than a little simplistic. To be fair, the real mathematics of this are pretty darn complicated.
First I agree that there are FAR more 2-node clusters than larger clusters. But, I think for a different reason. People understand 2-node clusters. I’m not saying this isn’t important, it is
important. But, it’s not related to reliability.
Second, you assume a particular model of quorum, and there are many. It is true that your model is the most common, but it’s hardly the only one – not even for heartbeat (and there are others we want
to implement).
Third, if you have redundant networking and multiple power sources, as you should, then system failures become much less correlated. The normal model which is used is completely uncorrelated failures.
This is obviously an oversimplification as well, but if you have redundant power supplies supplied from redundant power feeds, and redundant networking etc. it’s not a bad approximation.
So, if you have an MTTR of 4 hours to repair broken hardware, what you care about is the probability of having additional failures during those four hours.
If your HA software can recover from an error in 60 seconds, then that’s your effective MTTR as seen by (a subset) of users. Some won’t see it at all. And, of course, that should also go into your
computation. This depends on knowing a lot about what kind of protocol is involved, and what the probability of various lengths of failures is to be visible to various kinds of users. And, of course,
no one really knows that either in practice.
If you have a hardware failure every 5 years approximately, and a hardware repair MTTR of 4 hours, then the probability of a second failure during that time is about 0.009%. The probability of two failures occurring during that time is about 8×10^-7% – which is a pretty small number.
Probabilities for higher order failures are proportionately smaller.
But, of course, like any calculation, the probabilities of this are calculated using a number of simplifying assumptions.
It assumes, for example, that the probabilities of correlated failures are small. For example, the probability of a flood taking out all the servers, or some other disaster is ignored.
You can add complexity to solve those problems too ;-), but at some point the managerial difficulties (complexity) overwhelm you and you say (regardless of the numbers) that you don't want to go there.
Managerial complexity is minimized by uniformity in the configuration. So, if all your nodes can run any service, that's good. If they're asymmetric, and very wildly so, that's bad.
I have to go now, I had a family emergency come up while I was writing this. Later…
End quote.
It’s interesting to note that there are other models of quorum, I’ll have to investigate that. Most places I have worked have had a MTTR that is significantly greater than four hours. But if you have
hot-swap hard drives (so drive failure isn’t a serious problem) then having machines have an average of one failure per five years should be possible.
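A quick sanity check of Alan's arithmetic (a sketch in Python, assuming independent failures at a constant rate of one per five years and a four-hour repair window):

mtbf_hours = 5 * 365 * 24           # about 43800 hours between failures per node
p_one = 4.0 / mtbf_hours            # chance another node fails during the 4h repair
print("%.4f%%" % (100 * p_one))     # about 0.0091%
print("%.1e%%" % (100 * p_one**2))  # about 8.3e-07% for two more failures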
April 19th, 2007 | Category: Ha | {"url":"http://etbe.coker.com.au/2007/04/19/a-heartbeat-developer-comments-on-my-blog-post/","timestamp":"2014-04-21T12:09:06Z","content_type":null,"content_length":"74294","record_id":"<urn:uuid:a37ee0c0-8293-4272-8341-771c964e129c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
Full Waveform Inversion in Laplace Domain
Seminar Room 1, Newton Institute
Seismic Full Waveform Inversion (FWI) consists in the estimation of Earth's subsurface structure based on measurements of physical fields near its surface. It is based on the minimization of an
objective function measuring the difference between predicted and observed data. FWI is mostly formulated in the time or Fourier domain. However, FWI diverges if the starting model is far from the true model. This is a consequence of the lack of low frequencies in the seismic sources, which limits the recovery of the large-scale structures in the velocity model. Re-formulating FWI in the Laplace domain using a logarithmic objective function introduces a fast and efficient method capable of recovering long-wavelength velocity structure starting from a very simple initial solution, independent of the
frequency content of the data. In this presentation we will present the FWI formulated in Laplace domain and its application to synthetic and field seismic data. | {"url":"http://www.newton.ac.uk/programmes/INV/seminars/2011121415301.html","timestamp":"2014-04-17T21:36:53Z","content_type":null,"content_length":"4423","record_id":"<urn:uuid:24c1a26b-5858-4417-aa4b-befb826644df>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00250-ip-10-147-4-33.ec2.internal.warc.gz"} |
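For reference, the standard ingredients of Laplace-domain FWI as usually written in the literature (my summary, which may differ from the speaker's notation): the damped wavefield is the Laplace transform
\[ \tilde{u}(\mathbf{x},s) = \int_0^{\infty} u(\mathbf{x},t)\,e^{-st}\,dt, \]
and the logarithmic objective function to be minimized compares transformed predicted and observed data,
\[ E(s) = \frac{1}{2}\sum_i \left|\ln\frac{\tilde{u}_i(s)}{\tilde{d}_i(s)}\right|^2. \]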
Laws of Exponents
June 2nd 2011, 04:03 PM
Laws of Exponents
First, I just want to say "Hello!" since I am new here. I am just starting my PreCal class this summer. I am a CS major, but have to take it for my degree.
My problem is this:
I have to express the following as a number in the form a/b, assuming a and b are integers.
-2^4 + 3^-1
I know that to get rid of the negative exponent I need to make it 1/3^1. However, I cannot seem to figure it out completely. Any help would be appreciated.
June 2nd 2011, 04:10 PM
First, I just want to say "Hello!" since I am new here. I am just starting my PreCal class this summer. I am a CS major, but have to take it for my degree.
My problem is this:
I have to express the following as a number in the form a/b, assuming a and b are integers.
-2^4 + 3^-1
I know that to get rid of the negative exponent I need to make it 1/3^1. However, I cannot seem to figure it out completely. Any help would be appreciated.
You have this
Now compute 2 to the forth power and then get a common denominator and add.
June 2nd 2011, 04:35 PM
You have this
Now compute 2 to the forth power and then get a common denominator and add.
I did it this way and came up with \frac{49}{3}
I know the correct answer is \frac{47}{3} , though.
I have tried to reverse engineer it for a while now, but can't seem to figure out exactly where I am messing it up. I am sure this is a very simple mistake I am making since I did the other
questions in the same section without much trouble.
Also, any idea why my fraction tag isn't working? :)
June 2nd 2011, 04:48 PM
Do I need to change the + sign to a - sign when I am making the exponent positive?
June 2nd 2011, 05:06 PM
You need to follow the order of operations. So you do exponents first
$-2^4+3^{-1} \iff -16+\frac{1}{3}$
Now you need to get a common denominator
$-16 \cdot \frac{3}{3}+\frac{1}{3}=...$
Can you finish from here?
June 2nd 2011, 05:13 PM
That makes sense now that I look at it...I forgot that {-2}^{ 4} is different than {(-2)}^{4 } . I was using 16 instead of -16. Thanks for the help! | {"url":"http://mathhelpforum.com/pre-calculus/182269-laws-exponents-print.html","timestamp":"2014-04-20T17:16:10Z","content_type":null,"content_length":"8078","record_id":"<urn:uuid:19be558b-8fa5-4a57-bbfb-d414f13d6724>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00348-ip-10-147-4-33.ec2.internal.warc.gz"} |
Correct the mistake in the equation and balance it: LI+O2-->LIO2
Is this for the combustion of Lithium? Lithium ions have a charge of 1+ so the equation should be \[4Li ^{+} + O _{2} \rightarrow 2Li _{2}O\]
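A quick atom count confirms that this form is balanced (strictly speaking it is elemental lithium, written \(Li\) rather than \(Li^{+}\), that burns): in \[4Li + O_{2} \rightarrow 2Li_{2}O\] there are 4 Li on each side and 2 O on each side.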
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/5070d759e4b0c2dc8340aa2e","timestamp":"2014-04-18T23:35:22Z","content_type":null,"content_length":"27674","record_id":"<urn:uuid:7e756664-08f5-4801-a94c-002e69844ff4>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fast decimal arithmetic module released
today I have released the following packages for fast arbitrary precision
decimal arithmetic:
1. libmpdec
Libmpdec is a C (C++ ready) library for arbitrary precision decimal
arithmetic. It is a complete implementation of Mike Cowlishaw's
General Decimal Arithmetic specification.
2. fastdec.so
Fastdec.so is a Python C module with the same functionality as decimal.py.
With some restrictions, code written for decimal.py should work (see the usage sketch after this list).
3. deccheck.py
deccheck.py performs redundant calculations using both decimal.py and
fastdec.so. For each calculation the results of both modules are compared
and an exception is raised if they differ. This module was mainly developed
for testing, but could in principle be used for redundant calculations.
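Here is the usage sketch promised under item 2 (a hypothetical session; I am assuming the module exposes the same names as decimal.py, per the compatibility note):

from fastdec import Decimal, getcontext

getcontext().prec = 50
print(Decimal(1) / Decimal(7))
# 0.14285714285714285714285714285714285714285714285714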
Libmpdec passes IBM's official test suite and a multitude of additional tests.
Fastdec.so passes (with minor modifications) all Python unit tests. When run
directly, deccheck.py performs very exhaustive tests that compare fastdec.so
with decimal.py.
All tests complete successfully under Valgrind.
In a couple of initial benchmarks, libmpdec compares very well against
decNumber and the Intel decimal library. For very large numbers, the speed
is roughly the same as the speed of the apfloat library.
Fastdec.so compares quite well against gmpy and even native Python floats.
In the benchmarks, it is significantly faster than Java's BigDecimal class.
All tests have been completed on Linux 64/32-bit, Windows 64/32-bit, OpenSolaris
32-bit, OpenBSD 32-bit and Debian Mips 32-bit. For 32-bit platforms there is
a pure ANSI C version, 64 bit platforms require a couple of asm lines.
Further information and downloads at:
Stefan Krah | {"url":"http://www.velocityreviews.com/forums/t700207-fast-decimal-arithmetic-module-released.html","timestamp":"2014-04-19T09:49:29Z","content_type":null,"content_length":"39149","record_id":"<urn:uuid:c8c0280d-cda0-4692-be99-acf58d7fe3ea>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need to differentiate y = x^(log base5 (x)) in terms of natural log; already tried
March 4th 2013, 03:35 PM
Need to differentiate y = x^(log base5 (x)) in terms of natural log; already tried
So, I've got such a function: y = x^(log base5 (x)), which I have to differentiate in terms of ln;
I started by solving the equation y = x^(lnx/ln5) = x^(lnx - ln5) (wonder if here is my mistake)
after that ln y = ln x^(lnx - ln5)=> 1/y*y' = (lnx - ln5)'(ln x) + (ln x)'(lnx - ln5) => (1/x - 1/5)(ln x) + (1/x)(lnx - ln5) =>
y' = x^(lnx - ln5)[(1/x - 1/5)(ln x) + ((lnx - ln5)/x)], that's what I got, but webwork considers it incorrect;
please tell me where my mistake is.
March 4th 2013, 10:17 PM
Re: Need to differentiate y = x^(log base5 (x)) in terms of natural log; already tri
So, I've got such a function: y = x^(log base5 (x)), which I have to differentiate in terms of ln;
I started by solving the equation y = x^(lnx/ln5) = x^(lnx - ln5) (wonder if here is my mistake)
Yeah, this is your mistake. You would only be able to do this if it were $\displaystyle{ x^{\ln \frac{x}{5}} = x^{\ln x - \ln 5}}$
after that ln y = ln x^(lnx - ln5)=> 1/y*y' = (lnx - ln5)'(ln x) + (ln x)'(lnx - ln5) => (1/x - 1/5)(ln x) + (1/x)(lnx - ln5) =>
y' = x^(lnx - ln5)[(1/x - 1/5)(ln x) + ((lnx - ln5)/x)], that's what I got, but webwork considers it incorrect;
please tell me where my mistake is.
Carrying on from $y = x^{\frac{\ln x}{\ln 5}}$
$\ln y = \ln (x^{\frac{\ln x}{\ln 5}})$
$\ln y = \frac{\ln x}{\ln 5} \cdot \ln (x)$
Now just do some implicit differentiation and use the product rule on the right hand side to get your answer.
March 5th 2013, 04:48 AM
Re: Need to differentiate y = x^(log base5 (x)) in terms of natural log; already tri
so, I assume the right answer would be
$\frac{y'}{y} = \frac{\frac{1}{x}\ln 5 - \frac{1}{5}\ln x}{(\ln 5)^2}\,\ln x + \frac{\ln x}{\ln 5}\cdot\frac{1}{x} \Rightarrow$
$y' = \frac{\ln x}{\ln 5}\,\ln x\left[\frac{\frac{1}{x}\ln 5 - \frac{1}{5}\ln x}{(\ln 5)^2}\,\ln x + \frac{\ln x}{\ln 5}\cdot\frac{1}{x}\right]$
$y' = \frac{\ln x}{\ln 5}\,\ln x\left[\frac{\frac{1}{x}\ln 5 - \frac{1}{5}\ln x}{(\ln 5)^2}\,\ln x + \frac{x\,\ln x}{\ln 5}\right]$ -- is there a way to contract it a little bit, or is it better to leave it like this?
March 5th 2013, 06:34 AM
Re: Need to differentiate y = x^(log base5 (x)) in terms of natural log; already tri
oh, let me just continue:
$y' = \frac{\ln x}{\ln 5}\,\ln x\left[\frac{\frac{1}{x}\ln 5 - \frac{1}{5}\ln x}{(\ln 5)^2}\,\ln x + \frac{x\,\ln x}{\ln 5}\right] \Rightarrow$
$y' = \frac{\ln x}{\ln 5}\,\ln x\left[\frac{\frac{\ln 5}{x} - \frac{\ln x}{5}}{(\ln 5)^2}\,\ln x + \frac{x\,\ln x}{\ln 5}\right] \Rightarrow$
$y' = \frac{\ln x}{\ln 5}\,\ln x\left[\frac{\ln x}{(\ln 5)^2}\left(\frac{\ln 5}{x} - \frac{\ln x}{5}\right) + \frac{x\,\ln x}{\ln 5}\right]$
does that make sense?
March 5th 2013, 06:53 AM
Re: Need to differentiate y = x^(log base5 (x)) in terms of natural log; already tri
It's hard to read your equations, so I'll try to translate them:
$\frac{y'}{y} = [((\frac{1}{x})\ln{5} - (\ln{x}) (\frac{1}{5}))/(\ln{5})^2]*\ln{x} + (\frac{\ln{x}}{\ln{5}})*(\frac{1}{x}) \rightarrow$
$y' = (\frac{\ln{x}}{\ln{5}})*\ln{x} {[((\frac{1}{x})\ln{5} - (\ln{x}) (\frac{1}{5}))/(\ln{5})^2]*\ln{x} + (\frac{\ln{x}}{\ln{5}})*(\frac{1}{x})}$
$y' = (\frac{\ln{x}}{\ln{5}})*\ln{x}{[((\frac{1}{x})\ln{5} - (\ln{x}) (\frac{1}{5}))/(\ln{5})^2]*\ln{x} + (x \frac{\ln{x}}{\ln{5}})}$
Here's how I would have done it:
$\frac{y'}{y}=\frac{1}{\ln{5}} (2\ln{x})\left(\frac{1}{x}\right)$
$y'=\frac{1}{\ln{5}} (2\ln{x})\left(\frac{1}{x}\right) (x^\frac{\ln{x}}{\ln{5}})$
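For anyone who wants a numerical sanity check of this result, here is a minimal C++ sketch comparing the formula against a central finite difference (the test point x = 3 is arbitrary):

// Check y' for y = x^(log_5 x): analytic formula vs. finite difference.
#include <cmath>
#include <cstdio>

double f(double x) { return std::pow(x, std::log(x) / std::log(5.0)); }

int main()
{
    double x = 3.0, h = 1e-6;
    double numeric  = (f(x + h) - f(x - h)) / (2.0 * h);                 // central difference
    double analytic = (2.0 * std::log(x) / (x * std::log(5.0))) * f(x); // y' = y * 2 ln x / (x ln 5)
    std::printf("numeric %.9f, analytic %.9f\n", numeric, analytic);
    return 0;
}

The two printed values should agree to several decimal places.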
- Hollywood | {"url":"http://mathhelpforum.com/calculus/214217-need-differentiate-y-x-log-base5-x-terms-natural-log-already-tried-print.html","timestamp":"2014-04-20T13:31:53Z","content_type":null,"content_length":"10834","record_id":"<urn:uuid:7822f5cc-a4ce-48ef-8f01-7c69475eb78f>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00447-ip-10-147-4-33.ec2.internal.warc.gz"} |
New Pi Computation Record Using a Desktop PC
from the more-digits-than-you dept.
hint3 writes
"Fabrice Bellard has calculated Pi to about 2.7 trillion decimal digits, besting the previous record by over 120 billion digits. While the improvement may seem small, it is an outstanding achievement
because only a single desktop PC, costing less than $3,000, was used — instead of a multi-million dollar supercomputer as in the previous records."
• Re:So... umm... (Score:4, Insightful)
by MichaelSmith (789609) on Tuesday January 05, 2010 @04:06AM (#30652352) Homepage Journal
Could someone fill me in what purpose that may be?
• Re:One thing to say (Score:1, Insightful)
by Anonymous Coward on Tuesday January 05, 2010 @04:15AM (#30652414)
Interesting, but it didn't really answer the question.
• Re:silly (Score:4, Insightful)
by David Jao (2759) <djao@dominia.org> on Tuesday January 05, 2010 @04:18AM (#30652428) Homepage
There is an algorithm now for calculating the nth digit of Pi at a whim.
The algorithm [wikipedia.org] only works for hexadecimal digits. There is no known formula or algorithm for calculating the n-th decimal digit directly.
Having said that, the existence or non-existence of an n-th digit algorithm does not have any relevance on the silliness or non-silliness of computing trillions of digits of pi, unless the
algorithm is extremely trivial (i.e. computing the digit takes less CPU time than a byte of I/O), which is not the case here.
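For reference, the hexadecimal digit-extraction method referred to here is the Bailey-Borwein-Plouffe (BBP) formula:

pi = SUM_{k=0 to infinity} (1/16^k) * [ 4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6) ]

which allows the n-th hexadecimal (or binary) digit of pi to be computed without computing the preceding digits.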
• Re:One thing to say (Score:5, Insightful)
by digitalhermit (113459) on Tuesday January 05, 2010 @05:04AM (#30652682) Homepage
In another thread someone had posted that there was no reason for any modern CPUs; the idea being that anything one could reasonably want to do with a computer was possible with decade-old hardware.
This.. *This* article is why I enjoy the breakneck pace of processor speed improvements. The thought of being able to do some pretty serious computing on a relatively inexpensive bit of hardware
-- even if it takes half a year to get results -- does what the printing press did. It allows the unwashed masses (of which I am one) a chance to do things that were once only the realm of
researchers in academia or the corporate world. Sure, all that you need to do some serious mathematics is a pen and paper, but more and more discoveries occur using methods that can only be
performed with a computer.
There's always the argument that cheap computers and cheap access to powerful software pollute the space with hacks and dilettantes. People have said this about desktop publishing, ray tracing, and even the growth of Linux. But it's this ability to do some amazing things with computers that makes it all worthwhile.
• Re:So... umm... (Score:0, Insightful)
by Anonymous Coward on Tuesday January 05, 2010 @06:55AM (#30653200)
although unfortunately he says he doesn't plan to release the code (somewhat unusual, since most of his projects are free software).
More than unusual - it also means that for all practical purposes, his record is worthless. If we cannot look at the program he used to calculate these digits and verify (i.e., prove) that it's
actually correct, what have we actually gained?
Without the program OR the data, all we really have is one guy's claim that he set a new world record, in secret, with the result not even available.
Now, I have no reason to distrust Bellard, and I don't really doubt he really did what he claims to have done; make no mistake about that. I don't think he's lying or anything, but I'd like to be
able to verify what he did for myself, or at least have the possibility to. That's how science works.
• Would have been nice to see some code. (Score:2, Insightful)
by flimflammer (956759) on Tuesday January 05, 2010 @07:04AM (#30653254)
I don't think many people will be running his program that takes 116 days to complete to get as far as he did. Would have been nice to at least see how the code worked.
• Re:this guy has a pretty impressive track record (Score:2, Insightful)
by Rapha222 (1648549) on Tuesday January 05, 2010 @07:39AM (#30653428)
Does this guy is God ?
• Re:Verification (Score:1, Insightful)
by Anonymous Coward on Tuesday January 05, 2010 @08:08AM (#30653520)
But the fact that the algorithm runs is the best proof of it working.
And it is also an exercise in practicality; the author notes in the PDF that there was a high probability of a random bit error during the 100+ day computation period. It is in a way important to
show that yes, long computations can be done and verified on ordinary hardware without ECC.
• Re:One thing to say (Score:3, Insightful)
by stephentyrone (664894) on Tuesday January 05, 2010 @12:16PM (#30655962)
Pi is interesting in that regard -- there are algorithms that can compute the Nth digit without needing to compute the intermediate digits. If you want to compute all digits from 0 to N, however,
there are more efficient algorithms.
• Re:Finally! (Score:3, Insightful)
by HTH NE1 (675604) on Tuesday January 05, 2010 @02:59PM (#30658780)
I think you'll find you don't need anywhere near that level of precision of pi to find the radius of the Universe in Planck lengths. 50 digits is sufficient.
Related Links Top of the: day, week, month. | {"url":"http://science.slashdot.org/story/10/01/05/006243/New-Pi-Computation-Record-Using-a-Desktop-PC/insightful-comments","timestamp":"2014-04-20T09:05:14Z","content_type":null,"content_length":"97534","record_id":"<urn:uuid:e7635afe-ef21-45b7-a421-54178c9b58d3>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00521-ip-10-147-4-33.ec2.internal.warc.gz"} |
April 1st 2008, 04:02 AM
Find $\lim_{(x,y)\to(0,0)} \frac{2xy}{x^2+2y^2}$ if it exists; explain why the limit does or does not exist.
Use polar coordinates to find $\lim_{(x,y)\to(0,0)} \frac{\sin(x^2+y^2)}{x^2+y^2}$.
April 1st 2008, 05:08 AM
mr fantastic
Find $\lim_{(x,y)\to(0,0)} \frac{2xy}{x^2+2y^2}$ if it exists; explain why the limit does or does not exist.
Mr F asks: What value do you get if you approach (0, 0) along the line y = mx ....? Therefore .....
Use polar coordinates to find $\lim_{(x,y)\to(0,0)} \frac{\sin(x^2+y^2)}{x^2+y^2}$.
Mr F says: Then go to polars!: ${\color{red}\lim_{r \rightarrow 0} \frac{\sin r^2}{r^2} = \lim_{t \rightarrow 0} \frac{\sin t}{t} = .....}$ | {"url":"http://mathhelpforum.com/calculus/32801-limits-print.html","timestamp":"2014-04-17T12:09:29Z","content_type":null,"content_length":"4559","record_id":"<urn:uuid:d1d59f0b-074e-4997-a5b5-e0a5b2d52d9c>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
Peter Suber, "Non-Standard Logics"
In the kinds of non-standard logics included, this bibliography aims for completeness, although it has not yet succeeded. In the coverage of any given non-standard logic, it does not at all aim
for completeness. Instead it aims to include works suitable as introductions for those who are already familiar with standard first-order logic.
Looking at these non-standard logics gives us an indirect, but usefully clear and comprehensive idea of the usually hazy notion of "standardness". In standard first-order logics:
● Wffs are finite in length (although there may be infinitely many of them).
● Rules of inference take only finitely many premises.
● There are only two truth-values, "truth" and "falsehood".
● Truth-values of given proposition symbols do not change within a given interpretation, only between or across interpretations.
● All propositional operators and connectives are truth-functional.
● "p
● Contradictions are always simply false (as opposed to both true and false).
● Contradictions imply everything. If the axioms contain an inconsistency, then all wffs are theorems.
● "
● Inferences are from wffs to wffs, or from truth-values to truth-values (by means of rules), not from meanings to meanings. Rules of inference refer to syntactic features of wffs or to
semantic truth-values, but not to other semantic features beyond truth-value such as meaning or intension.
● There are individual variables (as opposed to none).
● There are quantifiers (as opposed to none).
● Predicates take only individuals as arguments (as opposed to other predicates).
● Quantifiers bind only individual variables (as opposed to binding predicates).
● Domains are non-empty by default, or at least one individual exists in every interpretation.
● Universal quantifiers lack existential import. (Hence, Aristotle is non-standard.)
● All structure inside predicates is ignored (except the order of arguments and quantifiers, which can help us distinguish the subject from the objects of the verb), e.g. tense on verbs;
adverbs that modify verbs, adjectives, and other adverbs.
This list of standard features is limited to the features that someone has thought it interesting to deny or alter. There are many other features that are not interesting (or perhaps "not
logical") to deny or alter. For example, in standard logics:
● There are wffs; there are proofs; there are propositional symbols; there are connectives; there are predicates; etc. Would it be interesting to omit any of these?
● The set of wffs is decidable; hence the sets of symbols and rules of grammar are decidable. Would it be interesting to make these sets undecidable?
● The set of proofs is decidable; hence the set of axioms and rules of inference are decidable. Would it be interesting to make these sets undecidable?
● We can do without axioms if our rules allow us to prove logically valid wffs from zero premises. We can do without rules if all the theorems we want are axioms. But we can't do without both
axioms and rules unless we are willing to do without theorems too. Even if we were willing, would we say we then had a non-standard system or a non-system?
There are other features that might be interesting to deny or alter but that seem to have gone unexplored so far. For example, in standard logics:
● Standard truth-functions are total functions from the domain of sequences of truth-values to the range truth-values. Try permitting partial truth functions.
● Predicates are total functions from the domain of individuals to the range of truth-values. Try permitting partial predicate functions.
● Rules of inference are total functions from the domain of sequences of wffs to the range of wffs. Try permitting partial rule of inference functions.
● The formal language and rules of inference are arranged so that the language is closed under the operations of the rules. That is, when the rules of inference take wffs as input, they
generate only wffs as output. Try making the language open under the rules of inference.
● Theoremhood is binary (yes or no). Try making it fuzzy (in real degrees from 0 to 1). That would make the set of theorems a fuzzy set.
● The order of theorem-derivation does not matter, provided that a theorem's own premises are derived before the theorem. Try making order matter such that self-amendment becomes possible, i.e.
theorems of a given type change the axioms, rules, or language for subsequent theorems. Or try making order matter such that a wff is provable iff it is derivable in the usual way and the
previously proved theorem has such-and-such a form.
● Use your imagination to extend this list!
Categorical logic
Goldblatt, R. Topoi: The Categorical Analysis of Logic. North-Holland, 1979; rev. ed., 1984.
Lambek, J., and P.J. Scott. Introduction to Higher Order Categorical Logic. Cambridge University press, 1988.
Moortgat, M., "Categorial Logics: A Computational Perspective", in the proceedings of Computing Science in the Netherlands, 1990.
Reyes, G.E., "Logic and Category Theory," in Agazzi, 235-252.
Combinatory logic
Logics that replace variables with functions in order to clarify intuitive operations on variables such as substitution. Systems of arithmetic built from combinatory logic can contain all
partial recursive functions and avoid Gödel incompleteness.
Curry, Haskell B. Combinatory Logic. Vol. 1 by Curry and R. Feys; vol. 2 by Curry, J.R. Hindley, and J.P. Seldin. North-Holland, 1958, 1972.
Fitch, Frederic. Elements of Combinatory Logic. Yale University Press, 1974.
Hindley, R., B. Lercher, J. Seldin. Introduction to Combinatory Logic. Cambridge University Press, 1972.
Conditional logic
Logics that deal with the truth of conditional sentences, particularly in the subjunctive mood. The logic of counterfactual assertions.
Nute, Donald, "Conditional Logic," in Gabbay and Guenthner, vol. II.
Nute, Donald. Topics in Conditional Logic. D. Reidel, 1980.
Constructive logic
Logics in which a wff is true iff it is provable. Therefore, undecidable truths (like Gödel's G) are ruled out by definition.
Beeson, Michael J. Foundations of Constructive Mathematics. Freeman Cooper and Co., 1980.
Bridges, Douglas, and Fred Richman. Varieties of Constructive Mathematics. Cambridge University Press, 1987.
Goodstein, R.L. Constructive Formalism. University College, Leicester, 1951.
Heyting A. (ed.) Constructivity in Mathematics: Proceedings of the Colloquium Held at Amsterdam, 1957. North-Holland, 1959.
Troelstra, A.S. and D. van Dalen. Constructivism in Mathematics. North-Holland, 1988.
Cumulative logic
A logic extending the theory of types. Predicates are true of objects of all lower types, not (as in the simple theory of types) only of objects of the immediately preceding type.
Degen, J. Wolfgang. Systeme der kumulativen Logik. Philosophia Verlag, 1984.
Deontic logic
Logics of permission and obligation (derived from modal logics of possibility and necessity); hence the logic of norms and normative systems.
● See modal logic; prohairetic logic.
Anderson, A.R. The Formal Analysis of Normative Systems. Technical Report No. 2, U.S. Office of Naval Research Contract No. SAR/Nonr-609 (16) (1956).
Aqvist, Lennart, "Deontic Logic," in Gabbay and Guenthner, vol. II.
Copi and Gould, 4 essays.
Forrester, James. Being Good and Being Logical: Philosophical Groundwork for a New Deontic Logic. M.E. Sharpe, 1996.
Hilpinen, R. (ed.). Deontic Logic: Introductory and Systematic Readings. Reidel, 1971.
Hilpinen, R. (ed.). New Studies in Deontic Logic. Reidel, 1981.
Rescher, Nicholas. The Logic of Commands. Routledge & Kegan Paul, 1966.
Ross, Alf. Directives and Norms. Humanities Press, 1968.
Wright, Georg Henrik von. An Essay on Deontic Logic and the General Theory of Action. North-Holland, 1968.
Wright, Georg Henrik von. Norm and Action: A Logical Inquiry. Kegan Paul, 1963.
Dynamic logic
Logics for reasoning about computer programs, especially for proving that a program is "correct" or lacks semantic bugs or does what it is intended to do without error. In dynamic logics, the
truth-values of wffs can change according to the rules or functions of a program.
Turner, chapter 2.
Harel, David, "Dynamic Logic," in Gabbay and Guenthner, vol. II.
Harel, David. First Order Dynamic Logic. Springer-Verlag, 1979.
Epistemic logic
The logic of non-truth-functional operators such as "believes" and "knows". For example, let *p mean that I know proposition p. If *p, and q has the same truth-value as p, it does not follow that *q; hence * is not truth-functional.
Hintikka, Jaakko. Knowledge and Belief: An Introduction to the Logic of the Two Notions. 1962.
Hintikka, Jaakko and Merrill. The Logic of Epistemology and the Epistemology of Logic: Selected Essays. Kluwer Academic Publishers, 1988.
Schlesinger, George N. The Range of Epistemic Logic. Aberdeen University Press, 1985.
Erotetic logic
The logic of questions and answers. When does a proposition answer a question (correctly or incorrectly)? What's wrong with questions that presuppose false propositions (such as "Have you
stopped beating your spouse?")? Do questions bear truth-values? What is the most efficient strategy of asking questions to get an answer from a data base?
Aqvist, L.E. A New Approach to the Logical Theory of Questions, Part I. Filosofiska Foreningen, 1965.
Belnap, N.D. and T.B. Steel. The Logic of Questions and Answers. Yale University Press, 1976.
Harrah, David, "Erotetic Logics," pp. 3-21 of K. Lambert (ed.), The Logical Way of Doing Things. Yale University Press, 1969.
Harrah, David, "The Logic of Questions," in Gabbay and Guenthner, vol. II.
Harrah, David, "A System for Erotetic Sentences," in A.R. Anderson et al. (eds.), The Logical Enterprise.
Hintikka, Jaakko. The Semantics of Questions and the Questions of Semantics. North-Holland, 1976.
Kubinski, Tadeusz. An Outline Of the Logical Theory of Questions. Akademie-Verlag, 1980.
Lehnert, W. The Process of Question Answering. Wiley, 1978.
MacMillan, C.J.B. A Logical Theory of Teaching: Erotetics and Intentionality. Kluwer Academic Publishers, 1988.
Wisniewski, Andrzej. The Posing of Questions: Logical Foundations of Erotetic Inferences. Kluwer Academic Publishers, 1995.
Free logic
Standard logic without any existence assumptions. While quantifiers do have existential import, singular terms may sometimes denote no existing object or not denote at all. Logical truths
must be true for the empty domain as well as all non-empty domains. One motive is to make logic purer by eliminating some remaining metaphysical implications; another is to make translations
from natural languages more direct.
Bencivenga, Ermanno, "Free Logics," in Gabbay and Guenthner, vol. III.
Lambert, Karel (ed.). Philosophical Applications of Free Logics. Oxford University Press, 1991.
Schock, R. Logics Without Existence Assumptions. Stockholm: Almqvist and Wiksell, 1968.
Fuzzy logic
Logics in which the underlying set theory is fuzzy set theory. In fuzzy set theory, set membership is not a binary predicate (yes/no, or in/out), but a continuous quantity from 1 to 0. Fuzzy
logic introduces a similar gradation of truth-values.
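To make the gradation concrete, here is a minimal sketch of the common min/max (Zadeh) truth functions, assuming degrees of truth drawn from [0,1]; this is one standard choice of connectives among many:

// Zadeh-style fuzzy connectives over truth degrees in [0,1] (illustrative sketch).
#include <algorithm>
#include <cstdio>

double fuzzy_and(double a, double b) { return std::min(a, b); } // conjunction
double fuzzy_or(double a, double b)  { return std::max(a, b); } // disjunction
double fuzzy_not(double a)           { return 1.0 - a; }        // negation

int main()
{
    double p = 0.7, q = 0.4;
    std::printf("degree of (p and not-q): %.1f\n", fuzzy_and(p, fuzzy_not(q))); // min(0.7, 0.6) = 0.6
    return 0;
}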
Bandemer, Hans, and Siegfried Gottwald. Fuzzy Sets, Fuzzy Logic, Fuzzy Methods With Applications. Wiley, 1996.
Gupta, Madan M. and Takeshi Yamakawa (eds.). Fuzzy Logic in Knowledge-Based Systems, Decision and Control. Elsevier Science Publishers, 1988.
Mamdari, E.H., and J. Efstathiou, "Fuzzy Logic," Proceedings of the 1982 ACM Symposium on Expert Systems, 1982.
McNeill, Daniel, and Paul Freiberger. Fuzzy Logic. Simon and Schuster, 1993.
Nguyen, Hung T., and Elbert A. Walker. A First Course in Fuzzy Logic. CPC Press, 1996.
Turner, chapter 7.
Zadeh, L.A., "Fuzzy Logic and Approximate Reasoning," Synthese, 30 (1975) 407-28.
Higher-Order logic
Predicate logics in which quantifers bind predicate variables, and predicates can take other predicates as arguments. In first-order predicate logic, by contrast, quantifiers bind only
individual variables, and predicates take only individual terms as arguments.
Benthem, Johan van, and Kees Doets, "Higher-Order Logic," in Gabbay and Guenthner I, pp. 275-329.
Boolos, George, "On Second Order Logic," Journal of Philosophy, 72 (1975) 509-27.
Hickman, Larry. Modern Theories of Higher Level Predicates: Second Intentions in the Neuzeit. Philosophia Verlag, 1980. (Contemporary theories and those of post-medieval scholasticism c.
Lambek, J., and P.J. Scott. Introduction to Higher-Order Categorical Logic. Cambridge University Press, 1986.
Leblanc, Hughes, "Alternatives to Standard First-Order Semantics," in Gabbay and Guenthner, vol. I, pp. 189-274.
Shapiro, Stewart. Foundations without Foundationalism: A Case for Second-Order Logic. Oxford University Press, 1991.
Infinitary logic
Logics permitting infinitely long wffs, especially disjunctive strings to replace existential quantifiers and conjunctive strings to replace universal quantifiers, or permitting rules of
inference that take infinitely many premises. Spurred by Gödel's proof of the incompleteness of finitary logic and arithmetic.
Barwise, J., "Infinitary Logics," in Agazzi, pp. 93-112.
Intensional logic
Logics that include apparatus for signifying when two meanings (as opposed to two wffs, truth-values, sets, predicates, functions) are identical, and that analyzes inferences involving
meanings. (Non-intensional logics are called extensional.)
Anderson, Anthony C., "General Intensional Logic," in Gabbay and Guenthner, vol. II.
Benthem, Johan van. A Manual of Intensional Logic. Second ed., revised and expanded. University of Chicago Press, 1985.
Slater, B.H. Intensional Logic: An Essay in Analytical Metaphysics. Avebury, 1994.
Zalta, Edward N. Intensional Logic and the Metaphysics of Intensionality. MIT Press, 1988.
Intuitionistic logic
Propositional logics (and their predicate logic extensions) in which "p ∨ ~p" is not a theorem (nor is "~~p → p").
Dalen, Dirk van, "Intuitionistic Logic," in Gabbay and Guenthner, vol. III, pp. 225-339.
Dummett, M. Elements of Intuitionism. Oxford University Press, 1977.
Dummett, M., "The Philosophical Basis of Intuitionistic Logic," in H.E. Rose and J.C. Sheperdson (eds.), Logic Colloquium 1973, North-Holland, 1973, pp. 5-40; reprinted in Dummett's Truth and
Other Enigmas, Duckworth, 1978, pp. 215-47.
Heyting, A. Intuitionism, An Introduction. North-Holland, 1956.
Linear logic
Abramsky, S., "Computational Interpretations of Linear Logic," Imperial College Research Report DOC 90/20 (Oct. 1990).
Lafont, Y., "Introduction to Linear Logic," Lecture Notes for Summer School on Constructive Logics and Category Theory (Isle of Thorns, August 1988).
Scedrov, A., "A Brief Guide to Linear Logic," Bulletin of the EATCS, 41 (June 1990) 154-165.
Troelstra, A. Lectures on Linear Logic. University of Chicago Press, 1992.
Many-sorted logic
Logics in which variables are "typed" as they are in many computer programming languages.
Many-valued logic
Logics in which there are more than the two standard truth-values "truth" and "falsehood". Motivated by semantic paradoxes like the liar ("this statement is false") and by future contingents
("tomorrow there will be a sea-battle"), that don't easily take either standard truth-value, and by attempts to deal with uncertainty, ignorance, and "fuzziness".
Ackermann, R. Introduction to Many-Valued Logics. Routledge & Kegan Paul, 1967.
Copi and Gould, 2 essays.
Dunn, J.H. and G. Epstein (eds.). Modern Uses of Multiple-Valued Logics. D. Reidel Pub. Co., 1975.
Haack 1974 and 1978.
Malinowski, Grzegorz. Many-Valued Logics. Oxford University Press, 1994.
Rescher, Nicholas. Many-Valued Logics. McGraw-Hill, 1969.
Rine, D. (ed.). Computer Science and Multiple-Valued Logics: Theory and Applications. Amsterdam, 1977.
Rose, A., "Many-Valued Logics," in Agazzi, pp. 113-130.
Rosser, J.B. and A.R. Turquette. Many-Valued Logics. North- Holland, 1952.
Turner, chapter 3.
Urquhart, Alisdair, "Many-Valued Logic," in Gabbay and Guenthner, vol. III.
Zinoviev, A. Philosophical Problems of Many-Valued Logic. Reidel, 1963.
Modal logic
The logic of the modal categories (possibility, actuality, and necessity) and their use in reasoning, for example, in "strict" implication.
● See deontic logic; relevant logic.
Benthem, Johan van. Modal Logic and Classical Logic. Naples: Bibliopolis, 1983.
Bradley, Raymond, and Norm Schwartz. Possible Worlds: An Introduction to Logic and Its Philosophy. Hackett Pub. Co., 1979.
Bull, Robert A., and Krister Segerberg, "Basic Modal Logic," in Gabbay and Guenthner, vol. II.
Copi and Gould, 4 essays.
Chellas, Brian F. Modal Logic: An Introduction. Cambridge University Press, 1980.
Hughes, G.E., and M.J. Cresswell. A Companion to Modal Logic. Routledge, 1984.
Hughes, G.E., and M.J. Cresswell. An Introduction to Modal Logic. Methuen, 1968. And: A Companion to Modal Logic, Methuen, 1984.
Konyndyk, Kenneth, Jr. Introductory Modal Logic. University of Notre Dame Press, 1986.
Lemmon, Edward. The "Lemmon Notes": An Introduction to Modal Logic. Basil Blackwell, 1977.
Lewis, C.I., and C. Langford. Symbolic Logic. Dover Publications; Original 1932.
Mints, Grigori. A Short Introduction to Modal Logic. University of Chicago Press, 1992.
Popkorn, Sally. First Steps in Modal Logic. Cambridge University Press, 1995.
Turner, chapter 2.
Zeman, J.J. Modal Logic. Oxford University Press, 1973.
Non-monotonic logic
Logics in which the set of implications determined by a given group of premises does not necessarily grow, and can shrink, when new wffs are added to the set of premises.
Brewka, G. Nonmonotonic Reasoning: From Theoretical Foundation to Efficient Computation. Cambridge University Press, 1990.
Davis, M., "The Mathematics of Non-Monotonic Reasoning," Artificial Intelligence, 13 (1980) 73-80.
Gabbay, Dov M., et al. Handbook of Logic in Artificial Intelligence, Vol. 3: Nonmonotonic Reasoning and Uncertain Reasoning. Oxford University Press, 1994.
Genesereth and Nilsson, chapter 6.
Ginsberg, Matthew L. (ed.). Readings in Nonmonotonic Reasoning. Morgan Kaufmann Pub. Inc., 1987.
Rankin, Terry L., "When is Reasoning Nonmonotonic?" in James H. Fetzer (ed.), Aspects of Artificial Intelligence, Kluwer Academic Publishers, 1988, pp. 289-308.
Paraconsistent logic
Logics in which it is generally false that contradictions imply any and every proposition. Contradictory statements (p·~p) are both true and false, as opposed to simply false. Hence the
principle of excluded middle is affirmed, while the principle of non-contradiction is denied. Paraconsistent logics can be "lived" if one vows to accept all truths, but does not insist on rejecting all falsehoods. Paraconsistent logics do not hold that all paradoxes can be "solved"; they urge that paradoxes be recognized as contradictions.
Priest, Graham, "Contradiction, Belief, and Rationality," Proceedings of the Aristotelian Society, 86 (1986) 99-116.
Priest, Graham. In Contradiction: A Study of the Transconsistent. Kluwer Academic Publishers, 1987.
Priest, Graham, R. Routley, and J. Norman. Paraconsistent Logics. Philosophia Verlag, 1986.
Priest, Graham, R. Routley, and J. Norman (eds.). Paraconsistent Logic: Essays on the Inconsistent. Philosophia Verlag, 1987.
Priest, Graham, "When Inconsistency is Inescapable: A Survey of Paraconsistent Logics," South African Journal of Philosophy 7 (May 1988) 83-89.
Partial logic
Logics in which wffs need not be either true or false, or in which singular terms need not denote anything, or both. Logics that can cope with "truth-value gaps" and "denotation failures".
Blamey, Steven, "Partial Logic," in Gabbay and Guenthner, vol. III.
Prohairetic logic
The logic of preference. For example, if someone prefers A to B and B to C, must she prefer A to C? Must preference be transitive?
Halldén, Sören. The Logic of Better. Copenhagen, 1957.
Moutafakis, Nicholas J. The Logics of Preference: A Study of Prohairetic Logics in Twentieth Century Philosophy. Kluwer Academic Publishers, 1987.
Wright, Georg Henrik von. The Logic of Preference. Edinburgh, 1963.
Quantum logic
To reflect quantum indeterminacy and uncertainty, quantum logic adds a third truth-value ("indeterminate"); hence the metatheory denies the principle of excluded middle (PEM). Nevertheless,
for every p, "p
Dalla Chiara, Maria-Luisa, "Quantum Logic," in Gabbay and Guenthner, vol. III.
Dalla Chiara, Maria-Luisa, "Logical Foundations of Quantum Mechanics," in Agazzi, pp. 331-352.
Hooker, C.A. (ed.) The Logico-algebraic Approach to Quantum Mechanics. D. Reidel Pub. Co., [1975]-1979.
Jauch, J.M. and C. Piron, "What is Quantum Logic?", in Quanta, University of Chicago Press, 1969.
Mittelstaedt, P. Quantum Logic. D. Reidel, 1978.
Mittelstaedt, P. and W. Stachow, "The Principle of Excluded Middle in Quantum Logic", Journal of Philosophical Logic, 7 (1978) 181-208.
Stachow, W., "Completeness of Quantum Logic", Journal of Philosophical Logic, 5 (1976) 237-280.
van Fraassen, Bas, "The Labyrinth of Quantum Logics," in R. Cohen et al. (eds.), Logical and Epistemological Studies in Contemporary Physics, D. Reidel, 1974, pp. 224-54.
Relevant logic
Logics in which "p implies q" only if p is relevant to q. Designed to prevent the paradoxes of material implication from arising; p should never imply q simply because p is false or because q
is true. The advantage is that implication claims in natural language are better translated; the disadvantage is that implication loses truth-functionality.
● See linear logic, modal logic.
Anderson, Alan Ross and Nuel D. Belnap, Jr. Entailment: The Logic of Relevance and Necessity. Princeton University Press, Vol. 1, 1975, Vol. 2 (with J. Michael Dunn), 1993.
Diaz, M. Richard. Topics in the Logic of Relevance. Philosophia Verlag, 1981.
Dunn, J. Michael, "Relevance Logic and Entailment," in Gabbay and Guenthner, vol. III.
Norman, Jean, and Richard Sylvan (eds.). Directions in Relevant Logic. Kluwer Academic Publishers, 1989.
Read, Stephen. Relevant Logic: A Philosophical Examination of the Basis of Inference. Basil Blackwell, 1989.
Routley, Richard, et al. Relevant Logics and Their Rivals. Ridgeview Pub. Co., 1987.
Stoic logic
The logic of the ancient Stoics, marked by the introduction of tense operators.
Frede, Michael. Die Stoische Logik. Vandenhoeck and Ruprecht, 1974.
Mates, Benson. Stoic Logic. University of California Press, 1953.
Substance logic
The logic of entities related to one another by such indices as time, space, and possible worlds. Complex entities can model situations normally modelled by n-place relations.
Zemach, Eddy, and E. Walther, "Substance Logic," in Boston Studies in the Philosophy of Science, Reidel, vol 43, YEAR?, pp. 55-74.
Zemach, Eddy, "A Plea for a New Nominalism," Canadian Journal of Philosophy, 12 (YEAR?) 527-37.
Zemach, Eddy, "Numbers," Synthese, 64 (YEAR?) 225-39.
Substructural logic
Restall, Greg. An Introduction to Substructural Logics. Routledge, 2000. (Details.)
Schroeder-Heister, Peter (ed.). Substructural Logics. Oxford University Press, 1994.
Temporal (Tense) logic
Logics in which the times at which propositions bear certain truth-values can be indicated, in which the "tense" of the assertion can be indicated, and in which truth-values can be affected
by the passage of time.
● See dynamic logic, Stoic logic.
Benthem, Johan van. The Logic of Time. Second, revised edition. Kluwer Academic Publishers, 1991.
Burgess, John, "Basic Tense Logic," in Gabbay and Guenthner, vol. II.
Gabbay, Dov M., and Mark Reynolds. Temporal Logic: Mathematical Foundations and Computational Aspects. Oxford University Press, 1994.
Goldblatt, Robert. Logics of Time and Computation. University of Chicago Press, 1982.
McArthur, R.P. Tense Logic. D. Reidel, 1976
Prior, Arthur. Papers on Time and Tense. Oxford University Press, 1968; contains a good bibliography.
Prior, Arthur. Past, Present, and Future. Oxford University Press, 1978 (original 1967). (Sequel to his Time and Modality.)
Prior, Arthur. Time and Modality. Oxford University Press, 1957.
Prior, Arthur, and Kit Fine. Worlds, Times, and Selves. University of Massachusetts Press, 1977.
Rescher, Nicholas, and Alasdair Urquhart. Temporal Logic. Springer-Verlag, 1971; contains a good bibliography.
Turner, chapter 6.
Van Fraassen, Bas C., "Report on Tense Logic," in Agazzi, pp. 425-38.
Other logics
Barwise, K.J., Kaufman, M. and Makkai, M, "Stationary Logic," Annals of Mathematical Logic, 13 (1978) 171-224; a correction appears in 16:231-32.
Goddard, L., and R. Routley. The Logic of Significance and Context. Aberdeen, 1973. (More than one vol.)
Halmos, P.R. Algebraic Logic. New York, 1962.
Rescher, Nicholas, and Robert Brandom. The Logic of Inconsistency: A Study in Non-Standard Possible-World Semantics and Ontology. Rowman and Littlefield, 1979.
Non-Standard Logics in General
● Including works on more than one non-standard logic
Agazzi, Evandro. Modern Logic: A Survey. D. Reidel Pub. Co., 1981.
Belnap, N.D., Jr., "Modal and Relevance Logics: 1977," in Agazzi, 131-151.
Benthem, Johan van, "Modal Logic as Second-Order Logic," Mathematisch Instituut (Amsterdam), Report 77-04, 1977.
Cleave, John P. A Study of Logics. Oxford University Press, 1992.
Cocchiarella, Nino B., "Philosophical Perspectives on Quantification in Tense and Modal Logic," in Gabbay and Guenthner, vol II.
Copi, Irving M. and James A. Gould (eds.). Contemporary Philosophical Logic. St. Martin's Press, 1978 (4 essays on modal logic, 4 on deontic logic, 2 on many-valued).
Gabbay, Dov M., and Franz Guenthner (eds.). Handbook of Philosophical Logic. 4 vols. Kluwer Academic Publishers, 1983-89.
Gallin, D. Intensional and Higher-Order Modal Logic. North- Holland, 1975.
Genesereth, Michael R., and Nils J. Nilsson. Logical Foundations of Artificial Intelligence. Morgan Kaufmann Publishers, 1987.
Haack, Susan. Deviant Logics. Cambridge University Press, 1974.
Haack, Susan. The Philosophy of Logics. Cambridge University Press, 1981.
Hintikka, Jaakko, "Standard vs. Nonstandard Logic: Higher-Order, Modal, and First-Order Logics," in Agazzi, pp. 283-296.
Lambek, J. and P. J. Scott. Introduction to Higher Order Categorical Logic. Cambridge University Press, 1986.
Leblanc, Hughes, "Free Intuitionistic Logic: A Formal Sketch," pp. 133-45 in J. Agassi and R. Cohen (eds.), Scientific Philosophy Today. D. Reidel, 1982.
McCulloch, Warren S. Embodiments of Mind. MIT Press, 1965.
Rasiowa, H. An Algebraic Approach to Non-Classical Logics. North-Holland, 1974.
Thomason, Richmond H., "Combinations of Tense and Modality," in Gabbay and Guenthner, vol II.
Turner, Raymond. Logics for Artificial Intelligence. Ellis Horwood Ltd., 1984. | {"url":"http://legacy.earlham.edu/~peters/courses/logsys/nonstbib.htm","timestamp":"2014-04-16T04:20:55Z","content_type":null,"content_length":"39295","record_id":"<urn:uuid:6bf48a2f-adf9-43db-bb2e-fe46a0ff2b53>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00224-ip-10-147-4-33.ec2.internal.warc.gz"} |
Faster way to work out multiples
March 18th 2013, 04:19 PM #1
Oct 2011
Faster way to work out multiples
Hi all, my fractions are not great to be honest but here is my question.
Using addition, solve for a and b:
So I find a common multiple of the b coefficients; I choose to use 24.
Now I go ahead and do the addition:
I'm left with: $-20a=0 \implies a=0$
Is this correct so far? If so then I plug the answer back into the equation and get:
I only know from using a website that $b=-\frac{5}{2}$
My question is I just don't know how to go about finding the correct fraction, is there a simple method?
Last edited by uperkurk; March 18th 2013 at 04:37 PM.
Re: Faster way to work out multiples
After you've found your answers plug them back in the two equations and see if LHS = RHS
Re: Faster way to work out multiples
I don't know what LHS = RHS means. Could you elaborate?
Re: Faster way to work out multiples
LHS means left hand side, RHS means right hand side
Re: Faster way to work out multiples
But my question was with regards to finding the quick way to the correct fraction
Re: Faster way to work out multiples
Hi Uperkurk.
Your method is correct.
Your answer also seems to be correct. You are left with $a=0$. So by plugging that into either one of your equations (I'm choosing the first one for my example), you get $4\cdot 0-6b=15$. It seems that by solving this equation you get $b=\frac{-5}{2}$, which agrees with the answer.
For a more in-depth view on how to solve the equation
$\\4\cdot 0-6b=15\\\\0-6b=15\\\\-6b=15\\\\b=\frac{15}{-6}\\\\b=-\frac{5}{2}$
Does this clear it up for you?
Last edited by Paze; March 18th 2013 at 05:55 PM.
Re: Faster way to work out multiples
Hi Uperkurk.
Your method is correct.
Your answer also seems to be correct. You are left with $a=0$. So by plugging that into either one of your equations (I'm choosing the first one for my example), you get $4\cdot 0-6b=15$. It seems that by solving this equation you get $b=\frac{-5}{2}$, which agrees with the answer.
For a more in-depth view on how to solve the equation
$\\4\cdot 0-6b=15\\\\0-6b=15\\\\-6b=15\\\\b=\frac{15}{-6}\\\\b=-\frac{5}{2}$
Does this clear it up for you?
Perfect, thank you.
Oct 2011 | {"url":"http://mathhelpforum.com/algebra/215023-faster-way-work-out-multiples.html","timestamp":"2014-04-18T00:21:28Z","content_type":null,"content_length":"48235","record_id":"<urn:uuid:d3858f69-6f39-411c-a9c5-b6b0ca421bd7>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00277-ip-10-147-4-33.ec2.internal.warc.gz"} |
A gardener is installing a circular garden. The circumference of the circle is 132 feet. What radius should he use to lay out the circle if he uses pi = 22/7? The radius is ____ feet.
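Worked answer: since C = 2πr, the radius is r = C/(2π) = 132 ÷ (2 × 22/7) = (132 × 7)/44 = 21, so the radius is 21 feet.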
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/506667d4e4b08d1852123f38","timestamp":"2014-04-16T22:38:28Z","content_type":null,"content_length":"58613","record_id":"<urn:uuid:8e1b990e-62a3-4589-8afd-a2bc461f2c60>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00549-ip-10-147-4-33.ec2.internal.warc.gz"} |
OpenFOAM guide/Introduction to OpenFOAM Programming, A Walk Through reactingFOAM
From OpenFOAMWiki
1 Chapter 1: Why reactingFOAM ?
reactingFOAM is one of the simpler reacting flow solvers in OpenFOAM. Understanding how this particular code works, line by line, gives an idea not just of this particular solver but also of the general programming techniques employed in OpenFOAM. Besides, it also gives an understanding of both the flow calculations and the reaction calculations in OpenFOAM.
2 Chapter 2: What the initial header files do?
The beginning of reactingFoam (after the GNU license comments) reads
#include "fvCFD.H"
#include "hCombustionThermo.H"
#include "compressible/turbulenceModel/turbulenceModel.H"
#include "chemistryModel.H"
#include "chemistrySolver.H"
#include "multivariateScheme.H"
We shall now see what each of the header files does, one by one.
2.1 Section 1: fvCFD.H
This file brings in the most fundamental tools for performing any finite volume calculation. This file in fact just includes a bunch of other files, each of which represents a building block of the
edifice of the finite volume technique. This file reads
#include "parRun.H"
#include "Time.H"
#include "fvMesh.H"
#include "fvc.H"
#include "fvMatrices.H"
#include "fvm.H"
#include "linear.H"
#include "uniformDimensionedFields.H"
#include "calculatedFvPatchFields.H"
#include "fixedValueFvPatchFields.H"
#include "adjustPhi.H"
#include "findRefCell.H"
#include "constants.H"
#include "OSspecific.H"
#include "argList.H"
#include "timeSelector.H"
#ifndef namespaceFoam
#define namespaceFoam
using namespace Foam;
A small listing of each of the above header files is as follows.

parRun.H: Provides routines for initializing the parallel run and for finalizing it. Its instance is declared in the file argList.H.
Time.H: Controls the time information during the simulation.
fvMesh.H: Contains all the topological and geometric information related to the mesh for finite volume discretization.
fvc.H, fvm.H: (finite volume calculus) These contain other include files which provide routines for performing operations like interpolation, integration and finding derivatives of scalars, vectors and tensors. With fvc all operations are explicit, whereas with fvm all operations are implicit.
calculatedFvPatchFields.H: Provides Neumann type boundary conditions.
fixedValueFvPatchFields.H: Provides Dirichlet type boundary conditions.
findRefCell.H: Sets the cell for the reference pressure.
constants.H: Provides mathematical constants, e.g. $\pi,\, e$.
OSspecific.H: Provides functionality like file writing, deleting etc.
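For orientation, nearly every OpenFOAM solver built on fvCFD.H follows the same basic skeleton; a minimal sketch (setRootCase.H, createTime.H and createMesh.H are the standard OpenFOAM case-setup snippets) looks like this:

#include "fvCFD.H"

int main(int argc, char *argv[])
{
    #include "setRootCase.H"   // parse command-line arguments into an argList
    #include "createTime.H"    // construct the runTime object (Time.H)
    #include "createMesh.H"    // construct the fvMesh from the case directory

    while (runTime.loop())
    {
        Info<< "Time = " << runTime.timeName() << nl << endl;

        // ... assemble and solve fvm/fvc equations here ...

        runTime.write();       // write fields at the configured intervals
    }

    Info<< "End\n" << endl;
    return 0;
}

Note that Info, nl and endl are available because fvCFD.H pulls in the Foam namespace, as seen above.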
--Jens 20:55, 11 Dec 2007 (CET) | {"url":"http://openfoamwiki.net/index.php/Introduction_to_OpenFOAM_Programming,_A_Walk_Through_reactingFOAM","timestamp":"2014-04-18T16:45:09Z","content_type":null,"content_length":"31077","record_id":"<urn:uuid:414e3b70-5af1-438e-ad25-70f70ffbea8a>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00508-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rule of Twelfths for quick tidal estimates
The Rule of Twelfths, or if you are more used to metric the Rule of Tenths, is used to give a rough estimate of the state of the tide.
Most Tide Tables only give the times and heights of high and low water, in relation to the chart datum.
So, assuming that the tide in your area has a normal semi-diurnal cycle, either of these rules of thumb can be used to provide a quick estimate of the height and flow of the tide for times in between
high and low water.
The Rule of Twelfths.
The Rule of tenths.
The Calculation.
Some other rules of thumb.
The Rule of Twelfths.
For working out accurate tidal heights you should use a graph.
However, the rule of twelfths allows you to make a fair estimate quickly in your head.
The rule works on the assumption that the tidal flow increases smoothly to a maximum at half tide then decreases smoothly to zero flow at slack water.
The assumption is that the flow and rise in water level during the first hour after low tide will be one twelfth of the total range.
In the second hour it will be two twelfths, during the third and fourth hours three twelfths, then two twelfths in the fifth and finally one twelfth in the sixth hour.
Of course in order to use this rule of thumb, you must first find the time and height of high and low water for you area and the day in question.
Then subtract one from the other to get the tidal range. And remember, this is only a rough and ready estimate so add a safety margin to your answer.
An even shorter short cut to find the height at half tide, is to simply divide the range by two and add the result to the height of low water and then add to the chart datum.
Weems & Plath Endurance Series Clocks & Barometer Quartz Time Tide
Weems & Plath Endurance Series Clocks & Barometer Quartz Time Tide . Weatherproof instruments that won t tarnish or corrode! Ultra-hard mirror finish stands up to tough outdoor conditions; and is
guaranteed never to corrode. Case Material: Brass Movement Type: Quartz clock (AA battery included) Bezel: Fixed Dial Diameter: 4-7/8'' Case Diameter: 6'' Case Depth: 1-5/8'' Case Drilled for
Mounting: Yes Warranty: Limited lifetime
Rule of tenths.
The rule of twelfths works well enough for anyone used to working with feet, fathoms, etc.
For anyone more at home working in decimal the rule of tenths might be easier to use.
So, instead of using twelfths uses percentages.
• 10% for the 1st hour of range
• 15% for the 2nd hour of range
• 25% for the 3rd hour of range
• 25% for the 4th hour of range
• 15% for the 5th hour of range
• and 10% for the 6th hour of range
Now before any mathematician writes in to point out that 10% isn’t the same as a 12th I should reiterate that the rule of twelfths is only used to give an estimate.
The Calculation.
First thing you need is the tidal range that’s the difference between high water and low water for the day in question.
So at the end of the first hour of a rising tide, it will have risen one twelfth or 10% of the range.
At the end of the second hour it will have risen 1/12 for the first hour plus 2/12 for the second hour making a total of 3/12.
Using percentages you would have 10% for the first hour plus 15% for the second making a total of 25% at the end of the second hour.
In the same way at the end of the 4th hour the tide would be 75% of the range higher than low water or 25% lower than high water.
To get the actual depth of the water two hours after low water, that 25% of the range needs to be added to the height of low water and to the depth shown on the chart (some of these figures might be negative).
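For example, suppose low water is 1.0m and high water is 5.8m above chart datum, so the range is 4.8m. Two hours after low water the tide has risen 10% + 15% = 25% of the range, giving an estimated height of 1.0 + 0.25 × 4.8 = 2.2m above chart datum. A small helper function encoding the cumulative percentages might look like this (a sketch; the heights in the example are invented):

// Estimate tide height a whole number of hours after low water,
// using cumulative rule-of-tenths percentages (10, 25, 50, 75, 90, 100).
#include <cstdio>

double tideHeight(double lowWater, double highWater, int hoursAfterLow)
{
    static const double cumulative[7] = {0.00, 0.10, 0.25, 0.50, 0.75, 0.90, 1.00};
    if (hoursAfterLow < 0) hoursAfterLow = 0;   // clamp to the six-hour cycle
    if (hoursAfterLow > 6) hoursAfterLow = 6;
    return lowWater + (highWater - lowWater) * cumulative[hoursAfterLow];
}

int main()
{
    std::printf("2 hours after low water: %.2f m\n", tideHeight(1.0, 5.8, 2)); // prints 2.20 m
    return 0;
}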
Some other rules of thumb.
The Rule of Seven
This rule of thumb can be used to work out the change of range between springs and neaps.
As the change can be assumed to be linear, the range will change each day by 1/7th of the difference between the spring and neap range.
In other words subtract the neap range from the spring range and divide by seven to get the daily change in range.
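For example, if the spring range is 6.3m and the neap range is 2.1m, the range changes by roughly (6.3 - 2.1) ÷ 7 = 0.6m per day.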
The One In Sixty Rule
This rule of thumb can be used to work out your lateral displacement from your deviation.
For instance if you are one degree off course after 60 miles you will be 1 mile from you intended track.
So for every one degree off course the distance-off will be 1/60 of the distance covered.
For two degrees off then 2/60 of the distance traveled.
This only works with small distances and amounts of deviation.
I feel I must reiterate that these rules are just methods for giving a quick, rough approximation only and should be used with great caution when used for navigation.
Tidal streams for instance don't always have a simple relationship with the tidal height.
There are many locations where the nature of the sea bed and coastline cause all sorts off anomalies.
Some coasts have double high waters and others double low waters and there can be abnormal flow rates within the tidal sequence.
And the Tide Tables themselves are predictions based on previous experience and the figures for times and heights are subject to modification by wind and atmospheric pressure.
So, while rules of thumb, such as the rule of twelfths, allow you to do quick mental calculations, navigating at sea should be an accumulation of input from as many sources as possible, particularly
input from your own eyeballs. | {"url":"http://www.diy-wood-boat.com/Rule_of_twelfths.html","timestamp":"2014-04-16T09:01:38Z","content_type":null,"content_length":"30989","record_id":"<urn:uuid:2f6d4d48-3c1d-4113-b7db-ce809811c1cd>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00324-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bounded linear operator
June 1st 2011, 08:54 PM #1
Junior Member
Apr 2011
Bounded linear operator
Show that the range $\mathcal{R}(T)$ of a bounded linear operator $T: X \rightarrow Y$ is not necessarily closed.
Hint: Use the linear bounded operator $T: l^{\infty} \rightarrow l^{\infty}$ defined by $(\eta_{j}) = T x, \eta_{j} = \xi_{j}/j, x = (\xi_{j})$.
Attempt at a solution
My idea was to find an element $y \in l^{\infty}$ that does not belong to the range and then try to build a convergent sequence in $\mathcal{R}(T)$ that has limit $y$. The element $y = (1, 1, \ldots)$ satisfies the criteria because $T^{-1}y = \{ x\}$, with $x = (\xi_{j}), \xi_{j} = j$, but, clearly, $x \notin l^{\infty}$; therefore, $y \notin \mathcal{R}(T)$. The problem arises when I try to build the sequence, because $(T x_{m})$ with $x_{m} \in l^{\infty}$ cannot converge to $y$. Briefly, my problem is that I can't find a limit point of $\mathcal{R}(T)$ that doesn't belong to $\mathcal{R}(T)$.
Show that the range $\mathcal{R}(T)$ of a bounded linear operator $T: X \rightarrow Y$ is not necessarily closed.
Hint: Use the linear bounded operator $T: l^{\infty} \rightarrow l^{\infty}$ defined by $(\eta_{j}) = T x, \eta_{j} = \xi_{j}/j, x = (\xi_{j})$.
Attempt at a solution
My idea was to find an element $y \in l^{\infty}$ that does not belong to the range and then try to build a convergent sequence in $\mathcal{R}(T)$ that has limit $y$. The element $y = (1, 1, \ldots)$ satisfies the criteria because $T^{-1}y = \{ x\}$, with $x = (\xi_{j}), \xi_{j} = j$, but, clearly, $x \notin l^{\infty}$; therefore, $y \notin \mathcal{R}(T)$. The problem arises when I try to build the sequence, because $(T x_{m})$ with $x_{m} \in l^{\infty}$ cannot converge to $y$. Briefly, my problem is that I can't find a limit point of $\mathcal{R}(T)$ that doesn't belong to $\mathcal{R}(T)$.
That's the right idea. How about something like defining the $m^{\text{th}}$ element of your sequence to have $1$ in the first $m$ positions and zero afterwards -- i.e. $(1,\cdots,\underbrace{1}_{m^{\text{th}}},0,\cdots, 0,\cdots)$. This is clearly in $\text{im}(T)$ since it is equal to $T(1,\cdots,m,0,\cdots)$ and it converges to $(1,\cdots,1,\cdots)$.
How about something like defining the $m^{\text{th}}$ element of your sequence to have $1$ in the first $m$ positions and zero afterwards -- i.e. $(1,\cdots,\underbrace{1}_{m^{\text{th}}},0,\cdots, 0,\cdots)$. This is clearly in $\text{im}(T)$ since it is equal to $T(1,\cdots,m,0,\cdots)$ and it converges to $(1,\cdots,1,\cdots)$.
The trouble there is that $(1,\ldots,1,0,\ldots)$ does not converge to $(1,\ldots,1,\ldots)$ in the space $l^\infty$ (because the norm there is the sup norm). To avoid this difficulty, you could
modify the construction slightly by taking $y = \bigl(1,\tfrac1{\sqrt2},\tfrac1{\sqrt3},\ldots \bigr).$ Then $(1,\ldots,\tfrac1{\sqrt m},0,\ldots,0,\ldots)$ is in $\text{im}(T)$ since it is equal
to $T(1,\ldots,\sqrt m,0,\ldots)$, and it does converge to $(1,\ldots,\tfrac1{\sqrt m},\ldots)$ in the $l^\infty$ norm.
Last edited by Opalg; June 2nd 2011 at 12:54 AM. Reason: corrected typos
The trouble there is that $(1,\ldots,1,0,\cdots)$ does not converge to $(1,\ldots,1,\ldots)$ in the space $l^\infty$ (because the norm there is the sup norm). To avoid this difficulty, you could
modify the construction slightly by taking $y = \bigl(1,\tfrac1{\sqrt2},\tfrac1{\sqrt3},\ldots \bigr).$ Then $(1,\cdots,\tfrac1{\sqrt m},0,\cdots,0,\cdots)$ is in $\text{im}(T)$ since it is equal
to $T(1,\ldots,\sqrt m,0,\ldots)$' and it does converge to $(1,\ldots,\tfrac1{\sqrt m},\ldots)$ in the $l^\infty$ norm.
You're right, that was a stupid mistake.
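Spelling out the final step: with $y = \bigl(\tfrac{1}{\sqrt j}\bigr)_{j\ge 1}$ and $x_m = (1,\sqrt2,\ldots,\sqrt m,0,\ldots) \in l^\infty$, one gets $\|Tx_m - y\|_\infty = \sup_{j>m} \tfrac{1}{\sqrt j} = \tfrac{1}{\sqrt{m+1}} \to 0$, so $y$ is a limit point of $\mathcal{R}(T)$; but any preimage of $y$ would have to be $(\sqrt j)_{j\ge 1} \notin l^\infty$, so $y \notin \mathcal{R}(T)$ and the range is not closed.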
June 2nd 2011, 12:04 AM #4 | {"url":"http://mathhelpforum.com/differential-geometry/182211-bounded-linear-operator.html","timestamp":"2014-04-24T00:21:21Z","content_type":null,"content_length":"57239","record_id":"<urn:uuid:c59b45ec-9485-4b3a-933f-3fa7de260379>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00619-ip-10-147-4-33.ec2.internal.warc.gz"} |
Commerce, CA Prealgebra Tutor
Find a Commerce, CA Prealgebra Tutor
...Above all other things, I love to learn how other people learn and to teach people new things in ways so that they will find the material interesting and accessible.I took Spanish I-IV in high
school, and I took the AP Spanish exam. I received my high school's Spanish award for excellence in bot...
28 Subjects: including prealgebra, chemistry, Spanish, calculus
...That being said, many students do not have a good system in place for when or how to study and many are unaware of how to start. When I tutor I think it is important to assess the study habits
that the student currently has in place and determine their effectiveness. If the student reports that...
9 Subjects: including prealgebra, Spanish, algebra 1, ESL/ESOL
...TESTIMONIAL “Greg is a gifted teacher with an exceptional ability to connect with people and particularly, kids. He has that unusual combination of absolute fluency with the subject matter,
ability to customize the teaching approach to the individual student's needs, and overlaying it all, on a ...
24 Subjects: including prealgebra, chemistry, writing, geometry
I am an experienced tutor in math and science subjects. I have an undergraduate and a graduate degree in electrical engineering and have tutored many students before. I am patient and will always
work with students to overcome obstacles that they might have.
37 Subjects: including prealgebra, chemistry, calculus, English
...It's such a joy for me to see improvement in the students I work with. For example, one of my previous students received an award for Most Improved Student at the end of the academic year that
I worked with her. I try to be efficient with time during each session.
10 Subjects: including prealgebra, Spanish, algebra 1, SAT math
Related Commerce, CA Tutors
Commerce, CA Accounting Tutors
Commerce, CA ACT Tutors
Commerce, CA Algebra Tutors
Commerce, CA Algebra 2 Tutors
Commerce, CA Calculus Tutors
Commerce, CA Geometry Tutors
Commerce, CA Math Tutors
Commerce, CA Prealgebra Tutors
Commerce, CA Precalculus Tutors
Commerce, CA SAT Tutors
Commerce, CA SAT Math Tutors
Commerce, CA Science Tutors
Commerce, CA Statistics Tutors
Commerce, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/commerce_ca_prealgebra_tutors.php","timestamp":"2014-04-19T12:27:03Z","content_type":null,"content_length":"24159","record_id":"<urn:uuid:8d4ccbb1-df6a-4d3b-8550-12ff69b3e968>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00124-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unresolved external 'convert(float, float)'
Ipsec Espah
Unresolved external 'convert(float, float)'
Anyone know why i'm getting the following error?
[C++ Warning] Unit1.cpp(24): W8054 Style of function definition is now obsolete
[Linker Error] Unresolved external 'convert(float, float)' referenced from C:\Program Files\Borland\Cbuilder6\Projects\Project2\Unit1.ob j
#include <clx.h>
#include <iostream>
#pragma hdrstop
#pragma argsused
using namespace std;
float convert(float, float);
int main(int argc, char* argv[])
float height, pounds, BMI;
cout << "Enter your height in feet and inches: ";
cin >> height;
cout << "Enter your weight in pounds: ";
cin >> pounds;
BMI = convert(height, pounds);
cout << "Your Body Mass Index is " << BMI << ".";
return 0;
float convert(height, pounds)
float inches, meters, kilograms, BMI;
inches = height * 12;
meters = inches * 0.0254;
kilograms = pounds / 2.2;
BMI = kilograms / meters * meters;
return BMI;
I'm sure this code isn't exactly right and there are better ways of writing it... If someone wouldn't mind giving this a go and helping me understand why their way is better, I'd appreciate it :)
For some reason I think I might have added a few too many floats...
Here's the question:
Write a short program that asks for your height in feet and inches, and your weight in pounds. Use three variables to store the information. Have the program report your Body Mass Index. To calculate the BMI, first convert your height in feet and inches to your height in inches. Then, convert your height in inches to your height in meters by multiplying by 0.0254. Then, convert your weight in pounds into your mass in kilograms by dividing by 2.2. Finally, compute your BMI by dividing your mass in kilograms by the square of your height in meters. Use symbolic constants to represent the various conversion factors.
Zach L.
C++Builder gives the "obsolete style" warning when it thinks you are writing a C-style function - one which does not specify argument types until inside the function body.
Your problem is here that
float convert(height, pounds)
should be
float convert(float height, float pounds)
Hope this helps.
Also make sure the prototype at the top says float convert(float height, float pounds)
Zach L.
Okay, and responding to your second question (which I somehow completely missed earlier), you can cut down on the number of floats you use by directly modifying the arguments. An example,
height *= 12 * 0.0254;
pounds /= 2.2; // Though perhaps 'weight' is a more appropriate name for it now
Also, because of order of operations, your function is only returning the weight in kilograms right now (kilograms / meters * meters evaluates as (kilograms / meters) * meters = kilograms), so you want parentheses around the meters * meters.
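Putting those fixes together, a corrected version might look like this (a portable sketch: it drops the Borland-specific headers and uses symbolic constants, as the exercise asks):

#include <iostream>
using namespace std;

const double INCHES_PER_FOOT = 12.0;        // symbolic conversion factors
const double METERS_PER_INCH = 0.0254;
const double POUNDS_PER_KILOGRAM = 2.2;

float convert(float height, float pounds);  // argument types in the prototype too

int main()
{
    float height, pounds;
    cout << "Enter your height in feet: ";
    cin >> height;
    cout << "Enter your weight in pounds: ";
    cin >> pounds;
    cout << "Your Body Mass Index is " << convert(height, pounds) << "." << endl;
    return 0;
}

float convert(float height, float pounds)   // types here silence W8054
{
    float meters = height * INCHES_PER_FOOT * METERS_PER_INCH; // feet -> inches -> meters
    float kilograms = pounds / POUNDS_PER_KILOGRAM;            // pounds -> kilograms
    return kilograms / (meters * meters);                      // parentheses fix the precedence bug
}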
Ipsec Espah
Got it, thanks man | {"url":"http://cboard.cprogramming.com/cplusplus-programming/39691-unresolved-external-convert-float-float-printable-thread.html","timestamp":"2014-04-20T22:03:02Z","content_type":null,"content_length":"10457","record_id":"<urn:uuid:a99dcddd-a194-4925-86ee-5b9351069eca>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00074-ip-10-147-4-33.ec2.internal.warc.gz"} |
fourier transform, laplace (2) questions . can someone post solutions please.
Re: fourier transform, laplace (2) questions . can someone post solutions please.
Solutions will help with my study. This does not count toward my grade.
Re: fourier transform, laplace (2) questions . can someone post solutions please.
You won't get solutions here, you are expected to do your own work. We can help guide you if you show some effort.
Re: fourier transform, laplace (2) questions . can someone post solutions please.
Well, I have been trying to work out part (a) of Q3, and I got $\frac{1}{2\pi i k}\left(e^{-ik} - e^{ik}\right)$, but I don't think I've been doing it right, as it doesn't give me a value for cos r like in the question, and I need that to solve (b).
I'm clueless as to how to solve 4 or 5.
Once I see how they're done, I'll be able to do them. That's why I'd like solutions if possible. | {"url":"http://mathhelpforum.com/differential-equations/198507-fourier-transform-laplace-2-questions-can-someone-post-solutions-please.html","timestamp":"2014-04-17T08:49:58Z","content_type":null,"content_length":"39156","record_id":"<urn:uuid:5e638065-1cf7-4601-99fc-54e5c10d620c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Angular Speed
Hi everyone... new to the forum but I've read posts before that have helped me out a lot.
A patrol car is parked 50' from a long warehouse. The revolving light on top of the car turns at a rate of 30 revolutions per minute. Write theta as a function of x. How fast is the light beam
moving along the wall when the beam makes an angle of theta = 45 degrees with the line perpendicular from the light to the wall.
I'm kind of lost as to where to even start here. I know I'm looking for dx/dt.
Do I start by saying $\tan\theta = \frac{x}{50}$, so that $\theta = \arctan\left(\frac{x}{50}\right)$?
Then differentiate: $\frac{d\theta}{dt}$ (which is 30) equals the derivative of $\arctan\left(\frac{x}{50}\right)$ with respect to $t$.
With $u = \frac{x}{50}$, $u' = \frac{1}{50}\frac{dx}{dt}$, which gives $30 = \frac{(1/50)\,dx/dt}{1+(x/50)^2}$.
Trying to see if I'm on the right path here, and if so, what is the next step. Any help would be greatly appreciated.
everything is ok except for $\frac{d\theta}{dt}$ ... you have to convert 30 rpm to units of rad/min.
$\frac{dx}{dt}$ will be in ft/min
Awesome. I had actually thought about that too, but wanted to at least make sure I was on the right path. Thanks.
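For completeness, carrying the thread's setup through to a number (our arithmetic, using the rad/min correction above): from $x = 50\tan\theta$ we get $\frac{dx}{dt} = 50\sec^2\theta\,\frac{d\theta}{dt}$. With $\frac{d\theta}{dt} = 30$ rev/min $= 60\pi$ rad/min and $\sec^2 45^\circ = 2$, the beam moves along the wall at $\frac{dx}{dt} = 50 \cdot 2 \cdot 60\pi = 6000\pi \approx 18850$ ft/min. | {"url":"http://mathhelpforum.com/calculus/130173-angular-speed.html","timestamp":"2014-04-17T15:47:35Z","content_type":null,"content_length":"36471","record_id":"<urn:uuid:a2349fce-8727-4d2b-8062-fa7e3513d8d0>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00043-ip-10-147-4-33.ec2.internal.warc.gz"}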
What is 7S MULTIPLICATION QUIZ? | {"url":"http://mrwhatis.net/7s-multiplication-quiz.html","timestamp":"2014-04-16T05:20:25Z","content_type":null,"content_length":"35683","record_id":"<urn:uuid:6d23d727-e128-402e-b898-52bdf767c6e3>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Flux in different coordinate systems
I'm not sure what you mean by "unfixed co-ordinate system mathematics". You mean vector spaces without choosing a basis?
As for tensors, this is what I know: a tensor product of two vector spaces [itex]V[/itex] and [itex]W[/itex] both over field [itex]K[/itex] is a pair [itex](T,\otimes)[/itex] where [itex]T[/itex] a
vector space over [itex]K[/itex] and [itex]\otimes\colon V\times W\rightarrow T[/itex] a bilinear map with the property that for any bilinear map [itex]B_{L}\colon V\times W\rightarrow X[/itex] with
[itex]X[/itex] a vector space over [itex]K[/itex], there exists a unique linear map [itex]F_{\otimes}\colon T\rightarrow X[/itex] so that [itex]B_{L}=F_{\otimes}\circ\otimes[/itex]. Furthermore if
[itex](T,\otimes)[/itex] and [itex](T',\otimes')[/itex] are two tensor products of [itex]V[/itex] and [itex]W[/itex] then there exists a unique isomorphism [itex]F\colon T\rightarrow T'[/itex] such
that [itex]\otimes'=F\circ\otimes[/itex]. Although I understand what all this says (I know vector spaces, bilinear maps, linear maps, bijective linear maps = vector space isomorphisms), I don't
really grasp the idea or the practical implications. But any suggestions you have are welcome; if I don't understand, I'll try to learn it.
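One concrete consequence may make the definition feel less abstract (a standard fact, added here as an illustration): if [itex]V[/itex] and [itex]W[/itex] are finite-dimensional with bases [itex]e_1,\dots,e_m[/itex] and [itex]f_1,\dots,f_n[/itex], then the [itex]mn[/itex] elements [itex]e_i\otimes f_j[/itex] form a basis of [itex]V\otimes W[/itex], so [itex]\dim(V\otimes W)=mn[/itex]. The universal property then says that a bilinear map out of [itex]V\times W[/itex] is exactly the same data as a linear map out of [itex]V\otimes W[/itex], each determined by its values on the pairs [itex](e_i,f_j)[/itex]. | {"url":"http://www.physicsforums.com/showthread.php?p=4161816","timestamp":"2014-04-25T00:07:03Z","content_type":null,"content_length":"29701","record_id":"<urn:uuid:810c1306-6c87-4089-8994-8b60ff0ad048>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00477-ip-10-147-4-33.ec2.internal.warc.gz"}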
Physics 180 (Spring 2011)
Complete tarball of the software used
This course is an introduction to biophysics examining many topics in this broad area. This will be the first biophysics course taught by the Physics department; however, the participation of upper division students from other majors is encouraged. The course will cover a wide range of topics, applying physical principles and techniques to different problems in biology. There will be a number of projects for students to collaborate on. Varied backgrounds in a team, such as biology and physics, will enhance the learning experience.
This is a preliminary list of topics to be covered. The exact list will depend on the interest and backgrounds of the students taking the course.
Diffusion and Brownian motion
Physical and mathematical underpinnings:
• Langevin eqn, diffusion eqn, Einstein Relation.
Biological applications
• Sedimentation, bacterial metabolism, pattern formation
Electrostatic interactions
Physical and mathematical underpinnings:
• Poisson-Boltzmann eqn and its solution,
Chemical Forces
• Chemical Potential and Chemical reactions
• Electrophoresis
• Self-assembly, micelles, cell membranes
Cooperative transitions
• Helix coil transition
• Stretching of macromolecules
• Protein folding
• Unzipping of DNA
Machines in membranes
• Electro-osmotic effects
• Ion pumping
Nerve Impulses
• Action Potentials
• Ion Channels
Physical Techniques and related biology
• X-ray diffraction, light and neutron scattering
• Nuclear magnetic Resonance
• Fluorescence
• DNA Microarrays
• Manipulation of bio-molecules using optical tweezers.
• Tomography
• Patch clamps
These are some simulation projects using "scipy" to illustrate and explore many of the biophysics problems above.
Two dimensional diffusion
Three dimensional Brownian motion
• Absorption of a diffusing particle to a site on a surface
Brownian motion of a tethered molecule in an optical trap
Pattern formation and diffusion
• How nonlinear partial differential equations produce patterns
• Why a cheetah has stripes on its tail but spots on its body.
• How instabilities produce patterns
Tomography
• You are given 1D projections of an object at different angles
• and will be guided through how to reconstruct the original object
X-ray crystallography
• 2D structure reconstruction using heavy atom substitution
De-noising images
• Filtering, and deconvolution
Stretching DNA
Reference Material
A lot of material can be found on the web. See the useful links page. However, there is also an excellent hardcopy book by Philip Nelson, "Biological Physics", available at the Bay Tree Bookstore, that has a clear exposition and good problems in many areas of biophysics. In assignments, I will give references to web material as well.
Grading and Evaluations
Since this is an interdisciplinary topic, the way students participate in the course will vary. Therefore evaluation of student performance will depend on this. The instructor will try hard to gauge
how much has been learned and this will be based on several factors.
This is divided up into book problems and projects. Most of the projects are designed to be collaborative and collaboration is strongly encouraged.
I will give three half-hour quizzes during the quarter, on Tuesdays April 19, May 3, and May 17, largely based on the homework.
A final exam will be held Monday, June 6, 8:00AM-11:00AM.
This work was funded by National Science Foundation CCLI Grant DUE-0942207. | {"url":"http://physweb.ucsc.edu/drupal/courses/physics-180-spring-2011","timestamp":"2014-04-19T07:10:37Z","content_type":null,"content_length":"30806","record_id":"<urn:uuid:08b25250-11e4-4003-84d3-976edc301839>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00462-ip-10-147-4-33.ec2.internal.warc.gz"} |
'Jayadevan's rain formula better than D/L for T20'
I just heard the Frank Duckworth interview on Cricinfo and one comment particularly caught my attention.
"The scoring patterns in Twenty20 matches fitted in absolutely perfectly with the formula that we'd always used satisfactorily with 50-over games."
Okay, so Duckworth-Lewis (D/L) has no plan to redraw their curves. They are saying that the current curves are just fine and only twice in 70 occasions has there been dissent. And both times from
England captain Paul Collingwood!
So there is nothing wrong with D/L, although something might indeed be the matter with a loser named Paul.
It's good of Frank Duckworth to say this because it confirms that we now indeed have a real problem on our hands. The problem is that D/L simply doesn't do the job if the chasing teams have just 5, 6
or 7 overs to bat.
So what exactly is this problem?
The problem (look at the picture below) is that the ten D/L curves (respectively for 0, 1, 2 ... 9 wickets in hand) practically overlap when only 5, 6 or 7 overs remain. You aren't seeing ten
different curves distinctly; you are simply seeing one fat conglomerate of these curves.
So? Is that a problem?
That's the very crux of the problem. It means that D/L target is no longer computed based on a combination of 'overs remaining' and 'wickets in hand' ... it now depends almost entirely on 'overs
remaining' ... the 'wickets in hand' hardly matter!
The D/L target is therefore, for all practical purposes, the target set by the run rate. In the England-West Indies match, England had a run rate of 9.55 and West Indies (who won by scoring 60/2 in 6
overs) were just about required to match England's run rate ... not score significantly faster as you would want and expect given that they had so many wickets in hand.
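(A rough check of that claim, using the figures quoted in this piece: at England's 9.55 runs per over, six overs come to about 57.3 runs, so a 60-run target asked West Indies to score only fractionally faster, ten an over instead of 9.55.)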
The real merit of D/L is that it 'punishes' teams that lose wickets by raising the target; lose a wicket and your D/L target goes up! Sadly all this wonderful modelling breaks down when you have
these silly 5, 6 or 7 over innings.
In the England-WI match, the hosts only had to score 43/0 in 5 overs or 46/1 in 5 overs to win. So a top wicket is worth just 3 runs in this absurd and frenetic end game!
In fact Duckworth said in that interview that Kieron Pollard being stumped off a wide ball actually helped WI: they gained a run and a ball ... and that was apparently more valuable than losing
Pollard! Tell that to Nita Ambani someone.
When matches are reduced to a ridiculously low number of overs, D/L is not designed to do the job. I would have liked to hear Frank Duckworth truthfully tell us that. Surely Frank knows this!
When I look at the curves, say at the 30 overs remaining mark, I see a wonderful (and colourful) spread of those ten curves.
Exactly, and that's the real beauty of D/L! That's when D/L is really doing its job very well. Why even at the 10 overs remaining mark, the 10 curves are sufficiently well spaced out.
What are you getting at?
I'm suggesting that even if we require the team chasing to bat a minimum of 10 overs, D/L will do the job sufficiently well. In fact many readers suggested this option on the Rediff.com message board.
The problem is that everyone wants a result! When we can accept a draw after five days of Test cricket, why not after a couple of hours of Twenty20?
How then do we solve this problem?
It's a tough one, especially with Duckworth now apparently ruling out a review of his model. That's why I talked of the most productive overs (MPO) model. But, at first sight, it does seem that MPO
swings too strongly away from the team chasing. D/L sets an easier-than-fair target and MPO sets a harder-than-fair target --- that's the dilemma!
There was a suggestion on Tuesday to take the average of the D/L and MPO target, but in practice such marriages don't work. It's not a solution, it's more a 'fix'.
A second suggestion -- a MPO variant -- was to look at the total runs scored in the best consecutive sequence of overs. It's an intriguing idea -- and the sort that might appeal to a modeler. But
where does it take us? And do we also consider the wickets lost during this sequence?
A large number of solutions involve limiting both runs and wickets for the team chasing. The general idea is: if England score 200 in 20 overs, and West Indies only have 5 overs to play, let them score 50 for the loss of 2.5 wickets. Yes, that's the problem! Should that be 2 wickets or 3 wickets? And what if England opened with someone like Andrew Strauss -- because they thought they had 20 overs to bat -- while WI opened with Chris Gayle?
So is there no way out?
Well, we could (should!) check the sort of targets that V Jayadevan's method (VJD method) would set. The VJD method has the advantage of considering field restrictions in the first six overs -- so we
should expect it to set targets higher than D/L in the 5, 6 and 7 over games, where we have this serious difficulty with D/L. The VJD method was used in the Indian Cricket League, and performed well.
Otherwise -- and we said this on Tuesday too in these columns -- MPO is still a reasonable alternative. Here is something that I thought of last night. Why not use MPO with the proviso that the first
five overs must be bowled by five different bowlers? Every team is trying to hide its fifth bowler, and this fifth bowler usually gets walloped for runs.
To summarize, although MPO sets a very high target in the first five overs, the chasing team has the advantage of (a) 10 wickets, (b) field restrictions and (c) five different bowlers. Add a harder
ball into the equation and things seem to fall nicely into place! | {"url":"http://cricket.rediff.com/report/2010/may/05/jayadevan-rain-formula-better-than-duckworth-lewis-srinivas-bhogle.htm","timestamp":"2014-04-18T03:00:21Z","content_type":null,"content_length":"59375","record_id":"<urn:uuid:a74d5cd6-0e80-4b88-a14d-4f943db8d81c>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00157-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proper families for Anosov flows
So I've been skimming Bowen's 1972 paper "Symbolic Dynamics for Hyperbolic Flows" hoping it would give me some insight into how to build a Markov family for the cat flow (i.e., the Anosov flow
obtained by suspension of the cat map with unit height). For the sake of completeness, the cat flow $\phi$ is obtained as follows:
i. Consider the cat map $A$ on the 2-torus and identify points $(Ax,z)$ and $(x,z+1)$ to obtain a 3-manifold $M$
ii. Equip $M$ with a suitable metric (e.g., $ds^2 = \lambda_+^{2z}dx_+^2 + \lambda_-^{2z}dx_-^2 + dz^2$, where $x_\pm$ are the expanding and contracting directions of $A$ and $\lambda_\pm$ are the
corresponding eigenvalues.)
iii. Consider the flow generated by the vector field $(0,1)$ on $M$--that's the cat flow.
Unfortunately I'm getting stuck at the first part of Bowen's quasi-constructive proof, which requires finding a suitable set of disks and subsets transverse to the flow. Rather than rehash the
particular criteria for a set of disks and subsets used in Bowen's construction, I will relay a simpler but very similar set of criteria, for a proper family (which if it meets some auxiliary
criteria is also a Markov family):
$\mathcal{T} =$ {$T_1,\dots,T_n$} is called a proper family (of size $\alpha$) iff there are differentiable closed disks $D_j$ transverse to the flow s.t.
1. the $T_j$ are closed
2. $M = \phi_{[-\alpha, 0]}\Gamma(\mathcal{T})$, where $\Gamma(\mathcal{T}) = \cup_j T_j$
3. $\dim D_j = \dim M - 1$
4. diam $D_j < \alpha$
5. $T_j \subset$ int $D_j$ and $T_j = \bar{T_j^*}$ where $T_j^*$ is the relative interior of $T_j$ in $D_j$
6. for $j \ne k$, at least one of the sets $D_j \cap \phi_{[0,\alpha]}D_k$, $D_k \cap \phi_{[0,\alpha]}D_j$ is empty.
I've been stuck on even constructing such disks and subsets (let alone where the subsets are rectangles in the sense of hyperbolic dynamics). Bowen said this sort of thing is easy and proceeded under
the assumption that the disks and subsets were already in hand. I haven't found it to be so. The thing that's killing me is 6, otherwise neighborhoods of the Adler-Weiss Markov partition for the cat
map would fit the bill and the auxiliary requirements for the proper family to be a Markov family.
I've really been stuck in the mud on this one, could use a push.
ds.dynamical-systems gt.geometric-topology symbolic-dynamics
Just a wild guess, maybe this can help you arxiv.org/PS_cache/arxiv/pdf/0810/0810.5269v1.pdf ? they construct here a lot of Markov partitions for hyperbolic maps of 2-dimensional torus. – Dmitri
Nov 30 '09 at 22:59
Thanks, but no dice. Markov partitions of 2D hyperbolic toral automorphisms aren't hard for me to come by. I'm frankly surprised that I've been unable to adapt such a partition to a Markov family
for the corresponding flow. – Steve Huntsman Dec 1 '09 at 1:22
2 Answers
Take Adler-Weiss on $0\times\mathbb T^2$, $1/3\times\mathbb T^2$ and $2/3\times\mathbb T^2$. Take neighborhoods of this tripled Adler-Weiss. Then this collection would satisfy all the properties with $\alpha=1/3$.
I am not sure why you are particularly interested in the suspension flow; everything is determined by the base Anosov diffeo.
Edit: Indeed, this has to be tinkered with a bit. Say Adler-Weiss has two rectangles with neighborhoods $D_1$ and $D_2$. Then take the collection $0\times D_1$, $1/3\times D_1$, $2/3\times D_1$, $\varepsilon\times D_2$, $1/3+\varepsilon\times D_2$, $2/3+\varepsilon\times D_2$.
To ensure the second property take $\alpha=1/3+\varepsilon$.
Unless I'm mistaken, this violates 6) because the neighborhoods at a given z-value have nonempty intersections. – Steve Huntsman Dec 1 '09 at 1:13
Also it appears to me that staggering the rectangles along the z-direction in the putative solution above doesn't work, because then the flow doesn't cover M. – Steve Huntsman
Dec 1 '09 at 1:18
See comment above from 6 minutes before your edit. – Steve Huntsman Dec 1 '09 at 1:55
Re: $\alpha = 1/3 + \epsilon$: consider the set $[0,\epsilon] \times D_2$. This set will have a point not in the image of $(2/3 + \epsilon) \times D_2$ under $\phi_{[0,\alpha]}
$ – Steve Huntsman Dec 1 '09 at 2:45
But it will be in the image of $2/3\times D_1$. – Andrey Gogolev Dec 1 '09 at 3:00
Don't upvote this. I just figured since this is getting some attention from another question on MO I'd communicate the blurb that is going to go into what I'm writing now. (See also a related bit on meta.) A comment won't allow TeX at this point, hence the use of an answer.
Given a Markov partition $\mathcal{R} =$ {$R_1,\dots, R_n$} for the cat map and $m \ge 3$, consider the sets $R_{jk} := R_j \times$ {$\frac{k}{m} - j\epsilon$}, where $1 \le k \le m$ and $\epsilon < \frac{1}{mn}$. The family $\mathcal{R}' :=$ {$R_{jk}$} is readily seen to be a proper family for the cat flow. [A. Gogolev, private communication] The Poincaré map for $\mathcal{R}'$ sends $R_{jk}$ to $R_{j,k+1}$ for $1 \le k \le m-1$, and because $\mathcal{R}$ is a Markov partition for the cat map it follows that $\mathcal{R}'$ is a Markov family for the cat flow. | {"url":"http://mathoverflow.net/questions/7315/proper-families-for-anosov-flows/7325","timestamp":"2014-04-21T05:06:04Z","content_type":null,"content_length":"64738","record_id":"<urn:uuid:6877e465-ead9-443f-8af8-a57798ecbe13>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
Northborough Math Tutor
Find a Northborough Math Tutor
Hello. I am a Massachusetts educator certified to teach English grades 8 through 12, math grades 5 through 12, and theatre grades Kindergarten through 12. For six years I have been employed as
a high school math teacher, mostly specialized in MCAS prep but spending some time teaching Algebra 2, trigonometry, and pre-calculus.
27 Subjects: including calculus, ACT Math, statistics, algebra 1
I have 9 years of experience teaching all levels of high school mathematics in the public schools. I also have more than 6 years of experience tutoring mathematics to students ranging from 7
years old through adult learners. I have taught and/or tutored mathematics from basic addition and subtraction through calculus.
14 Subjects: including discrete math, differential equations, C, linear algebra
...Typically this involves having students work on problems relevant to the material they are studying. I make sure that students do as much as possible on their own, taking on for myself the role of a guide rather than simply an instructor. I use many examples and problems, starting with easy ones and working up to harder ones.
9 Subjects: including algebra 1, algebra 2, calculus, geometry
...There, I added more advanced techniques to my basic pattern-following and mending abilities. I have since worked with a variety of materials to create costuming as well as clothing patterns
and some upholstery as well. I finished my undergraduate degree at the Catholic University of America where it is required to take a minimum of four religion classes.
16 Subjects: including prealgebra, reading, grammar, public speaking
...I am a graduate of Framingham State University where I received my bachelors of science with a concentration in food and nutrition. I obtained my licensure as a Registered Dietitian and am
currently pursuing a master’s in education with a concentration in nutrition. I am passionate about health...
18 Subjects: including algebra 1, reading, elementary math, public speaking | {"url":"http://www.purplemath.com/northborough_math_tutors.php","timestamp":"2014-04-16T22:13:09Z","content_type":null,"content_length":"24035","record_id":"<urn:uuid:c1445043-fd42-45fe-b05a-254c97dfbe08>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
D3hockey Computer Rankings Discussion
How hard would it be to tweak the computer rankings and come up with league rankings, i.e. how does the SUNYAC as a whole compare to the ECAC-W or the ECAC-E as a whole? I'm not sure of the metrics of how you'd apply point values, but I think it would be an interesting and novel approach that I don't think anyone has done yet. | {"url":"http://www.d3boards.com/index.php?topic=7628.msg1468990","timestamp":"2014-04-17T09:35:43Z","content_type":null,"content_length":"47018","record_id":"<urn:uuid:cf999ac6-80f2-4072-914a-24f234b7af65>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
The Distance Matrices of Some Graphs Related to Wheel Graphs
Journal of Applied Mathematics
Volume 2013 (2013), Article ID 707954, 5 pages
Research Article
The Distance Matrices of Some Graphs Related to Wheel Graphs
School of Mathematics and Information Science, Yantai University, Yantai, Shandong 264005, China
Received 25 November 2012; Revised 29 May 2013; Accepted 30 May 2013
Academic Editor: Maurizio Porfiri
Copyright © 2013 Xiaoling Zhang and Chengyuan Song. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Let $D$ denote the distance matrix of a connected graph $G$. The inertia of $D$ is the triple of integers $(n_+(D), n_0(D), n_-(D))$, where $n_+(D)$, $n_0(D)$, and $n_-(D)$ denote the number of positive, 0, and negative eigenvalues of $D$, respectively. In this paper, we mainly study the inertia of distance matrices of some graphs related to wheel graphs and give a construction for graphs whose distance matrices have exactly one positive eigenvalue.
1. Introduction
A simple graph $G = (V, E)$ consists of $V$, a nonempty set of vertices, and $E$, a set of unordered pairs of distinct elements of $V$ called edges. All graphs considered here are simple and connected. Let $G$ be a simple connected graph with vertex set $V(G)$ and edge set $E(G)$. The distance between two vertices $u, v \in V(G)$ is denoted by $d_{uv}$ and is defined as the length of the shortest path between $u$ and $v$ in $G$. The distance matrix of $G$ is denoted by $D(G)$ and is defined by $D(G) = (d_{uv})$. Since $D(G)$ is a symmetric matrix, its inertia is the triple of integers $(n_+(D), n_0(D), n_-(D))$, where $n_+(D)$, $n_0(D)$, and $n_-(D)$ denote the number of positive, 0, and negative eigenvalues of $D(G)$, respectively.
The distance matrix of a graph has numerous applications to chemistry [1]. It contains information on various walks and self-avoiding walks of chemical graphs. Moreover, the distance matrix is not
only immensely useful in the computation of topological indices such as the Wiener index [1] but also useful in the computation of thermodynamic properties such as pressure and temperature virial
coefficients [2]. The distance matrix of a graph contains more structural information compared to a simple adjacency matrix. Consequently, it seems to be a more powerful structure discriminator than
the adjacency matrix. In some cases, it can differentiate isospectral graphs although there are nonisomorphic trees with the same distance polynomials [3]. In addition to such applications in
chemical sciences, distance matrices find applications in music theory, ornithology [4], molecular biology [5], psychology [4], archeology [6], sociology [7], and so forth. For more information, we
can see [1] which is an excellent recent review on the topic and various uses of distance matrices.
Since the distance matrix of a general graph is a complicated matrix, it is very difficult to compute its eigenvalues. People therefore focus on studying the inertia of the distance matrices of some graphs. Unfortunately, up to now, only a few graphs are known to have exactly one positive $D$-eigenvalue, such as trees [8], connected unicyclic graphs [9], the polyacenes, honeycomb and square lattices [10], complete bipartite graphs [11], iterated line graphs of some regular graphs [12], and cacti [13]. This inspires us to find more graphs whose distance matrices have exactly one positive eigenvalue.
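As a small illustration of this property (our example, not from the paper): the path $P_3$ has distance matrix $D(P_3) = \begin{pmatrix} 0 & 1 & 2 \\ 1 & 0 & 1 \\ 2 & 1 & 0 \end{pmatrix}$, whose characteristic polynomial $\lambda^3 - 6\lambda - 4 = (\lambda + 2)(\lambda^2 - 2\lambda - 2)$ has roots $1+\sqrt{3}$, $1-\sqrt{3}$, and $-2$; the inertia is therefore $(1, 0, 2)$, as the result for trees [8] predicts.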
The wheel graph $W_n$ on $n$ vertices is a graph that contains a cycle of length $n-1$ plus a vertex $v$ (sometimes called the hub) not in the cycle such that $v$ is connected to every other vertex. In this paper, we first study the inertia of the distance matrices of wheel graphs when one or more edges are removed from the graph, and then, with the help of the structural characteristics of wheel graphs, we give a construction for graphs whose distance matrices have exactly one positive eigenvalue.
2. Preliminaries
We first give some lemmas that will be used in the main results.
Lemma 1 (see [14]). Let $A$ be a Hermitian matrix of order $n$ with eigenvalues $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n$ and $B$ one of its principal submatrices of order $m$ with eigenvalues $\mu_1 \ge \mu_2 \ge \dots \ge \mu_m$. Then the inequalities $\lambda_{n-m+i} \le \mu_i \le \lambda_i$ ($i = 1, \dots, m$) hold.
For a square matrix $A$, let $\operatorname{cof} A$ denote the sum of cofactors of $A$. Form the matrix $B$ by subtracting the first row of $A$ from all other rows and then the first column from all other columns, and let $A'$ denote the principal submatrix obtained from $B$ by deleting the first row and first column.
Lemma 2 (see [15]). $\operatorname{cof} A = \det A'$.
A cut vertex is a vertex the removal of which would disconnect the remaining graph; a block of a graph is defined to be a maximal subgraph having no cut vertices.
Lemma 3 (see [15]). If $G$ is a strongly connected directed graph with blocks $G_1, G_2, \dots, G_r$, then $\operatorname{cof} D(G) = \prod_{i=1}^{r} \operatorname{cof} D(G_i)$ and $\det D(G) = \sum_{i=1}^{r} \det D(G_i) \prod_{j \ne i} \operatorname{cof} D(G_j)$.
Lemma 4. Let Then
Proof. Let Comparing to , we get the following: Expanding the determinant according to the last column and then the last line, we get the following recursion: that is, Since , , and , from the above recursion, we get the following: So, we have the following: This completes the proof.
3. Main Results
In the following, we always assume that , where is the hub of .
Theorem 5. Let . Then where .
Proof. Without loss of generality, we may assume that . Let
Then Expanding the determinant according to the second line, we get the following recursion: where and are defined as in Lemma 4.
By Lemma 4, we get the following: Since , , , and , according to the above recursion, we get the following: where . This completes the proof.
Corollary 6. Let . Then
Proof. We will prove the result by induction on .
If , is obviously true.
Suppose that the result is true for ; that is, , , .
Since is a principal submatrix of , by Lemma 1, the eigenvalues of interlace the eigenvalues of . By Theorem 5, . So, has one negative eigenvalue more than . According to the induction hypothesis, we
get , , and . This completes the proof.
Theorem 7. One has where .
Proof. Consider the following Expanding the above determinant according to the second line, we get the following: where is defined as in Theorem 5.
By Theorem 5, when , we get the following:
This completes the proof.
Similar to Corollary 6, we can get the following corollary.
Corollary 8. (i) If is even, , , .
(ii) If is odd, , , .
Denote by the graph obtained from by deleting the vertex and all the edges adjacent to ; that is, . Let be any subset of with . In the following, we always denote by the graph obtained from by
deleting all the edges in .
Theorem 9. One has , , .
Proof. Denote the components of by . Let denote the graph that contains plus the vertex such that is connected to every other vertex, . Then each is isomorphic to or , where is an edge of . By Lemma 2 and some direct calculations, we get the following: It is easy to check that is also true when is isomorphic to .
In the following, we will prove the theorem by induction on .
For , , where is an edge of , by Corollary 6, we get the result.
Suppose the result is true for .
For , let . Then by the induction hypothesis, , , and , which implies that where is a positive integer.
Since by Lemma 3,
Then where , if is even and , if is odd.
In this case, similar to Corollary 6, we can easily get , , and .
Up to now, we have proved the result.
Let denote the graph formed by only identifying the vertex of with the vertex of , where and are arbitrary vertices of and , respectively.
Lemma 10 (see [13]). Let denote the Cartesian product of connected graphs and , where and . Then we have(i); (ii); (iii).
Theorem 11. Let and be the hubs of and , respectively. Suppose and are any subsets of and with , , respectively. Then, the distance matrix of the graph has exactly one positive eigenvalue.
Proof. Since and are the hubs of and , respectively, must be isomorphic to some , where is the hub of and is any subset of with . By Theorem 9 and Lemma 10, we get the result.
Given an arbitrary integer , for , let be the hub of and any subset of . Suppose .
Theorem 12. For an arbitrary integer , the distance matrix of the graph has exactly one positive eigenvalue.
Proof. We will prove the conclusion by induction on .
If , by Theorem 9, the conclusion is true.
Suppose the conclusion is true for . For convenience, let . Then . By Lemma 10, we have the following: Since , we get . By the induction hypothesis, we get . This completes the proof.
Remark 13. Let and be any two graphs with the same form as in Theorem 12. Taking the Cartesian product of graphs and , by Lemma 10 and Theorem 12, we get a series of graphs whose distance matrices have exactly one positive eigenvalue.
The authors would like to thank the anonymous referees for their valuable comments and suggestions. This work was supported by NSFC (11126256) and NSF of Shandong Province of China (ZR2012AQ022).
1. D. H. Rouvray, “The role of the topological distance matrix in chemistry,” in Mathematical and Computational Concepts in Chemistry, N. Trinajstić, Ed., pp. 295–306, Ellis Horwood, Chichester, UK, 1986.
2. W. Brostow, D. M. McEachern, and S. Pérez-Gutiérrez, “Pressure second virial coefficients of hydrocarbons, fluorocarbons, and their mixtures: interactions of walks,” Journal of Chemical Physics, vol. 71, pp. 2716–2722, 1979.
3. B. D. McKay, “On the spectral characterisation of trees,” Ars Combinatoria, vol. 3, pp. 219–232, 1977.
4. D. W. Bradley and R. A. Bradley, “String edits and macromolecules,” in Time Warps, D. Sankoff and J. B. Kruskal, Eds., Chapter 6, Addison-Wesley, Reading, Mass, USA, 1983.
5. J. P. Boyd and K. N. Wexler, “Trees with structure,” Journal of Mathematical Psychology, vol. 10, pp. 115–147, 1973.
6. R. L. Graham and L. Lovász, “Distance matrix polynomials of trees,” in Theory and Applications of Graphs, vol. 642 of Lecture Notes in Mathematics, pp. 186–190, 1978.
7. M. S. Waterman, T. F. Smith, and H. I. Katcher, “Algorithms for restriction map comparisons,” Nucleic Acids Research, vol. 12, pp. 237–242, 1984.
8. R. L. Graham and H. O. Pollak, “On the addressing problem for loop switching,” The Bell System Technical Journal, vol. 50, pp. 2495–2519, 1971.
9. R. Bapat, S. J. Kirkland, and M. Neumann, “On distance matrices and Laplacians,” Linear Algebra and Its Applications, vol. 401, pp. 193–209, 2005.
10. K. Balasubramanian, “Computer generation of distance polynomials of graphs,” Journal of Computational Chemistry, vol. 11, no. 7, pp. 829–836, 1990.
11. D. M. Cvetković, M. Doob, I. Gutman, and A. Torgašev, Recent Results in the Theory of Graph Spectra, vol. 36, North-Holland Publishing, Amsterdam, The Netherlands, 1988.
12. H. S. Ramane, D. S. Revankar, I. Gutman, and H. B. Walikar, “Distance spectra and distance energies of iterated line graphs of regular graphs,” Institut Mathématique, vol. 85, pp. 39–46, 2009.
13. X. Zhang and C. Godsil, “The inertia of distance matrices of some graphs,” Discrete Mathematics, vol. 313, no. 16, pp. 1655–1664, 2013.
14. D. M. Cvetković, M. Doob, and H. Sachs, Spectra of Graphs: Theory and Application, vol. 87, Academic Press, New York, NY, USA, 1980.
15. R. L. Graham, A. J. Hoffman, and H. Hosoya, “On the distance matrix of a directed graph,” Journal of Graph Theory, vol. 1, no. 1, pp. 85–88, 1977.
| {"url":"http://www.hindawi.com/journals/jam/2013/707954/","timestamp":"2014-04-17T13:26:17Z","content_type":null,"content_length":"442639","record_id":"<urn:uuid:528cda14-a242-4f24-8bcc-e6de228c2a24>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
Yarrow Point, WA Math Tutor
Find a Yarrow Point, WA Math Tutor
...Since androids work from a windows-based model now, it's a matter of learning how to do a particular operation on the Mac given the available software. I have been using Access since 2001 and
took a class in three levels of Access the same year. Since that time, I have revised and created three databases in Access.
39 Subjects: including algebra 1, algebra 2, grammar, linear algebra
...In my years of teaching, I have been praised as being a highly dedicated and diverse teacher that incorporates differentiated learning in my lessons on a daily basis. In this current age, I
strive to incorporate not only tangible manipulatives to help students better understand the concepts, but...
11 Subjects: including geometry, writing, trigonometry, algebra 1
...I will help you with the technology aspect without charging for the time and I offer free e-mail support. Contact me today!Prealgebra is the foundation for all upper level math and as such it
is incredibly important for a student to fully understand prealgebra concepts. I enjoy helping students...
36 Subjects: including SAT math, ACT Math, geometry, prealgebra
As an engineering program manager for twelve years, and with an MIT degree to boot, I know my fair share of math and science, having taken Calculus, Physics, Chemistry, and Biology AP II. It all
started when my algebra teacher sent me to math camp and entered me into a geometry contest the next year, which I got a perfect score on all rounds. Proofs are my specialty.
43 Subjects: including geometry, ESL/ESOL, SPSS, linear algebra
...At the end of it there was a multiplication problem. I said 'take the first number. Draw that many circles.
17 Subjects: including calculus, elementary (k-6th), special needs, college counseling | {"url":"http://www.purplemath.com/yarrow_point_wa_math_tutors.php","timestamp":"2014-04-21T10:37:19Z","content_type":null,"content_length":"23979","record_id":"<urn:uuid:7bca66d2-e0c4-462e-aae6-3accc0fc4df4>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Find the Geometric Mean
Brush up on your math with these tutorials.
You Will Need
• A set of numbers
• A calculator with advanced math functions
1. Step 1
Take the square root for two numbers
Take the square root of the product if you are only dealing with two numbers. This is the geometric mean when there are only two numbers.
2. The geometric mean of 2 and 72 is the square root of their product, or 12.
3. Step 2
Determine logarithms for more than two numbers
If you are dealing with more than two numbers, determine the logarithm of each number that will be multiplied. Use the logarithmic function key on your calculator to calculate these values.
4. Step 3
Add the values
Add each of these logarithmic values together.
5. Step 4
Divide by the number of values
Divide the sum of the logarithmic values by the total number of values.
6. Step 5
Determine the antilog value
Determine the antilog value of the average using the antilogarithm function key on your calculator. This is the geometric mean for the general case.
7. Health inspectors often report bacteria concentrations at public beaches as geometric means so that very high or very low numbers don’t skew the average.
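A small illustration of steps 2 through 6, as a sketch (ours, not part of the original guide), written in C++ but identical in spirit on any calculator with log and antilog keys:

#include <cmath>
#include <iostream>
#include <vector>

// Geometric mean via the log / average / antilog route described above.
double geometricMean(const std::vector<double>& values) {
    double logSum = 0.0;
    for (double v : values) logSum += std::log10(v);  // steps 2-3: log of each value, summed
    double average = logSum / values.size();          // step 4: divide by the number of values
    return std::pow(10.0, average);                   // step 5: antilog of the average
}

int main() {
    std::vector<double> nums = {2.0, 72.0};
    std::cout << geometricMean(nums) << std::endl;    // prints 12, matching the two-number example
    return 0;
}

For two numbers this agrees with the square-root-of-the-product shortcut in step 1. | {"url":"http://www.howcast.com/videos/316206-How-to-Find-the-Geometric-Mean","timestamp":"2014-04-21T12:12:20Z","content_type":null,"content_length":"36035","record_id":"<urn:uuid:35a7e7e5-3b50-4cf6-82e9-9ccd9827ab06>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00471-ip-10-147-4-33.ec2.internal.warc.gz"}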
Tarzana Math Tutor
...At UC Davis I took Theatre courses for a full year. These classes included Theatre History, Staging, Lighting and Acting. I volunteered to do costumes for Shakespeare's Macbeth.
44 Subjects: including algebra 1, chemistry, public speaking, grammar
...As a senior developer, I mentor new comers at my work. Therefore, several years' of hands on real life experience put me in a unique position to educate the interested learners and to help them
to excel in their career in this particular field. My approach to teach this subject is simple - firs...
13 Subjects: including algebra 1, algebra 2, Microsoft Excel, geometry
...My passion for learning and teaching so many subjects comes from my love of nanotechnology which is multi-disciplinary by nature. I have taken the SAT, SAT II, GRE, and several AP exams and am
very familiar with standardized testing. My approach to tests is to alleviate test anxiety and give ti...
42 Subjects: including differential equations, SAT reading, grammar, Microsoft Excel
...As a teaching assistant at UCLA, I taught students at the classroom and individual levels and received excellent student reviews. I also have ample tutoring experience, having worked with
students ranging from the son of a high-level diplomat to the United States to troubled inner-city youth thr...
23 Subjects: including algebra 1, writing, SAT math, reading
Greetings -I would first like to thank you for stopping by my humble page. My approach to tutoring is very simple; teach in the manner the student learns. Unfortunately the school systems
(elementary - college) by and large do not embrace this approach; they teach in the way the teacher/school dist...
31 Subjects: including algebra 2, algebra 1, English, prealgebra | {"url":"http://www.purplemath.com/tarzana_ca_math_tutors.php","timestamp":"2014-04-17T19:51:29Z","content_type":null,"content_length":"23560","record_id":"<urn:uuid:bcb51a4c-7a49-4e48-af31-6c60dbaf3812>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cego is a special type of Tarok, played in south west Germany. It was developed in the early part of the nineteenth century and became the national card game of Baden and Hohenzollern, where it
remains extremely popular. These are the only parts of Germany where genuine Tarok cards (here known as Cego cards) are still in general use. (A game called Tarock is played in Württemberg and
Bavaria, but that game uses a normal 36 card German pack).
Cego is unusual among Tarok games in that an extra hand, the Cego, sometimes known as the Tapp or Blinde, is dealt to the centre of the table. Many of the bids involve playing with this extra hand,
retaining only one or two of one's original cards and discarding the remainder. The discarded cards are sometimes called the Legage. The idea of this type of bid derives from a version of L'Hombre,
and survives in a few other games, such as Vira.
There are many local variations of Cego; the description on this page is based on games played in Bräunlingen in April 1997, in the Gasthaus zum Löwen and also with young members of the church (the
Bräunlingen Ministranten). My thanks to Stephan Ocker for introducing me to the players.
The version played at the Gasthaus zum Löwen is described first, then the version played by the Ministranten. Some other variations, including those mentioned in various published descriptions of
Cego, are given at the end.
There are four or three active players. If five people want to play, the dealer sits out of each hand but pays or receives the same as the defenders. The game is played anticlockwise.
The player to the dealer's right, who receives the first cards and speaks first in the bidding, is known as Vorhand.
A special 54 card Cego pack consisting of 22 Trocke, which are permanent trumps, and 8 cards in each of the four suits clubs (kreuz), spades (schippen, schip), hearts (herz) and diamonds (karo,
eckstein, eck). There are two different designs in use: in one various anmals are depicted on the Trocke; in the other, the Trocke show domestic scenes. If you have no Cego cards, you could use
instead an Austrian Tarock pack, or a French Tarot pack from which the 1-6 in each black suit and the 5-10 in each red suit have been removed.
The Trocke from 1 to 21 are identified by large arabic numbers in the top centre. They rank from 1 (lowest) up to 21 (second highest). The highest trump, which is effectively No. 22, is called der
Gstieß (or sometimes der Geiger). It has no number and shows a musician. The lowest trump, Trock 1, is called der kleine Mann.
The cards in the black suits rank (from high to low) king (König), queen (Dame), rider (Reiter), jack (Bube), 10, 9, 8, 7. The cards in the red suits rank king, queen, rider, jack, 1, 2, 3, 4. The
picture cards have no corner indices for identification, but the kings wear crowns, the queens are female, the riders have horses and the jacks are the other ones.
Players in North America can obtain Cego cards from TaroBear's Lair.
Values of the cards
The object of the game is (usually) to win tricks containing valuable cards. The cards values are:
Gstieß, Trock 21, Trock 1 and kings 5 points each
queens 4 points each
riders 3 points each
jacks 2 points each
all other cards 1 point each
The cards are counted in groups of three, and two points are subtracted from the value of each group of 3 cards. If one or two cards are left over at the end of counting a pile of cards, one point is
subtracted from this group. The total value of the cards in the pack is 70 points.
If this method of counting is unfamiliar, see the counting points in Tarot games page for further explanation and examples.
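As a quick consistency check of that total (our arithmetic, not part of the original rules): the raw values add up to 3x5 (Gstieß, Trock 21, Trock 1) + 4x5 (kings) + 4x4 (queens) + 4x3 (riders) + 4x2 (jacks) + 35x1 (the 19 remaining Trocke and 16 pip cards) = 106; the 54 cards form 18 groups of three, and 106 - 18x2 = 70.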
Four player game
The first dealer is chosen by cutting cards (highest deals); thereafter the turn to deal rotates anticlockwise. The dealer shuffles and the player to dealer's left cuts. The dealer places the top 10
cards of the pack face down in the centre of the table, and then deals a single batch of 11 cards to each player. The cards dealt to the centre of the table are known as das Cego, or sometimes der
The Games
The game to be played is decided by bidding. In most cases, the player who wins the bidding (i.e. makes the last bid) plays alone against the other three players in partnership (the defenders). The
only exception to this is the game Räuber, which is played without partnerships - everyone for themselves.
There are two types of game, which I shall call normal games and special games. In a normal game, the bidder's objective is to take as many card points as possible. When counting the points taken,
the cego cards (the 10 cards that are out of play) are added to the tricks won by the bidder, and the cards in the tricks won by the defenders are counted together. The bidder wins by taking more
card points than the defenders, that is 36 or more, since there are 70 card points in total.
Normal Games
The possible normal games are as follows:
Solo
Everyone plays with the cards they were originally dealt. No one may look at the cego cards until after the play.
Cego
The bidder selects two cards to keep (usually high Trocke), discards the other nine, and then picks up the ten cego cards, making twelve. Finally the bidder discards one more card face down from
these twelve and plays with the remaining 11 cards. This exchange of cards is performed by the bidder, without exposing any of the cards to the defenders.
Eine
The bidder can choose just one card to keep (generally a high trump). This card is combined with the cego to form a new 11 card hand with which the bidder plays. The remaining 10 cards are
discarded. Again, none of the cards are shown to the defenders.
Eine Leere ("one empty")
The bidder can keep one card, which must be a numeral card of a suit (i.e. an "empty" card). This card is placed face up on the table, the remaining 10 cards of the bidder's hand are discarded
face down, and the bidder picks up the 10 cego cards in their place, without showing them. The empty card which was kept and exposed must either be led to the first trick, or the bidder must lead
another card of the same suit as the exposed card.
It is possible for a player who has no empty cards to play Eine Leere. In this case the bidder can keep a picture card in a suit instead, for example a jack, and nominate this as an empty card.
Such a card counts as the lowest in its suit, and cannot win a trick. In fact the exposed "empty card" kept by the bidder in Eine Leere can never win a trick. If the bidder chooses to lead a
different card of the same suit as the exposed card, the card which was originally exposed automatically loses any trick to which it is played later.
Example: The bidder keeps the but having found the king of clubs in the cego, decides to lead that to the first trick instead. The king of clubs wins, as everyone has a club. Later in the hand,
the bidder has managed to draw all the defenders' trumps and none of them has any clubs left. If the bidder leads the now, it does not win the trick even though it is the only club. The second
player can play any card, this card determines the suit to be followed, and the highest card of that suit wins the trick.
Zwei Leere ("two empty")
The bidder keeps two numeral cards of the same suit, which are placed face up on the table, and discards the other nine cards face down. The bidder then picks up the cego, and from it must
discard the lowest Trock, showing it to the defenders before adding it to the other 9 discards. The bidder must either lead the two exposed cards to the first two tricks, or replace one or both
of them by cards of the same suit from hand and lead those. In any case, as in Eine Leere, whether they are played now or later, the original exposed "empty cards" can never win tricks.
A player who does not have two numeral cards of the same suit can designate any two cards of the same suit as "empty" and expose them, but as in Eine Leere, these become low cards and can never
win tricks.
Zwei Verschiedene ("two different")
The bidder keeps two numeral cards of different suits, which are placed face up on the table, and discards the other nine cards face down. The bidder then picks up the cego, and from it must
discard the highest Trock, showing it to the defenders before adding it to the other 9 discards. The bidder must lead the two exposed cards to the first two tricks - there is no option to lead
other cards of the same suits. As in Eine Leere and Zwei Leere, a player who does not have two numeral cards of different suits can use picture cards for one or both of them instead; these then
become the lowest cards of their suits.
Der kleine Mann
The bidder must hold the kleiner Mann (Trock 1), which is placed face up on the table and must be led to the first trick (which it will lose). The bidder's other 10 cards are discarded face down
and replaced by the 10 cards of the cego.
Special Games
There are four special games, in which the objective is different from that in the normal games. In all of the special games, the cego is set aside and the players play with the cards they were dealt.
Ulti
The bidder's sole object is to win the last trick with Trock 1 (der kleine Mann). The bidder wins if this succeeds and loses if it fails (which can happen in two ways: one of the defenders wins
the last trick with a higher Trock, or the Trock 1 is forced out before the last trick).
Piccolo
The bidder's aim is to win exactly one trick; the defenders win if the bidder takes no tricks or more than one.
Bettel
The bidder wins by taking no tricks at all; if the bidder ever takes a trick the defenders win.
Räuber
In this game everyone plays for themselves. The player who takes the most card points in tricks loses.
Bidding Procedure
The bidding is in two phases. The purpose of the first phase is to find out if anyone wants to play Solo or Ulti, and in the second phase the other games can be bid.
First phase of bidding
The bidding begins with Vorhand (the player to dealer's right) and continues anticlockwise until someone says Solo or Ulti; a player who does not want to play either of these games passes by saying
"Fort Solo" or just "Fort". A bid of Ulti ends the whole auction - no further bids are possible and the Ulti is played. A bid of Solo ends the first phase of bidding. If no one bids Solo or Ulti the
first phase ends when everyone has said "Fort".
Second phase after everyone said "Fort"
If everyone said "Fort", Vorhand must begin the second phase of bidding by saying "Cego" or "Piccolo" or "Bettel". Vorhand is not allowed to pass. If Vorhand says "Cego", any of the other players who
wish to play Piccolo or Bettel can say so now, or at any time up to an including their normal turn to bid. A bid of Piccolo or Bettel ends the bidding and is played. In the unlikely event that more
than one player wants to play a Piccolo or a Bettel, the player whose turn to bid is earlier (i.e. the nearest player in anticlockwise order from Vorhand) has priority.
When Vorhand has said "Cego", as long as no one interrupts with Piccolo or Bettel, players have the opportunity to bid the other normal games. The ranking of the remaining bids, in ascending order,
is Eine, Eine Leere, Zwei Leere, Zwei Verschiedene, der kleine Mann. If two players want to play the same game, the player whose turn to bid was earlier has priority. However, at any stage, each
player can only make the lowest possible bid (jump bids are not allowed), and each player only enters the bidding after the bidding between the previous players has been resolved.
So after Vorhand has said "Cego" the player to Vorhand's right has the choice of making the next higher bid, "Eine" or passing, by saying "gut" (good). If this player says gut, the next player in
turn has the same options, and so on, round to the dealer. If a player says "Eine", then it is immediately Vorhand's turn to decide whether to equal this bid, by saying "selbst" (myself) or to pass,
by saying "gut". If Vorhand says "selbst" then the player who bid Eine must either pass, saying "gut", or continue to the next bid "Eine Leere", in which case Vorhand again has the choice of saying
"selbst" or "gut". This competition between the two players continues until one of them says "gut". The survivor will play the last game mentioned unless someone bids higher. The bidding continues
with the player to the right of the one who said Eine; this player can either pass or bid the next higher game, in which case the survivor of the previous bidding can say "selbst" or "gut". The
bidding continues in this way until everyone has had a chance to speak and all players but one have said "gut". This one surviving bidder plays the last game mentioned.
In the special case when Vorhand bids Cego and everyone else says "gut", Vorhand can choose whether to play Cego or Räuber. This is the only case in which a game of Räuber can be played. The idea of
a Räuber is to punish a player who has failed to bid Solo, despite having a good hand; this player is likely to take most points and thus lose.
Examples (A is Vorhand; B, C and D are the other players in anticlockwise order; D is dealer).
A B C D Result
Fort Fort Fort Fort
Cego Gut Eine -
Selbst - Eine Leere -
Gut - - Zwei Leere
- - Selbst Gut C plays Zwei Leere
A B C D Result
Fort Fort Fort Fort
Cego Eine - -
Gut - Gut Gut B plays Eine
A B C D Result
Fort Fort Fort Fort
Cego Eine - Piccolo D plays Piccolo
A B C D Result
Fort Fort Fort Fort
Cego Gut Gut Gut A has the choice of playing Cego or Räuber
Second phase of bidding after a Solo bid
The bidding procedure after a Solo is similar to the procedure when there is no Solo, except that:
• the Solo bidder has highest priority, then Vorhand, and then the other players in anticlockwise order;
• the next bid above Solo is Gegensolo (which means the same as Cego), then Eine, Eine Leere, etc., as usual;
• no special games (Piccolo, Bettel) can be bid over a Solo.
So after a Solo bid, the second phase begins with Vorhand, or with the player to the right of Vorhand if it was Vorhand who bid Solo. This player can bid "Gegensolo" or pass by saying "gut".
Gegensolo (against the solo) is the same game as Cego, but is worth more (as it is more difficult to win, given that the Solo player has a strong hand). If the first player passes, the next player
can bid Gegensolo or pass and so on anticlockwise around the table, but skipping the player who bid Solo.
If all three opponents of the Solo bidder say "gut" the Solo is played. If someone bids Gegensolo the Solo bidder can either equal this bid, saying "selbst" or pass by saying "gut". If the Solo
bidder says "selbst", the Gegensolo bidder can raise the bid to Eine, and so on just as in the bidding when there is no Solo. When one of these players says "gut", it is the turn of the player to the
right of the Gegensolo bidder (or if that is the Solo bidder, the player to the Solo bidder's right) to make the next higher bid or pass, and so on.
Examples (A is Vorhand; B, C and D are the other players in anticlockwise order; D is dealer).
A B C D Result
Fort Fort Solo -
Gut Gut - Gut C plays Solo
A B C D Result
Fort Solo - -
Gut - Gegensolo -
- Gut - Gut C plays Cego
A B C D Result
Fort Fort Solo -
Gegensolo - Selbst -
Gut Gut - Eine (A rather unlikely bidding sequence)
- - Selbst Gut C plays Eine
The Play
The bidder leads to the first trick. The other players must follow suit if they can. A player who cannot follow suit must play a Trock if possible. If a Trock is led, the other players must follow
with Trocks if they can. A player who has no card of the suit led and no Trocks is free to play any card.
A trick is won by the highest Trock in it, or if no Trocks are played, by the highest card of the suit led. The winner of a trick leads to the next.
In games in which the bidder's original hand was discarded, the bidder is allowed to look at the discarded cards (the Legage) at any time until the end of the first trick, but not thereafter. In
Solo, Ulti, Piccolo, Bettel and Räuber, no one is allowed to look at the cego cards until the end of the play.
In the games Zwei Leere and Zwei Verschiedene, the bidder leads to the first two tricks. The two cards kept from the bidder's original hand (which in the case of Zwei Leere may be replaced by other
cards of the same suit found in the cego) are placed face up on the table to begin the tricks, and each defender in turn plays to both tricks. If the first two tricks are won by different players,
the player who won with the higher card leads to the third trick. In the case of Zwei Verschiedene it is possible that two different defenders could win the tricks with equally high cards (for
example two kings); in that case the player who played the king of the higher suit leads to the third trick, the suits ranking in the order: clubs (highest), spades, hearts, diamonds (lowest).
The Scoring
In a normal game, provided that the bidder wins at least one trick, the bidder counts the card points in won tricks plus the cego (the ten cards which are out of play), while the defenders count the
points in the tricks they have won. There are 70 card points altogether; to win, the bidder needs more than half of these points - that is at least 36.
If the bidder loses every trick, the cego counts for the opponents - so the payments are calculated on the basis that the bidder has taken 0 points and the opponents have 70. Thus it is possible for
the bidder to lose even after discarding a Legage of 36 points - the bidder also needs to win at least one trick to avoid defeat.
The amount the bidder wins or loses is the difference between 35 and the number of card points taken, multiplied by a factor which depends on the game which was played. The result is rounded up to
the next multiple of 5, and this is the amount (in Pfennig) which the bidder receives from or pays to each opponent.
The factor for a Solo is 2 if the bidder wins, but just 1 if the bidder loses. The factors for the other possible normal games depend on whether they were bid against a Solo, as follows:
│ Game │ Factor if Solo was not bid │ Factor if bid over a Solo │
│ Cego │ 1 │ 2 (Gegensolo) │
│ Eine │ 2 │ 3 │
│ Eine Leere │ 3 │ 4 │
│ Zwei Leere │ 4 │ 5 │
│ Zwei Verschiedene │ 5 │ 6 │
│ Der kleine Mann │ 6 │ 7 │
The case when the bidder and the defenders take 35 points each is called Bürgermeister, and the bidder pays 5 Pfennig to each defender.
• The bidder wins a Solo with 41 points (the defenders have 29). The difference from 35 is 6; multiplying by 2 (the factor for a won Solo) gives 12; this is rounded up to 15 and the bidder wins 15
Pfennig from each defender.
• The bidder loses a Solo taking only 29 points. The difference is 6 and the multiplication factor is 1 (for a lost Solo); 6 is rounded up to 10, and the bidder pays 10 Pfennig to each opponent.
• The bidder wins Zwei Leere with 39 points. The difference is 4 and the multiplying factor is 4; the product 16 is rounded up to 20 and the bidder wins 20 Pfennig from each opponent.
• The bidder loses a Gegensolo (a Cego bid over a Solo), taking only 27 points; the bidder pays 20 Pfennig (8*2=16 rounded up) to each defender.
• The bidder plays a Gegensolo, discarding 25 points, but takes no tricks in the play. The discarded cards count for the opponents and the bidder must therefore pay 70 Pfennig to each defender.
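Since the payment rule above is plain arithmetic, it can be cross-checked mechanically. The following Python sketch is mine (the function and table names are not from the source) and assumes the caller passes in 0 points for a bidder who took no tricks:

    import math

    # Factors for the normal games: (not over a Solo, bid over a Solo).
    FACTORS = {"Cego": (1, 2), "Eine": (2, 3), "Eine Leere": (3, 4),
               "Zwei Leere": (4, 5), "Zwei Verschiedene": (5, 6),
               "Der kleine Mann": (6, 7)}

    def payment(game, bidder_points, over_solo=False):
        # Pfennig received (+) from, or paid (-) to, each opponent.
        if bidder_points == 35:
            return -5                                  # Buergermeister
        if game == "Solo":
            factor = 2 if bidder_points >= 36 else 1   # won / lost Solo
        else:
            factor = FACTORS[game][1 if over_solo else 0]
        amount = 5 * math.ceil(abs(bidder_points - 35) * factor / 5)
        return amount if bidder_points >= 36 else -amount

    # The worked examples above:
    assert payment("Solo", 41) == 15
    assert payment("Solo", 29) == -10
    assert payment("Zwei Leere", 39) == 20
    assert payment("Cego", 27, over_solo=True) == -20  # lost Gegensolo
    assert payment("Cego", 0, over_solo=True) == -70   # bidder took no tricks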
The special games have fixed scores as follows:
│ Game │ Score │
│ Ulti │ 80 │
│ Piccolo │ 30 │
│ Bettel │ 30 │
│ Räuber │ 30 │
If the game is Ulti, Piccolo or Bettel, each defender pays the appropriate amount to the bidder if the bidder wins; otherwise the bidder pays each defender.
In a Räuber, all players count the points in their own tricks. The cego is not counted. The player who has most points loses. If Vorhand loses, the payment to the other players is doubled (60 instead
of 30). If there is a tie for most points, and Vorhand is involved in the tie, then Vorhand loses. If there is a tie in which Vorhand is not involved, then all the players who tie for most points
have to pay 30 to each other player.
Ending the Session
A player who wishes to end the session says "der Gstieß gibt ab". On the following deal it is noted who holds the Gstieß; play then continues until that player's next turn to deal, and that player
deals the last hand of the session.
Three Player Game
When there are only three players, three cards - one from each of the suits other than hearts - are removed from the pack, leaving 51 cards (there are 8 cards in the hearts suit but only 7 in the other suits). The deal is 12 cards to
the Cego and 13 to each player.
The games Piccolo, Bettel and Räuber are not allowed; otherwise the games and bidding procedure are the same as in the four player game.
In place of Räuber, there is a different method of penalising a player who fails to bid Solo with a strong hand. For this purpose a strong hand is defined as follows:
• any hand containing nine or more trumps
• any hand containing eight trumps of which at least two are higher than the 17, and the remaining cards belonging to at most two suits (so at least two suits are void)
A player whose hand satisfies either of the above criteria is said to "have a Solo". Passing (saying "fort Solo") in the first phase of the bidding when you have such a hand is called "skinning" a
Solo.
If everyone says "fort Solo", the eventual highest bidder can, after looking at the Cego cards, claim that someone has skinned a Solo. In this case all three players expose their cards, and if it
turns out that one of the players has indeed skinned a Solo, that player loses as though they had played in the game of the final bid and lost every trick. If it turns out, on the other hand, that no
one has skinned a Solo, the bidder who made the accusation loses as though having lost every trick. Note that if, as the bidder, you find that your own hand plus the Cego contains fewer than five
Trocke in total, you are safe in claiming that someone has skinned a Solo.
There are many local variations of Cego, and even within the relatively small town of Bräunlingen several different versions are played. Here are a few variations that I have so far collected from
players and from some of the published rule books. If any Cego players reading this would like to let me know about other versions and where they are played, I would be happy to add this information
to the page.
This is a local variation from the Gasthaus zum Löwen in Bräunlingen, where they play four-handed Cego every Wednesday evening. At 23:00 there is a round of compulsory Räuber - one deal by each
player. During this round no other bidding is allowed; a Räuber is played on every deal.
In Pflicht-Räuber the loser pays 30 to each other player. There is no special penalty if Vorhand loses, but if one player takes no tricks the loser must pay 60. If two players take no tricks the
loser must pay 120 to each other player. If three players take no tricks, the player who took all the tricks wins (rather than loses) 240 from each other player.
After the Pflicht-Räuber round, normal Cego is played again, but for 10 times the normal stake - that is 10 Pfennig per point rather than 1; so Ramsch, Piccolo and Bettel cost DM 3.00, Ulti costs DM
8.00, and so on.
Ministranten Version
These rules are based on games played in the Pfarrenhaus at Bräunlingen with some of the Ministranten: Christoph, Stephan, Georg and Richard. I shall just give the differences from the Gasthaus zum
Löwen rules set out above. The usual game is for four players.
• The cards in the centre are always called the Cego (not Blinde or Legage). The word Legage is used for a discard that contains a large number of points.
• The numeral cards in the suits (10, 9, 8, 7, A, 2, 3, 4) are called Brettli.
• Piccolo is alternatively called Bikel
• Räuber is also known as Luftkampf
The possible Games
When everyone passes Vorhand's Cego bid, there are some additional alternatives:
Dresch
A special game which can be chosen by Vorhand as an alternative to Räuber. The Cego is not used, and the sole objective is to avoid winning the last trick; the winner of the last trick loses the
game. The players at the Gasthaus zum Löwen also knew of this game but disapproved of it, probably because Vorhand can use it to punish one of the other players heavily, almost at random.
Geregelter Räuber
This is like a normal Räuber except that trumps must be played to the first three tricks. The holder of the Gstieß must play it on the first trick, the holder of the 21 must play it on the second
trick, and the kleiner Mann must be played on the third trick. A player with fewer than three trumps must play Brettli to the tricks to which trumps cannot be played (for example if you had only
two Trocke, 16 and 1, you would play the 16 to the first trick, a Brettli to the second trick, and the 1 to the third). The winner of the third trick leads to the fourth and play continues normally.
Wilder Räuber
This is just a normal Räuber, in which there is no special restriction on what can be played.
Bidding procedure
As usual, the first phase is begun by Vorhand and the possible bids are Ulti and Solo; a player who does not want to bid either says Fort. A bid of Ulti ends the auction; a bid of Solo immediately
starts the second phase.
If everyone said Fort in the first phase, Vorhand must begin the second phase by bidding Cego, and the bidding continues anticlockwise. The possible bids for the next player are Eine, Piccolo and
Bettel; alternatively the second player can pass by saying "gut", and the next player has the same possibilities. A bid of Piccolo or Bettel ends the auction - otherwise the bidding continues
anticlockwise. A bid of Eine can be overcalled by Eine Leere, which can be overcalled by Zwei Leere and so on through the normal games (no jump bids are allowed). Piccolo and Bettel cannot be bid
over Eine or higher normal bids. When the bidding comes back to Vorhand, one of the other players having bid a higher normal game, Vorhand can bid the same game by saying "selbst". A player who has
passed cannot bid in a later round.
If after Vorhand has bid Cego everyone says "gut", Vorhand has a choice between playing Cego, Piccolo, Bettel, Geregelter Räuber, Wilder Räuber or Dresch. Example:
A B C D Result
Fort Fort Fort Fort
Cego Gut Eine Eine Leere
selbst - Zwei Leere Gut
Gut - - - C plays Zwei Leere
If the first phase is ended by someone saying Solo, the second phase is begun by the player on the right of the one who said Solo and continues anticlockwise. The Solo player cannot bid again. If all the
other players say "gut", the Solo is played. The only possible bid over a Solo is Gegensolo (which is a Cego against the Solo). If someone bids Gegensolo, this ends the auction and the player who bid
Gegensolo plays a Cego.
The Play
Generally this is the same as in the Gasthaus zum Löwen version. In a game in which the Cego has been used, the bidder can look at the Cego until the end of the third trick.
There are differences in the games Eine Leere, Zwei Leere and Zwei Verschiedene:
1. For these games, the one or two cards you lay out must really be empty cards (Brettli). If you do not have the appropriate cards in your hand you cannot make the bid.
2. The empty cards must be led to the first trick(s) - there is no opportunity to substitute other cards of the same suit.
3. After these trick(s) have been taken by the opponents, it is the bidder who leads to the next trick. Play then continues normally.
The Scoring
The scores are written down rather than paid out in money after each hand. Nevertheless, the normal stake is 1 Pfennig per point. So if a player wins 30 in a four player game, 90 is added to that
player's cumulative score and each of the other players loses 30 from their cumulative score.
In a normal game, the cards are counted in threes as usual, and the score is based on the difference of the card points taken from 35. This is multiplied by the factor for the game being played, and
then rounded to the nearest 5 points, with a minimum score of 5 points won or lost.
The factors are the same as the Gasthaus zum Löwen version, except that a Gegensolo has a factor of 4 if won, 2 if lost.
In case of a Bürgermeister (a hand in which the points divide 35-35), the bidder loses the minimum of 5 points, but in addition must buy a round of Schnaps for the players.
The scores for the special games are somewhat different:
│ Game │ Score │
│ Ulti │ 70 │
│ Piccolo │ 40 │
│ Bettel │ 50 │
│ Räuber │ 30 │
│ Dresch │ 70 │
If the loser of a Räuber took more than 30 card points, the payment is the number of card points taken, rounded to the nearest five. If Vorhand loses a Räuber, the loss is 60 points, or twice the
number of card points taken rounded to the nearest five if this is greater.
In some circles, Piccolo is played as worth 35, rather than 40.
Three Player Version
This is the equivalent game, without Räuber or Dresch, but with the possibility of challenging a player who you think has skinned a Solo. The three cards removed were one from each of the suits other than clubs, leaving clubs rather
than hearts as the long suit.
The standard penalty for breaking the rules is that the offender pays 70 points to each other player. These penalties were enthusiastically enforced by the players, especially in the following cases:
• Misdeal. The dealer is allowed to stop the other players picking up their hands before the deal is complete; once the dealer is satisfied that all is in order and allows the cards to be picked
up, if any hand or the cego has the wrong number of cards, the dealer is penalised
• Incorrect discard. This can easily happen in a Cego game - having retained two cards and picked up the 10 cards from the middle, you forget to discard a further card but lead to the first trick
instead. The deal is abandoned and you pay everyone 70.
• Revoke. Failing to follow suit, or failing to play a Trock when you have no card of the suit led. Again the deal is abandoned and you pay the penalty.
Bräunlingen Tournament Version
Stephan Ocker gave me a rule sheet for a Cego tournament which had recently been held in Bräunlingen. Although incomplete, this sheet indicates yet another version of the rules. The main
distinguishing features are as follows:
• Two sessions of 24 hands are to be played
• You cannot bid against your own Solo - that is, once you have bid Solo and someone has bid Gegensolo you are out of the bidding
• The values of the special games are:
□ Ultimo: 80
□ Piccolo: 40
□ Bettel: 40
□ Räuber: 30
• When scoring normal games, the difference from 35 is rounded up to the next multiple of 5 before it is multiplied by the factor for the game being played
Oberwolfach Version
This was reported by Michael Dummett in his book "The Game of Tarot" (Duckworth 1980), on the basis of games he played there in 1974. The principal game there is the three player version. The main
differences from the three player game at Bräunlingen will be listed.
1. There is no Ulti game.
2. In Eine Leere, Zwei Leere and Zwei Verschiedene, the empty cards must really be empty. If you do not have the appropriate cards you cannot bid these games. There is no obligation to lead the empty
card(s) at the beginning - the bidder can lead any card.
3. In the highest normal game - here called die Pfeif', Bapperle or Pagat rather than der kleine Mann - the bidder has the option, instead of leading the Trock 1 to the first trick, to say "ich
spiele die Pfeif' frei", take the card back into hand, and attempt to win the last trick with it. If the bidder does this but fails to win the last trick with the 1 the game is lost. It is
unclear how such a loss is scored; probably it is as though the bidder had lost every trick.
4. When all three players say "Fort Solo" in the first phase of bidding, not only must Vorhand open the second phase with "Cego", but the next player must overcall with "Eine". Vorhand is then free
to hold by saying "selbst" or pass by saying "gut", and the rest of the bidding is as usual.
5. If all three players say "Fort Solo", a player whose hand contains seven or more empty cards (numeral cards in the suits) can throw the cards in, and there is a new deal by the next dealer.
6. A bid of Solo ends the first phase of bidding and the player who bid Solo cannot bid again. The second phase begins not with Vorhand, but with the player to the right of the one who bid Solo.
This player can hold the third player's bids. For example the bidding might go:
A B C Result
Fort Solo Solo Gegensolo
Eine - selbst
Eine Leere - selbst
gut - - C plays Eine Leere
7. The factor for Solo is 1 if it is won, but 2 if it is lost. This is the opposite way round from the Bräunlingen scores and makes Solo much less attractive.
8. To score a normal game, the difference of the points from 35 is divided by 5, ignoring any remainder, and then 1 is added, and the result is multiplied by the factor for the game. The following
table is given for calculating the base value of the game:
│ │ Points won by │ Points won by │ Base value │
│ │ bidder │ opponents │ │
│ │ 70 │ 0 │ 8 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 65 - 69 │ 1 - 5 │ 7 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 60 - 64 │ 6 - 10 │ 6 │
│ ├────────────────┼────────────────┼────────────┤
│ bidder │ 55 - 59 │ 11 - 15 │ 5 │
│ wins ├────────────────┼────────────────┼────────────┤
│ │ 50 - 54 │ 16 - 20 │ 4 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 45 - 49 │ 21 - 25 │ 3 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 40 - 44 │ 26 - 30 │ 2 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 36 - 39 │ 31 - 34 │ 1 │
│ │ 31 - 35 │ 35 - 39 │ 1 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 26 - 30 │ 40 - 44 │ 2 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 21 - 25 │ 45 - 49 │ 3 │
│ ├────────────────┼────────────────┼────────────┤
│ bidder │ 16 - 20 │ 50 - 54 │ 4 │
│ loses ├────────────────┼────────────────┼────────────┤
│ │ 11 - 15 │ 55 - 59 │ 5 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 6 - 10 │ 60 - 64 │ 6 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 1 - 5 │ 65 - 69 │ 7 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 0 │ 70 │ 8 │
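The verbal rule and this table can be cross-checked with a one-line function; this is only a sanity check of the stated rule (the code is mine, not from the source):

    def base_value(bidder_points):
        # "the difference of the points from 35 is divided by 5,
        #  ignoring any remainder, and then 1 is added"
        return abs(bidder_points - 35) // 5 + 1

    assert base_value(70) == 8 and base_value(36) == 1
    assert base_value(35) == 1 and base_value(0) == 8
    assert base_value(13) == 5   # the 11-15 / 55-59 row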
The four player game at Oberwolfach follows similar principles. There is still no Ulti game, but Bettel, Piccolo and Räuber are possible. There is no possibility for a player with seven empty cards
to throw in the hand.
In the second bidding phase after all players have said "Fort Solo", Bettel and Piccolo can be bid, as well as the normal games. Bettel can overcall Piccolo, and both outrank the normal games. If
Vorhand bids Cego and the other three pass, Vorhand has the option of playing Cego or Räuber. The scores for the special games are 5 for Räuber, 10 for Piccolo and 15 for Bettel. These are in
proportion to the lower scores for the normal games, which are generally about one fifth of the Bräunlingen scores.
In the four player game, a player who has 8 or more Trocke (here called Trucks), or 7 Trocke of which at least two are higher than the 17 together with at least two void suits, is said to have a Solo. If everyone says "Fort
Solo" in the first phase of bidding, the eventual bidder of a normal game, having looked at the cego cards (here called the Blinde) can claim that someone has skinned a Solo, with the same effects as
in the three player game.
Ichenheim Version
Peter Müller reports that there is an annual Cego tournament on 6th January ("Heilige Drei Könige" - Epiphany) in the Gasthaus "Hechten" in Ichenheim with around 60 to 100 players. The three-player
game is played, in two sessions of 24 deals. If the number of entrants is not divisible by three, there are some 4-player tables, but at these the 3-player game is still played with the dealer
sitting out and winning or losing the same as the defenders.
In Eine, Eine Leere, Zwei Leere and Zwei Verschiedene, the cards laid out have to be Brettli (pip cards) - it is not possible to substitute a picture card.
Two special games are allowed - these must be bid in the first phase:
• Ulti has a value of just 20 game points.
• Solodu (sometimes also called "Drescher") is a Solo in which the bidder is committed to win all the tricks, and is worth 64 game points. (The name obviously comes from the French "solo tout",
"tout" meaning "all" - in the Bavarian game Schafkopf there is a similar bid with the same meaning.)
There is a tradition that if the bidder loses every trick ("er geht durch"), his opponents sing a short version of the German folk song "Im Wald, da sind die Räuber".
When the bidder is "Bürgermeister" (loses by taking 35 card points), he has to buy a round of Schnaps for the table.
Schmidt Version
The Cego cards made by F.X.Schmidt come with a leaflet giving rules of the game. These include several small variations and mostly agree with the Oberwolfach version. Some differences are:
1. There is no possibility to throw in the hand if you have seven empty cards.
2. In Eine Leere, Zwei Leere and Zwei Verschiedene, the possibility of using other cards instead of empty cards and the obligation to lead these cards at the beginning are mentioned as variations.
3. In die Pfeif', normally the Pfeif' must be led to the first trick. "Ich spiele die Pfeif' frei" is mentioned as a variation, but in this case the bidder is only committed to win a trick with the
Pfeif', not necessarily the last trick.
4. It is mentioned as a variation that you can bid over your own Solo.
5. The leaflet is ambiguous as to whether the factor for Solo is 2 if won and 1 if lost or vice versa.
Cards counted in twos
Some published descriptions of Cego, notably those by Claus D Grupp, say that the cards are counted in twos rather than in threes. The players at Bräunlingen confirmed that this method is used in
villages a few kilometers to the west of there.
In this method of counting, the cards are grouped into twos, the values of each pair of cards are added and one point subtracted from the sum. (See the counting points in Tarot games page for a
general discussion of counting). This gives a higher total of 79 points in the pack, and because the total is odd, no Bürgermeister (tie) is possible.
The side which has 40 or more points wins. The base value of the game is given by the table below. This is multiplied by the factor for the game being played to obtain the score.
│ │ Points won by │ Points won by │ Base value │
│ │ bidder │ opponents │ │
│ │ 75 - 79 │ 0 - 4 │ 8 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 70 - 74 │ 5 - 9 │ 7 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 65 - 69 │ 10 - 14 │ 6 │
│ ├────────────────┼────────────────┼────────────┤
│ bidder │ 60 - 64 │ 15 - 19 │ 5 │
│ wins ├────────────────┼────────────────┼────────────┤
│ │ 55 - 59 │ 20 - 24 │ 4 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 50 - 54 │ 25 - 29 │ 3 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 45 - 49 │ 30 - 34 │ 2 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 40 - 44 │ 35 - 39 │ 1 │
│ │ 35 - 39 │ 40 - 44 │ 1 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 30 - 34 │ 45 - 49 │ 2 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 25 - 29 │ 50 - 54 │ 3 │
│ ├────────────────┼────────────────┼────────────┤
│ bidder │ 20 - 24 │ 55 - 59 │ 4 │
│ loses ├────────────────┼────────────────┼────────────┤
│ │ 15 - 19 │ 60 - 64 │ 5 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 10 - 14 │ 65 - 69 │ 6 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 5 - 9 │ 70 - 74 │ 7 │
│ ├────────────────┼────────────────┼────────────┤
│ │ 0 - 4 │ 75 - 79 │ 8 │
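No closed formula is stated for this second table, but one can be read off from it; the function below is my own inference (the win threshold moves from 36 to 40, and the odd 79-point total makes the boundary asymmetric):

    def base_value_twos(bidder_points):
        # Inferred from the table above; not stated as a formula in the source.
        if bidder_points >= 40:                  # bidder wins
            return (bidder_points - 40) // 5 + 1
        return (39 - bidder_points) // 5 + 1     # bidder loses

    assert base_value_twos(79) == 8 and base_value_twos(40) == 1
    assert base_value_twos(39) == 1 and base_value_twos(0) == 8
    assert base_value_twos(22) == 4              # the 20-24 / 55-59 row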
Other Cego Sites
Jürgen Weißauer's eBook, which includes rules (in German) for a version of Cego, is available from his Spiele Okular website.
Summary: Convergence to the optimal value for barrier methods
combined with Hessian Riemannian gradient flows and
generalized proximal algorithms
Felipe Alvarez & Julio López
We consider the problem $\min_{x \in \mathbb{R}^n} \{ f(x) \mid Ax = b,\ x \in \overline{C},\ g_j(x) \le 0,\ j = 1, \ldots, s \}$, where $b \in \mathbb{R}^m$, $A \in \mathbb{R}^{m \times n}$ is a
full rank matrix, $\overline{C}$ is the closure of a nonempty, open and convex subset $C$ of $\mathbb{R}^n$, and $g_j(\cdot)$, $j = 1, \ldots, s$, are nonlinear convex functions. Our strategy
consists, firstly, in introducing a barrier-type penalty for the constraints $g_j(x) \le 0$, then endowing $\{ x \in \mathbb{R}^n \mid Ax = b,\ x \in C \}$ with the Riemannian structure induced by
the Hessian of an essentially smooth convex function $h$ such that $C = \operatorname{int}(\operatorname{dom} h)$, and finally considering the flow generated by the Riemannian penalty gradient
vector field. Under minimal hypotheses, we investigate the well-posedness of the resulting ODE and we prove that the value of the objective function along the trajectories, which are strictly
feasible, converges to the optimal value. Moreover, the value convergence is extended to the sequences generated by an implicit discretization scheme which corresponds to the coupling of an inexact
generalized proximal algorithm.
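As a rough, self-contained illustration of the kind of dynamics the abstract describes (and emphatically not the authors' method), here is a forward-Euler discretization of a Hessian Riemannian gradient flow on a toy problem: $C$ is the positive orthant, $h(x) = \sum_i x_i \log x_i - x_i$ so that the Hessian metric inverts to $\operatorname{diag}(x)$, a single convex inequality is handled by a logarithmic barrier, and the equality constraints $Ax = b$ are simply dropped. The problem data, barrier weight and step size are arbitrary choices.

    import numpy as np

    def hessian_riemannian_flow(x0, c, mu=1e-2, step=1e-3, iters=50000):
        # minimize f(x) = ||x - c||^2 over x > 0 with g(x) = x1 + x2 - 2 <= 0,
        # via the barrier objective phi(x) = f(x) - mu*log(-g(x)) and the flow
        # x' = -(Hess h(x))^{-1} grad phi(x) = -diag(x) grad phi(x).
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(iters):
            slack = 2.0 - x.sum()                 # -g(x), positive strictly inside
            grad_phi = 2.0 * (x - c) + (mu / slack) * np.ones_like(x)
            x = x - step * x * grad_phi           # Euler step in the h-metric
        return x

    c = np.array([2.0, 1.0])                      # unconstrained optimum, infeasible
    x = hessian_riemannian_flow([0.5, 0.5], c)
    print(x, x.sum())  # stays strictly feasible; near (1.5, 0.5) for small mu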
2475 -- Any fool can do it
Surely you know someone who thinks he is very clever. You decide to let him down with the following problem:
• "Can you tell me what the syntax for a set is?", you ask him.
• "Sure!", he replies, "a set encloses a possibly empty list of elements within two curly braces. Each element is either another set or a letter of the given alphabet. Elements in a list are
separated by a comma."
• "So if I give you a word, can you tell me if it is a syntactically correct representation of a set?"
• "Of course, any fool can do it!" is his answer.
Now you got him! You present him with the following grammar, defining formally the syntax for a set (which was described informally by him):
Set ::= "{" Elementlist "}"
Elementlist ::= <empty> | List
List ::= Element | Element "," List
Element ::= Atom | Set
Atom ::= "{" | "}" | ","
<empty> stands for the empty word, i.e., the list in a set can be empty.
Soon he realizes that this task is much harder than he has thought, since the alphabet consists of the characters which are also used for the syntax of the set. So he claims that it is not possible
to decide efficiently if a word consisting of "{", "}" and "," is a syntactically correct representation of a set or not.
To disprove him, you need to write an efficient program that will decide this problem.
The first line of the input contains a number representing the number of lines to follow.
Each line consists of a word, for which your program has to decide if it is a syntactically correct representation of a set. You may assume that each word consists of between 1 and 200 characters
from the set { "{", "}", "," }.
Output for each test case whether the word is a set or not. Adhere to the format shown in the sample output.
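A standard way to decide membership for this (deliberately ambiguous) grammar is an interval dynamic program, i.e. memoized recursive descent with one predicate per nonterminal; with at most 200 characters the O(n^3) work is trivial. The sketch below is mine, not an official solution, and because the sample output is not reproduced above, the exact strings printed ("Set" / "No Set") are placeholders that must be matched against it:

    import sys
    from functools import lru_cache

    def is_set_word(w):
        n = len(w)

        @lru_cache(maxsize=None)
        def element(i, j):                           # w[i:j] derives Element
            return j - i == 1 or derives_set(i, j)   # any single char is an Atom

        @lru_cache(maxsize=None)
        def element_list(i, j):                      # w[i:j] derives a non-empty List
            if element(i, j):
                return True
            return any(element(i, k) and w[k] == ',' and element_list(k + 1, j)
                       for k in range(i + 1, j - 1))

        @lru_cache(maxsize=None)
        def derives_set(i, j):                       # w[i:j] derives Set
            if j - i < 2 or w[i] != '{' or w[j - 1] != '}':
                return False
            return j - i == 2 or element_list(i + 1, j - 1)

        return derives_set(0, n)

    def main():
        sys.setrecursionlimit(20000)
        data = sys.stdin.read().split()
        for word in data[1:int(data[0]) + 1]:
            print("Set" if is_set_word(word) else "No Set")

    main()

The grammar's trap is that braces and commas are also Atoms, so for example "{{}" is a set (the inner "{" is an Atom) while "{,,}" is not.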
Majority vote of total orders
Fix an odd natural number $k$. Suppose we have $k$ total orders on the same (finite) set $X$. Define a tournament on the vertex set $X$ by putting a directed edge $x\rightarrow y$ if a majority of
the total orders compare $x > y$.
1. What tournaments can be obtained this way? Of course, if $k = 1$, only linearly ordered tournaments are possible. I am most interested in the case of small $k$. For example, is there an
excluded-substructure characterization of these tournaments?
2. What if we make the problem harder and ask whether a given directed graph $G$ can be extended to a tournament $T$ such that $T$ can be obtained in this way? Again, if $k = 1$, there are various
simple characterizations, such as all digraphs that contain no directed cycles.
3. What can be said about the computational problem of determining the smallest $k$ that can represent a given tournament or digraph?
I assume, perhaps naively, that this problem already occurs in the literature, perhaps in the theory of voting/social choice, so I would be happy with references instead of solutions if that's the case.
co.combinatorics voting-theory directed-graph
Are the total orders allowed to occur with multiplicity? Then clearly if a tournament occurs then all subtournaments occur. So there is some list of excluded substructures, and the question is if
we can characterize it. But if not then it is not even obvious that this is closed under substructures. – Will Sawin Jan 18 '13 at 8:06
Yes, I'm allowing multiplicity. So I would be interested, for example, in a description of the excluded sub-tournaments for k=3. – aorq Jan 18 '13 at 16:50
4 Answers
Every possible tournament on $n$ vertices is realisable with polynomially many voters. This recent paper cites D. C. McGarvey, A theorem on the construction of voting paradoxes, Econometrica 21 (1953), 608-610.
I think Gil Kalai proved some generalization of this result, but I'm not sure. – Michael Greinecker Jan 26 '13 at 10:14
You say you are interested in small $ k $. This makes sense, because allowing an arbitrarily large $ k $ makes the question trivial (provided you allow repetition of a linear order with any
multiplicity as well).
You can get any tournament as the majority vote of some number of linear orders.
Indeed, suppose you have $n$ vertices (where $3 \le n$) and a tournament on this you want to obtain. For every arc $(u, v)$ in the tournament, take all $(n - 1)!$ linear orders in which $v$ is
greater than $u$ and they are adjacent, so there is no vertex between them. In these linear orders, any edge other than $\{u, v\}$ occurs the same number of times in the two directions. Gather these
linear orders for all edges in the tournament (that's $n(n-1)(n-1)!/2$ linear orders), and add any one linear order to make the total odd. The majority vote of these shall give your tournament.
Remark. I don't claim this construction to be optimal; indeed, instead of the factorial number of orders used here, I think that you might be able to choose $k$ to grow only polynomially in $n$.
Update: it seems Ben Barber was a bit faster than me to post an answer that proves a bit more than this one.
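For concreteness, here is a runnable sketch (my gloss; neither answer spells this variant out) of the cheaper two-voters-per-arc version of the McGarvey construction: the two orders attached to an arc agree only on that pair and cancel on every other pair, so k = n(n-1) voters suffice, plus one arbitrary extra order to make k odd:

    from itertools import combinations

    def mcgarvey_profile(vertices, arcs):
        # arcs: pairs (x, y) meaning x beats y in the target tournament
        voters = []
        for x, y in arcs:
            rest = [v for v in vertices if v != x and v != y]
            voters.append([x, y] + rest)          # x > y > rest
            voters.append(rest[::-1] + [x, y])    # reversed rest > x > y
        voters.append(list(vertices))             # one extra order, so k is odd
        return voters

    def majority_tournament(vertices, voters):
        beats = set()
        for x, y in combinations(vertices, 2):
            margin = sum(1 if v.index(x) < v.index(y) else -1 for v in voters)
            beats.add((x, y) if margin > 0 else (y, x))   # k odd, so margin != 0
        return beats

    # A 3-cycle, which no single linear order can realise:
    V = ["a", "b", "c"]
    arcs = {("a", "b"), ("b", "c"), ("c", "a")}
    assert majority_tournament(V, mcgarvey_profile(V, arcs)) == arcs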
These tournaments are called majority tournaments and are studied in several papers, e.g.
For $k=3$, the following paper shows an example of a non-3-majority tournament with 8 vertices.
A few years ago, I checked that every tournament with 7 vertices (even the Paley tournament) is 3-majority by using a computer.
Waterman polyhedra
Photos taken by Bob Malvasio.
The gray polyhedra are all-space-fillers. Faces have been colored as follows: red - along the 6 x, y, z axes; blue - as per the 8 vertices of a cube; and yellow - as per the 12 mid-points of the edges of a cube.
"Gaps" between those three have been appropriately colored as orange, purple and green.
Bernard Bolzano: Philosophy of Mathematical Knowledge
In Bernard Bolzano’s theory of mathematical knowledge, properties such as analyticity and logical consequence are defined on the basis of a substitutional procedure that comes with a conception of
logical form that prefigured contemporary treatments such as those of Quine and Tarski. Three results are particularly interesting: the elaboration of a calculus of probability, the definition of
(narrow and broad) analyticity, and the definition of what it is for a set of propositions to stand in a relation of deducibility (Ableitbarkeit) with another. The main problem with assessing
Bolzano’s notions of analyticity and deducibility is that, although they offer a genuinely original treatment of certain kinds of semantic regularities, contrary to what one might expect they do not
deliver an account of either epistemic or modal necessity. This failure suggests that Bolzano does not have a workable account of either deductive knowledge or demonstration. Yet, Bolzano’s views on
deductive knowledge rest on a theory of grounding (Abfolge) and justification whose role in his theory is to provide the basis for a theory of mathematical demonstration and explanation whose
historical interest is undeniable.
1. His Life and Publications
Bernard Placidus Johann Nepomuk Bolzano was born on 5 October 1781 in Prague. He was the son of an Italian art merchant and of a German-speaking Czech mother. His early schooling was unexceptional:
private tutors and education at the lyceum. In the second half of the 1790s, he studied philosophy and mathematics at the Charles-Ferdinand University. He began his theology studies in the Fall of
1800 and simultaneously wrote his first mathematical treatise. When he completed his studies in 1804, two university positions were open in Prague, one in mathematics, the other one in the “Sciences
of the Catholic Religion.” He obtained both, but chose the second: Bolzano adhered to the Utilitarian principle and believed that one must always act, after considering all possibilities, in
accordance with the greater good. He was hastily ordained, obtained his doctoral degree in philosophy and began work in his new university position in 1805. His professional career would be
punctuated by sickness—he suffered from respiratory illness—and controversy. Bolzano’s liberal views on public matters and politics would serve him ill in a context dominated by conservatism in
Austria. In 1819, he was absurdly accused of “heresy” and subjected to an investigation that would last five years after which he was forced to retire and banned from publication. From then on, he
devoted himself entirely to his work.
Bolzano’s Considerations on Some Objects of Elementary Geometry (1804) received virtually no attention at the time they were published and the few commentators who have appraised his early work
concur in saying that its interest is merely historical. (Russ 2004, Sebestik 1992; see also Waldegg 2001). Bolzano’s investigations in geometry did not anticipate modern axiomatic approaches to the
discipline–he was attempting to prove Euclid’s parallel postulate–and did not belong to the trend that would culminate with the birth of non-Euclidean geometries, the existence of which Bolzano’s
contemporary Johann Carl Friedrich Gauss (1777-1855) claimed to have discovered and whose first samples were found in the works of Nikolai Lobatchevski (1792-1856) and Janos Bolyai (1802-1860), whom
Bolzano did not read. (See Sebestik 1992, 33-72 for a discussion of Bolzano’s contribution to geometry; see also Russ 2004, 13-23). As Sebestik explains (1992, 35 note), Bolzano never put into
question the results to which he had come in (1804).
By contrast, Bolzano is renowned for his anticipation of significant results in analysis. Three booklets that appeared in 1816-17 have drawn the attention of historians of mathematics, one of which,
the Pure Analytic Proof, was reedited in 1894 and 1905. (Rusnock 2000, 56-86; 158-198) At the time of their publication however they attracted hardly any notice. Only one review is known (see
Schubring 1993, 43-53). According to (Grattan-Guinness 1970), Cauchy plagiarized (Bolzano 1817a) in his Cours d’Analyse, but this hypothesis is disputed in (Freudenthal 1971) and (Sebestik
1992, 107ff). This might explain why Bolzano chose to resume the philosophical and methodological investigations he had initiated in the Contributions to a Better Founded Exposition of Mathematics
(1810) a decade earlier. At the end of the 1830s, after he had worked out the logical basis for his system in the Theory of Science (1837), Bolzano returned once more to mathematics and spent the
last years of his life working on the Theory of Quantities. The latter remained unpublished until after his death, and only excerpts appeared in print in the 19th century, most notably the Paradoxes
of the Infinite (1851). The Theory of Functions (1930) and the Pure Theory of Numbers (1931) were edited by the Czech mathematician Karel Rychlik and published in 1930 and 1931 respectively by a
commission from the Royal Bohemian Academy of Science. All these works have now been translated into English (See Russ 2004).
2. The Need for a New Logic
Bolzano understood the main obstacle to the development of mathematics in his time to be the lack of proper logical resources. He believed syllogistic (that is, traditional Aristotelian logic) was
utterly unfit for the purpose. He saw the task of the speculative part of mathematics that belongs at once to philosophy as consisting in providing a new logic following which a reform of all
sciences should take place. As Bolzano conceived of it, philosophy of mathematics is one aspect of a more general concern for logic, methodology, the theory of knowledge, and, in general, the
epistemological foundation of deductive sciences, “purely conceptual disciplines” as Bolzano calls them, that unfolds throughout his mathematical work and forms the foremost topic of his philosophy.
The latter falls in two phases. The period of the Contributions, which extends throughout the 1810s, and the period of the Theory of Science, which was written in the course of the 1820s and
published anonymously in 1837. In the Contributions, Bolzano’s undertaking remained largely programmatic and by no means definitive. By the time he was writing the Theory of Science he had revised
most of his views, such as those of the multiple copula, analyticity and necessity. (See Rusnock 2000, 31-55, for discussion.) Nonetheless, the leitmotiv of Bolzano’s mature epistemology already
comes through in 1810, namely his fundamental disagreement with the “Kantian Theory of Construction of Concepts through Intuitions” to which he devoted the Appendix of the Contributions. (See Rusnock
2000, 198-204 for an English translation; see also Russ 2004, 132-137). In this, Bolzano can be seen to have anticipated an important aspect of later criticisms of Kant, Russell’s for instance (1903
§§ 4, 5, 423, 433-4). As Bolzano saw it, an adequate account of demonstration excludes appeal to non-conceptual inferential steps, intuitions or any other proxy for logic.
In the Theory of Science, Bolzano’s epistemology of deductive disciplines is based on two innovations. On the one hand, properties such as analyticity or deducibility (Ableitbarkeit) are defined not
for thoughts or sentences but for what Bolzano conceives to be the objective content of the former and the meaning of the latter and which he calls “propositions in themselves” (Sätze an sich) or
“propositions.” On the other hand, properties such as analyticity and deducibility are “formal” in that they are features of sets of propositions defined by a fixed vocabulary; they come to the fore
through the application of a substitution method that consists in arbitrarily “varying” determinate components in a proposition so as to derive different types of semantic regularities.
3. Analyticity and Deducibility
Bolzano’s theory of analyticity is a favored topic in the literature. (Cf. Bar-Hillel 1950; Etchemendy 1988; Künne 2006; Lapointe 2000, 2008; Morscher 2003; Neeman 1970; Proust 1981, 1989; Textor
2000, 2001) This should be no surprise. For one thing, by contrast to the Kantian definition, Bolzano’s allows us to determine not only whether a grammatical construction of the form
subject-predicate is analytic, as Kant has it, but whether any construction is analytic or not. This includes hypotheticals, disjunctions, conjunctions, and so forth, but also any proposition that
presents a syntactic complexity that is foreign to traditional (that is, Aristotelian) logic. Analyticity is not tied to any “syntactic” conception of “logical form.” It is a relation pertaining to
the truth of propositions and not merely to their form or structure. Let ‘A[ij…](S)’ stand for “The proposition S is analytic with respect to the variable components i, j…”
A[ij…](S) iff:
(i) i, j, … can be varied so as to yield at least one objectual substitution instance of S
(ii) All substitution instances of S have the same truth-value as S
where a substitution instance is “objectual” if the concept that is designated by the subject has at least one object. On this account, propositions can be analytically true or analytically false.
Although the idea that analyticity should be defined on the basis of a purely semantic criterion is in itself a great anticipation, Bolzano’s conception of analyticity fails in other respects. For
one, it does not provide an account of what it means for a proposition to be true by virtue of meaning alone and to be knowable as such. “… is analytic with respect to …” is not a semantic predicate
of the type one would expect, but is a variable holding operator. A statement ascribing analyticity to a given propositional form, say “X who is a man is mortal” if it is true, is true because every
substitution instance of “X who is a man is mortal” that also has objectuality is true. Bolzano’s definition of analyticity offers a fairly clear description of substitutional quantification — to say
that a propositional form is analytic is to say that all its substitution instances are true. Yet because he deals not primarily with sentences and words but with their meaning, that is, with ideas
and propositions in themselves, and because there is at least one idea for every object, there is in principle a “name” for every object. For this reason, although Bolzano’s approach to
quantification is substitutional, he is not liable to the reproach that his interpretation of the universal quantifier cannot account for every state of the world. The resources he has at his
disposal are in principle as rich as necessary to provide a complete description of the domain the theory is about.
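Because the test is substitutional, it is easy to mimic over a finite domain of ideas. The toy model below is entirely mine: a propositional form maps each substituend to the pair (is the resulting subject idea objectual?, is the instance true?), and clauses (i) and (ii) are checked directly, with (ii) restricted to objectual variants as Bolzano's examples suggest:

    def is_analytic(form, domain):
        # (i) at least one objectual instance;
        # (ii) all objectual instances share one truth value.
        truths = [truth for objectual, truth in map(form, domain) if objectual]
        return len(truths) >= 1 and len(set(truths)) == 1

    # "X, who is a man, is mortal", varied with respect to X:
    world = {"Caius": ("man", True), "Gaia": ("woman", True),
             "Bucephalus": ("horse", True)}   # (kind, mortal?)

    def form(x):
        kind, mortal = world[x]
        return kind == "man", mortal   # objectual iff "X who is a man" names something

    print(is_analytic(form, world))    # True: every objectual variant is true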
Bolzano’s epistemology rests on a theory of logical consequence that is twofold: an account of truth preservation that is epitomized in his notion of “deducibility” (Ableitbarkeit) on the one hand
(see Siebel 1996, 2002, 2003; van Benthem 1985, 2003; Etchemendy 1990), and an account of “objective grounding” (Abfolge) on the other (see Tatzel 2002, 2003; see also Thompson 1981; Corcoran 1975).
The notion of deducibility presents a semantic account of truth-preservation that is neither trivial nor careless. The same holds for his views on probability. Likewise his attempt at a definition of
grounding constitutes the basis of an account of a priori knowledge and mathematical explanations whose interest has been noticed by some authors, and in some cases even vindicated (Mancosu 1999).
As Bolzano presents it, although analyticity is defined for individual propositional forms, deducibility is a property defined for sets of those forms. Let “D[ij…](T, T’, T’’, … ; S, S’, S’’, …)”
stand for “The set of propositions T, T’, T’’, … is deducible from the set of propositions S, S’, S’’, … with respect to i, j, ….” Bolzano defines deducibility in the following terms:
D[ij…](T, T’, T’’, … ; S, S’, S’’, …) iff
(i) i, j, … can be varied so as to yield at least one true substitution instance of S, S’, S’’, … and T, T’, T’’, …
(ii) whenever S, S’, S’’, … are all true, T, T’, T’’, … are also true.
Bolzano’s discussion of deducibility is exhaustive. It extends over thirty-six paragraphs, and he draws a series of theorems from his definition. The most significant theorems are the following:
• ¬(D[ij…](T, T’, T’’, …; S, S’, S’’, …) → D[ij…](S, S’, S’’, …; T, T’, T’’, …)) (asymmetry)
• (D[ij…](T, T’, T’’, …; S, S’, S’’, …) & D[ij…](R, R’, R’’, …; T, T’, T’’, …)) → D[ij…](R, R’, R’’, …; S, S’, S’’, …) (transitivity)
In addition, assuming that the S, S’, S’’, … share at least one variable idea whose variation makes them all true at the same time, then:
• D[ij…](S, S’, S’’, …; S, S’, S’’, …) (reflexivity)
As regards reflexivity, the assumption that the S, S’, S’’, … must share at least one variable follows from the fact that whenever S, S’, S’’, … contain a falsehood S that does not share at least one
variable idea i, j, … with the conclusion T, T’, T’’, …, there is no substitution that can make both the premises and the conclusion true at the same time, and the compatibility constraint is not
On Bolzano’s account, fully-fledged cases of deducibility include both formally valid arguments as well as materially valid ones, for instance:
Caius is rational
is deducible with respect to ‘Caius’, ‘man’ and ‘rational’ from
Caius is a man
Men are rational
Caius is rational
is deducible with respect to ‘Caius’ from
Caius is a man.
There is a sharp distinction to be drawn between arguments of the former kind and arguments of the latter. Assuming a satisfactory account of logical form, in order to know that the conclusion
follows from the premises in arguments of the former kind one only needs to consider their structure or form; no other kind of knowledge is required. In the latter argument however in order to infer
from the premise to the conclusion, one must know more than its form. One also needs to understand the signification of ‘man’ and ‘rational’ since in order to know that Caius is rational one also
needs to know in addition to the fact that Caius is a man that all men are rational. There is good evidence that Bolzano was aware of some such distinction between arguments that preserve truth and
arguments that do so in virtue of their “form.” Unfortunately, Bolzano’s definition of deducibility does not systematically uphold the distinction. Since deducibility applies across the board to all
inferences that preserve truth from premises to conclusion with respect to a given set of ideas, it does not of itself guarantee that an argument be formally valid and the notion of deducibility
turns out to be flawed: it makes it impossible to extend our knowledge in the way we would expect it. If we know, for instance, that all instances of modus ponens are logically valid, we can infer
from two propositions whose truth we’ve recognized:
If Caius is a man, then he is mortal
Caius is a man
a new proposition:
Caius is mortal
whose truth we might not have previously known. Bolzano’s account of deducibility does not allow one to extend one’s knowledge in this way since in order to know for every substitution instance that
truth is preserved from the premises to the conclusion one has to know that the premises are true and that the conclusion is true.
On Bolzano’s account, in order for a conclusion to be deducible from a given set of premises, there must be at least one substitution that makes both the premises and the conclusion true at once. He
calls this the “compatibility” (Verträglichkeit) condition, a requirement that is not reflected in classical conceptions of consequence. As a result, Bolzano’s program converges with many
contemporary attempts at a definition of non-classical notions of logical consequence. Given the compatibility condition, although a logical truth may follow from any (set of) true premises (with
respect to certain components), nothing as opposed to everything is deducible from a contradiction. The compatibility condition invalidates the ex contradictio quod libet or explosion principle. The
reason for this is that no substitution of ‘p’ in “‘q’ is deducible from ‘p and non-p’” can fulfil the compatibility constraint; no interpretation of ‘p’ in ‘p and non-p’ can yield a true variant
and hence there are no ideas that can be varied so as to make both the premises and the conclusion true at once. This has at least two remarkable upshots. First, the compatibility constraint
invalidates the law of contraposition. Whenever one of S, S’, S’’, … is analytically true, that is, when all of its substitution instances are true, we cannot infer from:
D[ij…](T, T’, T’’, … ; S, S’, S’’, …)
to
D[ij…](¬S, ¬S’, ¬S’’, …; ¬T, ¬T’, ¬T’’, …)
since ‘¬S, ¬S’, ¬S’’’ entails a contradiction, that is, an analytically false proposition. For instance,
Caius is a physician who specializes in the eyes
is deducible from
Every ophthalmologist is an ophthalmologist
Caius is an ophthalmologist
with respect to ‘ophthalmologist’. However,
It is not the case that every ophthalmologist is an ophthalmologist
It is not the case that Caius is an ophthalmologist
are not deducible with respect to the same component from:
It is not the case that Caius is a physician who specializes in the eyes.
Second, the compatibility condition makes Bolzano’s logic nonmonotonic. Whenever the premise added contains contradictory information, the conclusion no longer follows. While compatibility does not
allow him to deal with all cases of defeasible inference, it allows Bolzano to account for cases that imply typicality considerations. It is typical of crows that they be black. Hence from the fact
that x is a crow we can infer that x is black. On Bolzano’s account, if one adds a premise that describes a new case that contradicts previous observation, say that this crow is not black, the conclusion
no longer follows since the inference does not fulfil the compatibility condition: no substitution can make both the premises and the conclusion true at the same time.
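The same finite-domain toy makes the compatibility condition vivid. In this sketch (mine, not Bolzano's notation), premises and conclusion are predicates over the substituends; clause (ii) is truth preservation, clause (i) is compatibility, and the second call shows why nothing, rather than everything, is deducible from a contradiction:

    def is_deducible(premises, conclusion, domain):
        # (i) some substitution makes premises and conclusion all true;
        # (ii) every substitution making all premises true makes the conclusion true.
        compatible = any(all(p(x) for p in premises) and conclusion(x)
                         for x in domain)
        preserving = all(conclusion(x) for x in domain
                         if all(p(x) for p in premises))
        return compatible and preserving

    domain = range(1, 21)
    even = lambda x: x % 2 == 0
    odd = lambda x: x % 2 == 1
    multiple_of_4 = lambda x: x % 4 == 0

    print(is_deducible([multiple_of_4], even, domain))   # True
    print(is_deducible([even, odd], even, domain))       # False: compatibility fails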
At many places Bolzano suggests that deducibility is a type of probabilistic inference, namely the limit case in which the probability of a proposition T relative to a set of premises S, S’, S’’, … = 1.
Bolzano also calls inferences of this type “perfect inference.” More generally, the value of a probability inference from S, S’, S’’, … to T with respect to a set of variable ideas i, j,… is
determined by comparing the number of cases in which the substitution of i, j,… yields true instances of both S, S’, S’’… and T, to the number of cases in which S, S’, S’’,… are true (with respect to
i, j,…). Let’s assume that Caius is to draw a ball from a container in which there are 90 black and 10 white and that the task is to determine the degree of probability of the conclusion “Caius draws
a black ball.” On Bolzano’s account, in order to determine the probability of the conclusion one must first establish the number n of admissible substitution instances K[1], K[2], …, K[n] of the
premise “Caius draws a ball” with respect to ‘ball.’ The number n of acceptable substitution instances of the premise is in general a function of the following considerations: (i) the probability of
each of K[1], K[2], …, K[n] is the same; (ii) only one of K[1], K[2], …, K[n] can be true at once; (iii) taken together, they exhaust all objectual substitution instances of the premise. In this
case, since there are 100 balls in the container, there are only 100 admissible substitution instances of the premise, namely K[1]: “Caius draws ball number 1,” K[2]: “Caius draws ball number 2,”…, K
[100]: “Caius draws ball number 100.” If the number of admissible substitution instances K[1], K[2], …, K[n] is k, and the number of cases in which “Caius draws a black ball” is deducible from
“Caius draws a ball” is m, then the probability μ of “Caius draws a black ball” is the fraction m/k = 90/100 = 9/10. In the case of deducibility the number of cases in which the substitution yields
both true variants of the premises and the conclusion is identical to the number of true admissible variants of the premises, that is, μ = m/k = 1. If there is no substitution that makes both the
premises and the conclusion true at the same
time, then the degree of probability of the conclusion is 0, that is, the conclusion is not deducible from the premises.
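The urn calculation can be spelled out in the same substitutional spirit; this tiny sketch (mine) just counts the admissible variants of the premise against those that also verify the conclusion:

    balls = ["black"] * 90 + ["white"] * 10
    k = len(balls)               # admissible variants of "Caius draws a ball"
    m = sum(1 for b in balls if b == "black")
    print(m, k, m / k)           # 90 100 0.9; deducibility is the limit case m/k = 1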
4. Grounding
Bolzano did not think that his account of truth preservation exhausted the topic of inference since it does not account for what is specific to knowledge we acquire in mathematics. Such knowledge he
considered to be necessary and a priori, two qualities that relations defined on the basis of the substitutional method do not have. Bolzano called “grounding” (Abfolge) the relation that defines
structures in which propositions relate as grounds to their consequences. As Bolzano conceived of it, my knowing that ‘p’ grounds ‘q’ has explanatory virtue: grounding aims at epitomizing certain
intuitions about scientific explanation and seeks to explain, roughly, what, according to Bolzano, the truly scientific mind ought to mean when, in the conduct of a scientific inquiry, she uses the
phrase “…because…” in response the question “why …?” Since in addition the propositions that pertain to “grounding” orders such as arithmetic and geometry are invariably true and purely conceptual,
then grasping the relations among propositions in the latter invariably warrants knowledge that does not rest on extra-conceptual resources, a move that allowed Bolzano to debunk the Kantian theory
of pure intuition.
Bolzano’s notion of grounding is defined by a set of distinctive features. For one thing, grounding is a unique relation: for every true proposition that is not primitive, there is a unique
tree-structure that relates it to the axioms from which it can be deduced. That there is such a unique objective order is an assumption on Bolzano’s part that is in many ways antiquated, but it
cannot be ignored. Uniqueness follows from two distinctions Bolzano makes. On the one hand, Bolzano distinguishes between simple and complex propositions: a ground (consequence) may or may not be
complex. A complex ground is composed of a number of different truths that are in turn composed of a number of different primitive concepts. On the other hand, Bolzano distinguishes between the
complete ground or consequence of a proposition and the partial ground or consequence thereof. On this basis, he claims that the complete ground of a proposition is never more complex than is its
complete consequence. That is, propositions involved in the complete ground of a proposition are not composed of more distinct primitive concepts than is the complete consequence. Given that Bolzano
thinks that the grounding order is ultimately determined by a finite number of simple concepts, this restriction implies that the regression in the grounding order from a proposition to its ground is
finite. Ultimately, the regression leads to true primitive propositions, that is, axioms whose defining characteristic is their absolute simplicity.
Note that the regression to primitive propositions is not affected by the fact that the same proposition may appear at different levels of the hierarchy. Although the grounding order is structured vertically and cannot have infinitely many distinct immediate antecedents, the horizontal structure must for its part allow for recursion if basic inductive mathematical demonstrations are to be conducted. Provided that the recurring propositions do not appear on the same branch of the tree, Bolzano is in a position to avoid loops that would make it impossible to guarantee that we ever arrive at the primitive propositions, or that there be primitive propositions in the first place.
Bolzano draws a distinction between cases in which what we have is the immediate ground for the truth of a proposition and cases in which the ground is mediated (implicitly or explicitly) by other
truths. When Bolzano speaks of grounding, what he has in mind is invariably immediate grounding, and he understands the notion of mediate grounding as a derivative notion. It is the transitive
closure of the more primitive notion of immediate grounding. p is the mediate consequence of the propositions Ψ1, …, Ψn if and only if there is a chain of immediate consequences starting with Ψ1, …,
Ψn and ending with p. p is the immediate consequence of Ψ1, …, Ψn if there are no intermediate logical steps between Ψ1, …, Ψn and p.
Grounding is not reflexive. p cannot be its own ground, whether mediate or immediate. The non-reflexive character of grounding can be inferred from its asymmetry, another of Bolzano’s assumptions. If grounding were reflexive, then the truth that p could be grounded on itself; but given that if p grounds q it is not the case that q grounds p, this would imply a contradiction since, by substitution, p could at once ground itself and not ground itself. Irreflexivity allows Bolzano to deny the traditional tenet according to which some propositions such as axioms are grounded in themselves. Bolzano
explains that this is a loose way of talking, that those who maintain this idea are unaware of the putative absurdity of saying that a proposition is its own consequence and that the main motivation
behind this claim is the attempt to maintain, unnecessarily, the idea that every proposition has a ground across the board. According to Bolzano however, the ground for the truth of a primitive
proposition does not lie in itself but in the concepts of which this proposition consists.
One important distinction to be made between deducibility and grounding, as Bolzano conceives of them, rests in the fact that while grounding is meant to support the idea that a priori knowledge is
axiomatic, that there are (true) primitive, atomic propositions from which all other propositions in the system follow as consequences, deducibility carries no such implication. Whether a
proposition q is deducible from another proposition p is not contingent on q’s being ultimately derivable from the propositions from which p is derivable. That “Caius is mortal” is deducible from
“Caius is a man” can be established independently of the truth that Caius is a finite being. Likewise, the possibility that deducibility be a special case of grounding is unacceptable for Bolzano.
Not all cases of deducibility are cases of grounding. For instance,
It is warmer in the summer than in the winter
is deducible from
Thermometers, if they function properly, are higher in the summer than in the winter
but it is not an objective consequence of the latter in Bolzano’s sense. On the contrary, the reason why thermometers are higher in the summer is that it is warmer so that, in the previous example,
the order of grounding is reversed. There are cases in which true propositions that stand in a relation of deducibility also stand in a relation of grounding, what Bolzano calls “formal grounding.”
It is not difficult to see what could be the interest of the latter. Strictly speaking, in an inference that fits both the notion of grounding and that of deducibility, the conclusion follows both
necessarily (by virtue of its being a relation of grounding) and as a matter of truth preservation (by virtue of its being an instance of deducibility) from the premises. Formal grounding however
presents little interest: it is not an additional resource of Bolzano’s logic but a designation for inferences that happen to satisfy two definitions at once: I can only
know that an inference fits the definition of formal grounding if I know that it fits both that of grounding and that of deducibility. Once I know that it fits both, to say that it is a case of
formal grounding does not teach me much I did not already know.
It could be tempting to think that grounding is a kind of deducibility, namely the case in which the premises are systematically simpler than the conclusion. Bolzano suggests something similar when
he claims that grounding might not, in the last instance, be more than an ordering of truths by virtue of which we can deduce, from the smallest number of simple premises, the largest possible number
of the remaining truths as conclusion. This would require us however to ignore important differences between deducibility and grounding. When I say that “The thermometer is higher in the summer” is
deducible from “It is warmer in the summer,” I am making a claim about the fact that every time “It is warmer in X” yields a true substitution instance, “The thermometer is higher in X” yields one as
well. When I say that “The thermometer is higher in the summer” is grounded in “It is warmer in the summer” I am making a claim about determinate conceptual relations within a given theory. I am
saying that given what it means to be warmer and what it means to be a thermometer, it cannot be the case that it be warm and that the thermometer not be high. Of course the theory can be wrong, but
assuming that it is true, the relation is necessary since it follows from the (true) axioms of the theory. In this respect, a priori knowledge can only be achieved in deductive disciplines when we
grasp the necessary relations that subsist among the (true and purely conceptual) propositions they involve. If I know that a theorem follows from an axiom or a set of them, I know so with necessity.
5. Objective Proofs
Bolzano’s peculiar understanding of grounding is liable to a series of problems, both exegetical and theoretical. Nonetheless, the account of mathematical demonstration it underlies, what he terms “Begründungen” (objective proofs), is of vast historical interest. Three notions form the basis of Bolzano’s account of mathematical and deductive knowledge in general: grounding (
Abfolge), objective justification (objective Erkenntnisgrund) and objective proof (Begründung). The structure of the theory is the following: (i) grounding is a relation that subsists between true
propositions independently of epistemic access to them. We may grasp objective grounding relations and (ii) the possibility of grasping the latter is also the condition for our having objective
justifications for our beliefs, as opposed to merely “subjective” ones. Finally, (iii) objective proofs are meant to cause the agent to have objective justifications in this sense. With respect to
(ii), Bolzano’s idea is explicitly Aristotelian: Bolzano believes that whenever an agent grasps p and grasps the grounding relation between p and q, she also knows the ground for the existence of q
and therefore putatively why q is true, namely because p. If we follow (iii), the role of a (typically) linguistic or schematic representation of (i) is to cause the agent to have (ii). According to
Bolzano, objective proofs succeed in providing agents with an objective justification for their relevant beliefs because they make the objective ground of the propositions that form the content of
these beliefs epistemically accessible to the agent. As Bolzano sees it, the typical objective proof is devised so as to reliably cause the reader or hearer to have an objective justification for the
truth of the proposition. The objective proof is merely ‘reliable’ since whether I do acquire objective knowledge upon surveying the proof in question depends in part on my background knowledge, in
part on my overall ability to process the relevant inferences, and the latter, according to Bolzano’s theory of cognition, is mostly a function of my having been previously acquainted with many
inferences of different types. The more accustomed I am to drawing inferences, the more reliably the objective proof is likely to cause in me the relevant objective justification.
According to Bolzano, there are good reasons why we should place strong constraints on mathematical demonstration, and in everyday practice favor the objective proofs that provide us with objective
mathematical knowledge. It would be wrong however to assume that on his account mathematical knowledge can only be achieved via objective proofs. Objective proofs are not the only type of
demonstration in Bolzano’s theory of knowledge, nor indeed the only bona fide one. Bolzano opposes objective proofs, that is, proofs that provide an objective justification to what he calls
Gewissmachungen (certifications). Certifications, according to Bolzano, are also types of demonstrations (there are many different species thereof) in the sense that they too are meant to cause
agents to know a certain truth p on the basis of another one q. When an agent is caused to know that something is true on the basis of a certification, the agent has a subjective, as opposed to an
objective, justification for his or her belief. Bolzano’s theory of certification and subjective justification is an indispensable element of his account of empirical knowledge. Certifications are
ubiquitous in empirical sciences such as medicine. Medical diagnosis relies on certifications in Bolzano’s sense. Symptoms are typically visible effects, direct or indirect, of diseases that allow us
to recognize them. When we rely on symptoms to identify a disease, we thus never know this disease through its objective ground. Likewise, subjective proofs also play an important role in Bolzano’s
account of mathematical knowledge. As Bolzano sees it, in order to have an occurrent (and not a merely dispositional) cognitive attitude towards a given propositional content, an agent must somehow
be causally affected. This may be brought about in many ways. Beliefs and ideas arise in our mind most of the time in a more or less sophisticated, chaotic and spontaneous way, on the basis of mental
associations and/or causal interactions with the world. The availability of a linguistic object that represents the grounding relation is meant to reliably cause objective knowledge, that is, to
bring one’s interlocutor to have occurrent objective knowledge of a certain truth. This may however not be the best way to cause the given belief per se. It might be that in order to cause me to recognize the truth of the intermediate value theorem, my interlocutor needs to resort to a more or less intuitive diagrammatic explanation, which is precisely what objective proofs exclude. Since, as Bolzano conceives of it, the purpose of demonstrations is primarily to cause the interlocutor to have a higher degree of confidence (Zuversicht) in one of his beliefs, and since Bolzano emphasizes the
effectiveness of proofs over their providing objective justifications, objective proofs should not be seen as the only canonical or scientifically acceptable means to bring an agent to bestow
confidence on a judgment. Besides, Bolzano warns us against the idea that one ought to use only logical or formal demonstrations that might end up boring the interlocutor to distraction and have a
rather adverse epistemic effect. Although Bolzano claims that we ought to use objective proofs as often as possible, he also recognizes that we sometimes have to take shortcuts or simply use
heuristic creativity to cause our interlocutor to bestow confidence on the truths of mathematics, especially when the interlocutor has only partial and scattered knowledge of the discipline.
Objective proof, in addition to its epistemic virtue, introduces pragmatic constraints on demonstration that are meant to steer actual practices in deductive science. The idea that mathematical
demonstrations ought to reflect the grounding order entails two things. First, it requires that an agent not deny that a proposition has an objective ground, and is thus inferable from more primitive propositions, whenever that agent, perhaps owing to her medical condition or limited means of recognition, fails to recognize that ground. Second, it ensures that the demonstration procedure is not short-circuited by criteria such as intuition, evidence or insight. The requirement that mathematical demonstrations be objective proofs forbids that the agent’s inability to derive a proposition from more primitive ones be compensated by a non grounding-related feature. In this connection, Mancosu speaks of the heuristic fruitfulness of Bolzano’s requirement on scientific exposition (Mancosu 1999, 436). Although Bolzano considered that objective proofs should be favored in mathematical demonstration, and despite the fact that he thought that only objective proofs have the advantage of letting us understand why a given proposition is true, he did not think that in everyday practice mathematical demonstrations ought to be objective
proofs. Bolzano thinks that there are situations in which it is legitimate to accept proofs that deliver only evidential knowledge. When it comes to setting out a mathematical theory the main
objective should be to cause the agent to have more confidence in the truth of the proposition to be demonstrated than he would have otherwise or even merely to incite him to look for an objective
justification by himself. Hence, given certain circumstantial epistemic constraints, Bolzano is even willing to concede that certain proofs can be reduced to a brief justification of one’s opinion.
Furthermore, though this would deserve to be investigated further, it is worth mentioning that Bolzano is not averse to resorting to purely inductive means, for instance, when it comes to
mathematical demonstration. This may seem odd, but Bolzano has good reasons to avoid requiring that all our mathematical proofs provide us with objective and explanatory knowledge. For one thing,
asking that all mathematical proofs be objective proofs would not be a reasonable requirement and, in particular, it would not be one that is always epistemically realizable. Given the nature of
grounding, it would often require us to engage in the production of linguistic objects that have immense proportions. Since they are merely probable, Bolzano does think that evidential proofs need to
be supplemented by “decisive” ones. One might want to argue that the latter reduce to objective proofs. If, upon surveying an objective proof, I acquire an objective justification, I cannot doubt the
truth of the conclusion, and it is therefore decisively true. But it is hard to imagine that Bolzano would have thought that the linguistic representation of an inference from deducibility would be
any less decisive. Consider this inference:
Triangles have two dimensions
is deducible from
Figures have two dimensions
Triangles are figures.
Not only is the inference truth-preserving, but the conclusion is also a conceptual truth. It is composed only of concepts which, according to Bolzano, means that its negation would imply a
contradiction and is therefore necessary. In mathematics and other conceptual disciplines, deducibility and grounding both have the epistemic particularity of yielding a belief that can be asserted
with confidence. By contrast, according to Bolzano, though an agent need not always be mistaken whenever she asserts a proposition that stands to its premises in a mere relation of probability, she
is at least liable to committing an error. Inferences whose premises are only probable can only yield a conclusion that is itself merely probable. As Bolzano sees it, confidence is a property of judgments that
are indefeasible. The conclusion (perfectly) deduced from a set of a priori propositions cannot be defeated if only because, if I know its ground, I also know why it is true and necessarily so.
Similarly, if p is true and if I know that q is deducible from p (and this holds a fortiori in the case in which p and q are conceptual truths), I have a warrant, namely the fact that I know that
truth is preserved from premises to conclusion, and I cannot be mistaken about the truth of q.
6. Conclusion
The importance of Bolzano’s contribution to semantics can hardly be overestimated. The same holds for his contribution to the theoretical basis of mathematical practice. Far from ignoring epistemic
and pragmatic constraints, Bolzano discusses them in detail, thus providing a comprehensive basis for a theory of mathematical knowledge that was aimed at supporting work in the discipline. As a
mathematician, Bolzano was attuned to philosophical concerns that escaped the attention of most of his contemporaries and many of his successors. His theory is historically and philosophically
interesting, and it deserves to be investigated further.
7. References and Further Reading
• Bar-Hillel, Yehoshua (1950) “Bolzano’s Definition of Analytic Propositions” Methodos, 32-55. [Republished in Theoria 16, 1950, pp. 91-117; reprinted in Aspects of language: Essays and Lectures
on Philosophy of Language, Linguistic Philosophy and Methodology of Linguistics, Jerusalem, The Magnes Press 1970 pp. 3-28].
• Benthem, Johan van (2003) “Is There Still Logic in Bolzano’s Key?” in Bernard Bolzanos Leistungen in Logik, Mathematik und Physik, Edgar Morscher (ed.) Sankt Augustin, Academia, 11-34.
• Benthem, Johan van (1985) “The Variety of Consequence, According to Bolzano”, Studia Logica 44/4, 389-403.
• Benthem, Johan van (1984) Lessons from Bolzano. Stanford, Center for the Study of Language and Information, Stanford University, 1984.
• Bolzano, Bernard (1969-…) Bernard Bolzano-Gesamtausgabe, dir. E. Winter, J. Berg, F. Kambartel, J. Louzil, B. van Rootselaar, Stuttgart-Bad Cannstatt, Frommann-Holzboog, 2 A, 12.1, Introduction
par Jan Berg.
• Bolzano, Bernard (1976) Ausgewählte Schriften, Winter, Eduard (ed.), Berlin, Union Verlag.
• Bolzano, Bernard (1851) Paradoxien des Unendlichen, (reprint) Wissenschaftliche Buchgesellschaft, 1964. [Dr Bernard Bolzano’s Paradoxien des Unendlichen herausgegeben aus dem schriftlichem
Nachlasse des Verfassers von Dr Fr. Příhonský, Leipzig, Reclam. (Höfler et Hahn (Eds.), Leipzig, Meiner, 1920)]
• Bolzano, Bernard (1948) Geometrische Arbeiten [Geometrical Works], Spisy Bernarda Bolzana, Prague, Royal Bohemian Academy of Science.
• Bolzano, Bernard (1837) Wissenschaftslehre, Sulzbach, Seidel.
• Bolzano, Bernard (1931) Reine Zahlenlehre [Pure Theory of Numbers], Spisy Bernarda Bolzana, Prague, Royal Bohemian Academy of Science.
• Bolzano, Bernard (1930) Funktionenlehre [Theory of Functions], Spisy Bernarda Bolzana, Prague, Royal Bohemian Academy of Science.
• Bolzano, Bernard (1817a) Rein Analytischer Beweis des Lehrsatzes, dass zwischen je zwey Werthen, die ein entgegengesetztes Resultat gewähren, wenigstens eine reelle Wurzel der Gleichung liege, Prague, Haase. 2nd edition, Leipzig, Engelmann, 1905; Facsimile, Berlin, Mayer & Mueller, 1894.
• Bolzano, Bernard (1817b) Die drey Probleme der Rectification, der Complanation und der Cubirung, ohne Betrachtung des unendlich Kleinen, Leipzig, Kummer.
• Bolzano, Bernard (1816) Der binomische Lehrsatz und als Folgerung aus ihm der polynomische, und die Reihen, die zur Berechnung der Logarithmen und Exponentialgrössen dienen, Prague, Enders.
• Bolzano, Bernard (1812) Etwas aus der Logik, Bernard Bolzano-Gesamtausgabe, Stuttgart, Frommann-Holzboog, vol. 2 B 5, p. 140ff.
• Bolzano, Bernard (1810) Beyträge zu einer begründeteren Darstellung der Mathematik; Widtmann, Prague. (Darmstadt, Wissenschaftliche Buchgesellschaft,1974).
• Coffa, Alberto (1991) The Semantic Tradition from Kant to Carnap, Cambridge, Cambridge University Press.
• Dubucs, Jacques & Lapointe, Sandra (2006) “On Bolzano’s Alleged Explicativism,” Synthese 150/2, 229–46.
• Etchemendy, John (2008) “Reflections on Consequence,” in (Patterson 2008), 263-299.
• Etchemendy, John (1990) The Concept of Logical Consequence, Cambridge, Harvard University Press.
• Etchemendy, John (1988) “Models, Semantics, and Logical Truth”, Linguistics and Philosophy, 11, 91-106.
• Freudenthal, H. (1971) “Did Cauchy Plagiarize Bolzano?”, Archive for History of Exact Sciences, 7, 375-92.
• Grattan-Guinness, Ivor (1970) “Bolzano, Cauchy and the ‘New Analysis’ of the Early Nineteenth Century,” Archive for History of Exact Sciences, 6, 372-400.
• Künne Wolfgang (2006) “Analyticity and logical truth: from Bolzano to Quine”, in (Textor 2006), 184-249.
• Lapointe, Sandra (2008), Qu’est-ce que l’analyse?, Paris, Vrin.
• Lapointe, Sandra (2007) “Bolzano’s Semantics and His Critique of the Decompositional Conception of Analysis” in The Analytic Turn, Michael Beaney (Ed.), London, Routledge, pp.219–234.
• Lapointe, Sandra (2000). Analyticité, Universalité et Quantification chez Bolzano. Les Études Philosophiques, 2000/4, 455–470.
• Morscher, Edgar (2003) “La Définition Bolzanienne de l’Analyticité Logique”, Philosophiques 30/1, 149-169.
• Neeman, Ursula (1970), “Analytic and Synthetic Propositions in Kant and Bolzano” Ratio 12, 1-25.
• Patterson, Douglas (ed.) (2008) New Essays on Tarski and Philosophy, Oxford, Oxford University Press.
• Příhonský, František (1850) Neuer Anti-Kant: oder Prüfung der Kritik der reinen Vernunft nach den in Bolzanos Wissenschaftslehre niedergelegten Begriffen, Bautzen, Hiecke.
• Proust, Joëlle (1989) Questions of Form. Logic and the Analytic Proposition from Kant to Carnap. Minneapolis: University of Minnesota Press.
• Proust, Joëlle (1981) “Bolzano’s analytic revisited”, Monist, 214-230.
• Rusnock, Paul (2000) Bolzano’s philosophy and the emergence of modern mathematics, Amsterdam, Rodopi.
• Russ, Steve (2004) The Mathematical Works of Bernard Bolzano, Oxford, Oxford University Press.
• Russell, Bertrand (1903) The Principles of Mathematics, Cambridge, Cambridge University Press.
• Sebestik, Jan (1992) Logique et mathématique chez Bernard Bolzano, Paris, Vrin.
• Schubring, Gert (1993) “Bernard Bolzano. Not as Unknown to His Contemporaries as Is Commonly Believed?” Historia Mathematica, 20, 43-53.
• Siebel, Mark (2003) “La notion bolzanienne de déductibilité” Philosophiques, 30/1, 171-189.
• Siebel, Mark (2002) “Bolzano’s concept of consequence” Monist, 85, 580-599.
• Siebel, Mark (1996) Der Begriff der Ableitbarkeit bei Bolzano, Sankt Augustin, Academia Verlag.
• Tatzel, Armin (2003) “La théorie bolzanienne du fondement et de la consequence” Philosophiques 30/1, 191-217.
• Tatzel, Armin (2002) “Bolzano’s theory of ground and consequence” Notre Dame Journal of Formal Logic 43, 1-25.
• Textor, Mark (ed.) (2006) The Austrian Contribution to Analytic Philosophy, New York, Routledge.
• Textor, Mark, (2001) “Logically analytic propositions “a posteriori”?” History of Philosophy Quarterly, 18, 91-113.
• Textor, Mark (2000) “Bolzano et Husserl sur l’analyticité,” Les Études Philosophiques 2000/4 435–454.
• Waldegg, Guillermina, (2001) “Ontological Convictions and Epistemological Obstacles in Bolzano’s Elementary Geometry”, Science and Education, 10/4 409-418.
Author Information
Sandra LaPointe
Email: sandra.lapointe@mac.com
Kansas State University
U. S. A.
Why do this problem?
This problem offers an opportunity to reflect on the very important concept of fitting a curve to experimental data. Along the way, students will utilise their skills of transforming graphs in order to find a
close fit, and consider ways of deciding how close their fit is. The problem is marked as challenge level 1 as it is a straightforward task to begin, but to find a complete solution for all 10 graphs
is rather more challenging!
Possible approach
Although this problem stands alone, it could also be done as a follow-up to work on transformations of graphs based on the problem Parabolic Patterns.
Students will need access to computers or graphical calculators to get the best out of this task. Familiarity with spreadsheet software is assumed.
Part of the challenge of this problem is to identify which graphs are easiest to fit, as they are not presented in any particular order. One approach is to start by displaying the graphs and
discussing as a class or in pairs which have recognisable shapes, such as straight lines, quadratics, trig graphs and exponential graphs.
If students haven't met graphs such as $y=a^x$ and $y=a^{-x}$ it might be fruitful to give them some time to experiment with graphical calculators to see what these graphs look like for different
values of the constant $a$.
Once students have some preliminary ideas about graphs which might fit, small groups could start to work on the spreadsheet, entering a possible equation and seeing how closely it matches the given
data, then using their knowledge of transformations of graphs to tweak their equation to get a closer match. Alternatively, they could experiment with graphical calculators to find graphs with the
right basic shape and then enter them into one copy of the spreadsheet displayed at the front of the class.
Ideally, different groups will come up with slightly different suggestions for functions, and this can stimulate discussion about how to decide which function most closely matches the data.
Key questions
What clues can we find from the axes and the points given to help us to guess a likely function?
How can we modify our guess once we've seen how closely it fits?
Does joining the points in order of increasing time help?
How do we decide when the fit is close enough?
Possible extension
Students could investigate and discuss the benefits of a least squares method of determining how close the fit is.
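For teachers who want a concrete starting point for this extension, here is a minimal sketch of the least-squares measure of fit (added here, not part of the original page; the candidate functions and data points are invented for illustration, and the choice of Haskell is arbitrary):

  -- Sum of squared residuals between observed points and a candidate function:
  -- the smaller the result, the closer the fit.
  sumSquares :: (Double -> Double) -> [(Double, Double)] -> Double
  sumSquares f points = sum [ (y - f x) ^ 2 | (x, y) <- points ]

  -- Comparing two candidate quadratics against the same data:
  fit1, fit2 :: Double
  fit1 = sumSquares (\x -> x ^ 2 + 1) [(0, 1.1), (1, 2.0), (2, 4.9)]  -- 0.02
  fit2 = sumSquares (\x -> x ^ 2)     [(0, 1.1), (1, 2.0), (2, 4.9)]  -- 3.02

Since fit1 is much smaller than fit2, the first candidate is the better fit by this criterion.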
Possible support
Graphs 1, 3 and 5 are the most straightforward functions to fit, so this is a good place to start.
Changing basis on an extension of a free Z-module.
Consider a finite-rank free $Z$-module $Y$. Let $c: Y \times Y \rightarrow Z$ be a $Z$-bilinear form. Assume that $c(y_1, y_2) + c(y_2, y_1)$ is even, for all $y_1, y_2 \in Y$. Then $c$ "incarnates"
an extension of $Z$-modules: $$0 \rightarrow \mu_2 \rightarrow \hat Y \rightarrow Y \rightarrow 0,$$ with a distinguished section. Here $\mu_2 = \{ \pm 1 \}$, and $\hat Y = Y \oplus \mu_2$ as a set;
define addition in this set by $$(y_1, \epsilon_1) + (y_2, \epsilon_2) = \left( y_1 + y_2, \epsilon_1 \epsilon_2 \cdot (-1)^{c(y_1, y_2)} \right).$$
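As a concrete illustration of this group law (my addition, not the poster's; representing elements of $Y$ as coordinate lists and taking $c$ to be the dot product are assumptions made purely for the example), one can prototype $\hat Y$ as follows:

  -- Elements of Y-hat: an element of Y together with a sign in mu_2
  -- (True stands for +1, False for -1).
  type YHat = ([Integer], Bool)

  -- An example Z-bilinear form; it is symmetric, so c(y1,y2) + c(y2,y1)
  -- is automatically even.
  c :: [Integer] -> [Integer] -> Integer
  c y1 y2 = sum (zipWith (*) y1 y2)

  -- The twisted addition (y1,e1) + (y2,e2) = (y1+y2, e1*e2*(-1)^c(y1,y2)):
  -- the sign flips exactly when c(y1,y2) is odd.
  addHat :: YHat -> YHat -> YHat
  addHat (y1, e1) (y2, e2) =
    (zipWith (+) y1 y2, (e1 == e2) == even (c y1 y2))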
First question: Does $\hat Y$ have a name in the literature? I know it's a special case of the construction of extensions by cocycles, etc., but maybe it has its own name? Do such abelian extensions
arise naturally? For example, if $T$ is the topological torus $(Y \otimes R) / Y$ with fundamental group $Y$, is there a natural manifold with fundamental group $\hat Y$ that occurs in the wild?
Now, many algebraists would dismiss these extensions, because they are "trivial". All extensions of $Y$ split, since $Y$ is a free Z-module. But the extension $\hat Y$ does not split canonically.
What interests me most is the "change-of-basis" formula for splittings. Namely, consider a (ordered) basis $(y_1, \ldots, y_r)$ of $Y$. This gives a splitting $\phi$, using the section above: define
$$\phi \left( a_1 y_1 + \cdots + a_r y_r \right) = a_1 \hat y_1 + \cdots + a_r \hat y_r.$$
Now consider another $Z$-basis $(y_1', \ldots, y_r')$ with change of basis matrix $A = (\alpha_i^k)$: $$y_i = \sum_k \alpha_i^k y_k'.$$ This gives another splitting $\phi': Y \rightarrow \hat Y$: $$\phi' \left( a_1 y_1' + \cdots + a_r y_r' \right) = a_1 \widehat{y_1'} + \cdots + a_r \widehat{y_r'}.$$
As any two splittings differ by an element of $Hom(Y, \mu_2)$, we have $\phi'(y) - \phi(y) \in \mu_2$ for all $y \in Y$. This difference is given on basis elements by the formula $$\phi'(y_i) - \phi(y_i) = (-1)^{E_i},$$ $$E_i = \sum_k \left( {\alpha_i^k} \atop 2 \right) c(y_k', y_k') + \sum_{1 \leq m < n \leq r} \alpha_i^m \alpha_i^n c(y_m', y_n').$$
Second, most important, question: Has anyone seen a formula like this before in other contexts? It involves nothing more than an invertible matrix $A \in GL(Y)$ and a symmetric bilinear form $C \in Hom(Y \otimes Y, Z / 2 Z)$. So what else does this linear algebraic quantity $E_i$ capture? Where else does $(-1)^{E_i}$ occur in the wild?
The group of splittings of a split extension is $H^1(Y,\mu_2)$, for any group $Y$ and any $Y$-module $\mu_2$. – Fernando Muro Aug 19 '12 at 0:41
I'm working with Z modules, so it is just $Hom(Y,\mu_2)$ here. – Marty Aug 19 '12 at 1:41
Yes, it is that. – Fernando Muro Aug 19 '12 at 2:02
True... I probably should have done that. The mu2 is left over from the context I was working in, as was the Z valued bilinear form. – Marty Aug 19 '12 at 2:35
Sample Exam #3
1. Heino Inc. hired you as a consultant to help them estimate their cost of capital. You have been provided with the following data: r[RF] = 5.0%; RP[M] = 5.0%; and b = 1.1. Based on the CAPM
approach, what is the cost of equity from retained earnings?
A. 10.50%
b. 10.71%
c. 10.88%
d. 11.03%
e. 11.14%
r[s] = 5% + (5%)*1.1 = 10.50%
2. P. Daves Inc. hired you as a consultant to help them estimate their cost of equity. The yield on the firm’s bonds is 6.5%, and Daves' investment bankers believe that the cost of equity can be
estimated using a risk premium of 4.0%. What is an estimate of Daves' cost of equity from retained earnings?
a. 9.77%
b. 10.02%
c. 10.19%
d. 10.33%
E. 10.50%
6.5% + 4% = 10.5%
3. You were recently hired by Hemmings Media, Inc., to estimate their cost of capital. You were provided with the following data: D[1] = $2.50; P[0] = $60; g = 7% (constant); and F = 5%. What is the
cost of equity raised by selling new common stock?
a. 11.02%
b. 11.20%
C. 11.39%
d. 11.58%
e. 11.74%
2.50/(60*0.95) + 7% = 4.39% + 7% = 11.39%
4. For a typical firm, which of the following is correct? All rates are after taxes, and assume the firm operates at its target capital structure.
a. r[d] > r[e] > r[s] > WACC.
b. r[s] > r[e] > r[d] > WACC.
c. WACC > r[e] > r[s] > r[d].
D. r[e] > r[s] > WACC > r[d].
e. WACC > r[d] > r[s] > r[e].
5. Maese Sisters Inc has been paying out all of its earnings as dividends, and hence has no retained earnings. This same situation is expected to persist in the future. The company uses the CAPM to
calculate its cost of equity. Its target capital structure consists of common stock, preferred stock, and debt. Which of the following events would reduce the WACC?
a. The flotation costs associated with issuing new common stock increase.
B. The market risk premium declines.
c. The company’s beta increases.
d. Expected inflation increases.
e. The flotation costs associated with issuing preferred stock increase.
6. Which of the following statements is CORRECT?
a. In the WACC calculation, we must adjust the cost of preferred stock (the market yield) because 70% of the dividends received by corporate investors are excluded from their taxable income.
b. We should use historical measures of the component costs from prior financings when estimating a company’s WACC for capital budgeting purposes.
c. The cost of new equity (r[e]) could possibly be lower than the cost of retained earnings (r[s]) if the market risk premium, risk-free rate, and the company’s beta all decline by a sufficiently
large amount.
d. The component cost of preferred stock is expressed as r[p](1 - T), because preferred stock dividends are treated as fixed charges, similar to the treatment of debt interest.
E. The cost of retained earnings is the rate of return stockholders require on a firm’s common stock.
7. If a typical U.S. company uses the same cost of capital to evaluate all projects, the firm will most likely become
A. Riskier over time, and its intrinsic value will not be maximized.
b. Riskier over time, but its intrinsic value will be maximized.
c. Less risky over time, and its intrinsic value will not be maximized.
d. Less risky over time, and its intrinsic value will be maximized.
e. There is no reason to expect its risk position or value to change over time as a result of its use of a single discount rate.
8. Blanchford Enterprises is considering a project that has the following cash flow data. What is the project's IRR? Note that a project's projected IRR can be less than the WACC (and even negative),
in which case it will be rejected.
Year: 0 1 2 3
Cash flows: -$1,000 $450 $450 $450
a. 16.20%
B. 16.65%
c. 17.10%
d. 17.55%
e. 18.00%
n = 3; PV = -1000; PMT = 450; FV = 0: Solve for i = 16.6487%, i.e. about 16.65%
9. Tapley Dental Associates is considering a project that has the following cash flow data. What is the project's payback?
Year: 0 1 2 3 4 5
Cash flows: -$1,000 $300 $310 $320 $330 $340
a. 2.11 years
b. 2.50 years
c. 2.71 years
d. 3.05 years
E. 3.21 years
Accumulate cash inflows until they reach $1,000: $930 is recovered after 3 years, so payback = 3 + 70/330 = 3.21 years.
10. Richards Enterprises is considering a project that has the following cash flow and WACC data. What is the project's NPV? Note that a project's projected NPV can be negative, in which case it will
be rejected.
WACC = 10%
Year: 0 1 2 3 4 5
Cash flows: -$1,000 $400 $395 $390 $385 $380
a. $478.74
B. $482.01 (Use the cash flow entries on your calculator with a rate of 10%)
c. $495.05
d. $507.98
e. $517.93
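For students who want to check answers like this without a financial calculator, here is a minimal sketch of the NPV definition (not part of the original exam; the choice of Haskell is arbitrary). The same function also covers Q8, since the IRR is the rate r at which npv r cfs equals zero:

  -- NPV = sum over t of CF_t / (1 + r)^t; the Year-0 flow is the cost.
  npv :: Double -> [Double] -> Double
  npv r cfs = sum [ cf / (1 + r) ^^ t | (t, cf) <- zip [0 ..] cfs ]

  q10 :: Double   -- evaluates to about 482.01
  q10 = npv 0.10 [-1000, 400, 395, 390, 385, 380]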
11. Which of the following statements is CORRECT?
a. The internal rate of return method (IRR) is generally regarded by academics as being the best single method for evaluating capital budgeting projects.
b. The payback method is generally regarded by academics as being the best single method for evaluating capital budgeting projects.
c. The discounted payback method is generally regarded by academics as being the best single method for evaluating capital budgeting projects.
D. The net present value method (NPV) is generally regarded by academics as being the best single method for evaluating capital budgeting projects.
e. The modified internal rate of return method (MIRR) is generally regarded by academics as being the best single method for evaluating capital budgeting projects.
12. Which of the following statements is CORRECT?
a. One defect of the IRR method is that it does not take account of cash flows over a project’s full life.
b. One defect of the IRR method is that it does not take account of the time value of money.
c. One defect of the IRR method is that it does consider the time value of money.
d. One defect of the IRR method is that it values a dollar received today the same as a dollar that will not be received until some time in the future.
E. One defect of the IRR method is that it assumes that the cash flows to be received from a project can be reinvested at the IRR itself, and that assumption is often not valid.
13. Which of the following statements is CORRECT? Assume that the project being considered has normal cash flows, with one outflow followed by a series of inflows.
a. A project’s regular IRR is found by compounding the initial cost at the WACC to find the terminal value (TV), then discounting the TV at the WACC.
b. A project’s regular IRR is found by compounding the cash inflows at the WACC to find the present value (PV), then discounting to find the IRR.
c. If a project’s IRR is less than the WACC, then its NPV will be positive.
D. A project’s IRR is the discount rate that causes the PV of the inflows to equal the project’s cost.
e. If a project’s IRR is positive, then its NPV must also be positive.
14. Malholtra Inc. is considering a project that has the following cash flow and WACC data. What is the project's MIRR? Note that a project's projected MIRR can be less than the WACC (and even
negative), in which case it will be rejected.
WACC: 10.00%
Year 0 1 2 3 4
Cash flows -$850 $300 $320 $340 $360
a. 14.08%
B. 15.65%
c. 17.21%
d. 18.94%
e. 20.83%
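The MIRR can be cross-checked the same way (again a sketch added here, not part of the exam): compound the inflows forward to a terminal value at the WACC, then solve for the rate that grows the initial cost to that terminal value.

  mirr :: Double -> Double -> [Double] -> Double
  mirr r cost inflows = (tv / cost) ** (1 / n) - 1
    where
      n  = fromIntegral (length inflows)
      -- Terminal value: the last inflow compounds for 0 years,
      -- the first for (n - 1) years.
      tv = sum [ cf * (1 + r) ^^ k | (k, cf) <- zip [0 ..] (reverse inflows) ]

  q14 :: Double   -- evaluates to about 0.1565, i.e. 15.65%
  q14 = mirr 0.10 850 [300, 320, 340, 360]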
15. You work for Alpha Inc., and you must estimate the Year 1 operating net cash flow for a proposed project with the following data. What is the Year 1 operating cash flow?
Sales $11,000
Depreciation $4,000
Other operating costs $6,000
Tax rate 35%
A. $4,650
b. $4,800
c. $4,950
d. $5,100
e. $5,250
EBIT = $11,000 - $4,000 - $6,000 = $1,000; after-tax earnings = $650. Add back depreciation: $650 + $4,000 = $4,650.
16. As a member of Gamma Corporation's financial staff, you must estimate the Year 1 operating net cash flow for a proposed project with the following data. What is the Year 1 operating cash flow?
Sales $33,000
Depreciation $10,000
Other operating costs $17,000
Interest expense $4,000
Tax rate 35%
a. $ 9,500
b. $10,600
c. $11,700
d. $12,800
E. $13,900
($33,000 - $27,000)*(1 - 0.35) + $10,000 = $13,900. (Interest expense is a financing cost and is excluded from operating cash flow.)
17. Big Air Services is now in the final year of a project. The equipment originally cost $20 million, of which 75% has been depreciated. Big Air can sell the used equipment today for $6 million, and
its tax rate is 40%. What is the equipment’s after-tax net salvage value?
a. $5,500,000
B. $5,600,000
c. $5,700,000
d. $5,800,000
e. $5,900,000
Book value = $5M; taxable gain on sale = $6M - $5M = $1M; tax on gain = $400,000.
Total after-tax salvage = $6,000,000 - $400,000 = $5,600,000
18. Which of the following is NOT a cash flow that should be included in the analysis of a project?
a. Changes in net operating working capital.
b. Shipping and installation costs.
c. Cannibalization effects.
d. Opportunity costs.
E. Sunk costs that have been expensed for tax purposes.
19. Which of the following statements is CORRECT?
A. Using MACRS depreciation rather than straight line would normally have no effect on a project’s total projected cash flows but would affect the timing of the cash flows and thus the NPV.
b. Under current laws and regulations, corporations must use straight line depreciation for all assets whose lives are 10 years or longer.
c. Corporations must use the same depreciation method (e.g., straight line or MACRS) for stockholder reporting and tax purposes.
d. Since depreciation is not a cash expense, it has no affect on cash flows and thus no affect on capital budgeting decisions.
e. Under MACRS depreciation rules, higher depreciation charges occur in the early years, and this reduces the early cash flows and thus lowers the projected NPV.
20. A company is considering a new project. The CFO plans to calculate the project’s NPV by first estimating the relevant cash flows for each year of the project’s life (the initial investment cost,
the annual operating cash flows, and the terminal cash flow), then discounting those cash flows at the company’s WACC. Which of the following factors should the CFO INCLUDE IN THE CASH FLOWS when
estimating the relevant cash flows?
a. All sunk costs that have been incurred relating to the project.
b. All interest expenses on debt used to help finance the project.
C. The investment in working capital required to operate the project, even if that investment will be recovered at the end of the project’s life.
d. Sunk costs that have been incurred relating to the project, but only if those costs were incurred prior to the current year.
e. Effects of the project on other divisions of the firm, but only if those effects lower the project’s own direct cash flows.
21. Millman Electronics will produce 60,000 stereos next year. Variable costs will equal 50% of sales, while fixed costs will total $120,000. At what price must each stereo be sold for the company to
achieve an EBIT of $95,000?
a. $6.57
b. $6.87
C. $7.17
d. $7.47
e. $7.77
60,000X – 30,000X – 120,000 = 95,000
30,000X = 215,000
X = $7.1667
22. Brandi Co. has an unlevered beta of 1.10. The firm currently has no debt, but is considering changing its capital structure to be 30% debt and 70% equity. If its corporate tax rate is 40%, what
is Brandi's levered beta?
a. 1.2549
B. 1.3829
c. 1.5764
d. 1.6235
e. 1.7458
Levered Beta = Unlevered Beta * [1 + (1-t)(D/E)] = 1.10 * [1 + 0.60*(0.30/0.70)] = 1.3829
23. If a stock’s dividend is expected to grow at a constant rate of 5% a year, which of the following statements is CORRECT? The stock is in equilibrium.
a. The expected return on the stock is 5% a year.
b. The stock’s dividend yield is 5%.
c. The price of the stock is expected to decline in the future.
d. The stock’s required return must be equal to or less than 5%.
E. The stock’s price one year from now is expected to be 5% above the current price.
24. The firm’s target capital structure is consistent with which of the following?
a. Maximum earnings per share (EPS).
b. Minimum cost of debt (r[d]).
c. Highest bond rating.
d. Minimum cost of equity (r[s]).
E. Minimum weighted average cost of capital (WACC).
25. Which of the following statements is correct?
A. The capital structure that maximizes stock price is also the capital structure that minimizes the weighted average cost of capital (WACC).
b. The capital structure that maximizes stock price is also the capital structure that maximizes earnings per share.
c. The capital structure that maximizes stock price is also the capital structure that maximizes the firm’s times interest earned (TIE) ratio.
d. Increasing a company’s debt ratio will typically reduce the marginal costs of both debt and equity financing; however, it still may raise the company’s WACC.
e. If Congress were to pass legislation that increases the personal tax rate, but decreases the corporate tax rate, this would encourage companies to increase their debt ratios.
Euclidean Norm and Maximum Norm
October 6th 2009, 08:01 PM
Euclidean Norm and Maximum Norm
Q: Explain why the Euclidean norm and maximum norm (any two norms on Rn for that matter) result in the same open sets. It follows from this that a sequence Xk in Rn will converge with respect to
the maximum norm if and only if it will converge with respect to the Euclidean norm.
I have absolutely no idea how to approach this problem.
Any suggestions?
Thanks as always
October 7th 2009, 04:26 AM
Let A(w,r) be the open ball with center w and radius r wrt the euclidean norm. Prove this set is open also wrt the max norm: take any point X = (x_1,...,x_n) in A(w,r) and suppose w = (w_1,...,w_n), so SUM(x_i - w_i)^2 < r^2. But we also have SUM(y_i - x_i)^2 <= n*max(y_i - x_i)^2, so if we choose s wisely, e.g. s = (r - ||X - w||)/Sqrt(n), then any Y with max|y_i - x_i| < s satisfies... etc.; try to end the argument by yourself. The other direction is similar.
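The comparison being used here is the standard two-sided inequality (stated for reference; this addition is not part of the original thread):

  \[ \|x\|_\infty \le \|x\|_2 \le \sqrt{n}\,\|x\|_\infty \qquad \text{for all } x \in \mathbb{R}^n, \]

so every Euclidean ball contains a max-norm ball around each of its points and vice versa, which is exactly why the two norms generate the same open sets and hence the same convergent sequences.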
College Algebra Tutors
Palos Verdes Peninsula, CA 90274
Always applying a holistic approach when tutoring!
...I make learning Spanish a fun experience by connecting with my students on an effective way after evaluating their needs. My major is Civil Engineering which adds to my extensive teaching
experience. I started tutoring when I was 15. My students have stayed in...
Offering 10+ subjects including algebra 2
The Goat In the Field Problem
Date: 05/24/97 at 11:02:38
From: Anonymous
Subject: The Goat In the Field Problem
This problem was first posed by a colleague during my Engineering
Degree some 15 years ago. The problem looked simple enough but it was
stipulated that calculus must NOT be used. I wouldn't like to guess
how many hours I've spent on it. I decided it was probably a standard,
recognised problem but could not find it in any recreational
mathematics book. I eventually decided there was probably some
'Euclidean' geometric rule I'd forgotten since school days but basic
research has got me nowhere. Here's how the problem was presented:
A farmer owns a circular field of grass (radius=r). He tethers a goat
via a length of rope (R) to the circumference of the field. What ratio
r/R must the farmer choose so that the goat can only eat half the area
of grass?
Many thanks.
Date: 05/25/97 at 16:59:13
From: Doctor Anthony
Subject: Re: The Goat In the Field Problem
Let O be the center of the circle of the goat's grazing range and let
C be the point on the circumference where the goat is tethered. Let
CA and CB be the chords of length R giving the extreme positions on
the circumference where the goat can reach. Angle AOC = angle BOC =
phi (radians). We shall first calculate the area of the segment of
circular field cut off by the chords CA and CB.
The area cut off is found by subtracting the area of triangle OAC from
the sector OAC:
= (1/2)r^2.phi - (1/2)r^2.sin(phi)
= (1/2)r^2[phi - sin(phi)]
The area cut off by both AC and BC is double this: r^2[phi -sin(phi)].
This area must be added to the area of the sector of the circle of
radius R between the radii CA and CB. By simple geometry the angle
ACB = (1/2)(2.pi-2phi) = pi-phi
Area of sector CAB = (1/2)R^2(pi-phi)
Total area available for the goat
= r^2[phi-sin(phi)] + (1/2)R^2(pi-phi)
This must equal half the area of the field (1/2)pi.r^2
r^2[phi-sin(phi)] + (1/2)R^2(pi-phi) = (1/2)pi.r^2
Dividing through by (1/2)r^2:
(1) 2[phi-sin(phi)] + (R/r)^2(pi-phi) = pi
Now we can get another relationship between r and R by drawing a
perpendicular from O to AC to bisect AC. This shows that:
r.sin(phi/2) = R/2 and R/r = 2sin(phi/2)
Equation(1) can be written:
2[phi-sin(phi)] + 4sin^2(phi/2)[pi-phi] - pi = 0
This can be solved for phi by Newton Raphson or by a handy TI-92
calculator to give:
phi = 1.23589 radians
Then R/r = 2sin(.617945) = 1.158723
-Doctor Anthony, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
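For readers who want to reproduce the root-finding step, here is a
minimal Newton-Raphson sketch (not part of the original exchange; the
starting guess of 1.0 and the stopping tolerance are my own choices):

  -- Solve f(phi) = 2(phi - sin phi) + 4 sin^2(phi/2) (pi - phi) - pi = 0.
  f, f' :: Double -> Double
  f  x = 2 * (x - sin x) + 4 * sin (x / 2) ^ 2 * (pi - x) - pi
  f' x = 2 * (1 - cos x) + 2 * sin x * (pi - x) - 4 * sin (x / 2) ^ 2

  newton :: Double -> Double
  newton x | abs dx < 1e-12 = x'
           | otherwise      = newton x'
    where dx = f x / f' x
          x' = x - dx

  phi, ratio :: Double
  phi   = newton 1.0          -- converges to about 1.23589
  ratio = 2 * sin (phi / 2)   -- R/r, about 1.158723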
Date: 05/25/97 at 06:19:58
From: Doctor Sarah
Subject: Re: The Goat In the Field Problem
For more on this question, see this "Classic Problem from the
Dr. Math FAQ:
Grazing Animals
You'll also find more answers to problems about tethering and
grazing, some with illustrative diagrams, by searching the Dr. Math
archives for the words "goat" and "cow" (just the word, not the plural).
-Doctor Sarah, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: 02/13/2000 at 22:36:41
From: David Gillies
Subject: Re: The Goat in the Field Problem
I was browsing the Dr. Math archives today and came across this old
chestnut. You may be interested to know that two friends of mine,
Simon Shepherd and Peter van Eetvelt, came up with a completely
closed-form, analytic solution. It turns out this problem is very
germane to communications physics, for instance determining the
optimum siting of mobile telephone base stations (you can imagine the
circles as the coverage areas of the radio signals). It also is useful
in determining the optimum placement for a jammer transmitter so as to
cause maximal disruption.
Basically the result is this:
Consider two circles of radii a and b separated by a distance c
between their centres, which are located at (-c/2,0) and (+c/2,0)
respectively. The area of intersection is given by
I(a,b,c) = a^2 arccos((c^2 - b^2 + a^2)/(2ac))
           + b^2 arccos((c^2 + b^2 - a^2)/(2bc))
           - (1/2) sqrt(2(a^2 b^2 + b^2 c^2 + c^2 a^2) - a^4 - b^4 - c^4)
Note that I(a,b,c) = I(b,a,c), as expected, and the *real part* of this
integral gives the correct answer in all three of the cases:
1) the circles do not overlap and I = 0
2) the smaller circle is entirely contained in the larger and
I = Pi * b^2 (or Pi * a^2)
3) the circles partially overlap.
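For the partial-overlap case, the formula transcribes directly (a sketch
I have added, not part of the letter; the complex-valued behaviour that
handles the other two cases is deliberately left out):

  -- Area of intersection of two circles of radii a and b whose centres
  -- are c apart (valid only when the circles partially overlap).
  intersectArea :: Double -> Double -> Double -> Double
  intersectArea a b c =
      a ^ 2 * acos ((c ^ 2 - b ^ 2 + a ^ 2) / (2 * a * c))
    + b ^ 2 * acos ((c ^ 2 + b ^ 2 - a ^ 2) / (2 * b * c))
    - 0.5 * sqrt (2 * (a^2 * b^2 + b^2 * c^2 + c^2 * a^2) - a^4 - b^4 - c^4)

With the field radius 1, the rope length 1.158728..., and the centres one
field-radius apart, intersectArea 1 1.158728 1 comes out to about 1.5708,
i.e. pi/2, which is half the unit field, consistent with the answer above.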
As far as I know, Shepherd and Eetvelt's proof is the only closed-form
solution that yields the correct result in all three cases. They used it
to calculate very accurately that the length L of the goat's rope such
that he can graze exactly half the field of radius R is
L = 1.15872847301812171... R.
The title of their paper is as follows:
S. J. Shepherd and P. W. J. van Eetvelt, "On Goats and Jammers,"
Bulletin of the IMA, 31, (5-6), May 1995, pp. 87-89.
Comprehensive comprehensions
As part of his final year work at Cambridge, Max Bolingbroke worked on implementing the "Comprehensive Comprehensions" described in a paper available here in GHC. A patch with the complete
functionality described here was integrated into GHC's HEAD branch on 20 December 2007.
Ordering Syntax
The paper uses a syntax based around the new keywords "order" and "by". For example:
[ (name, salary)
| (name, dept, salary) <- employees
, salary > 70
, order by salary ]
It has been noted that introducing a new keyword may not be desirable, especially given the fact that you can use "order" to achieve things which aren't really ordering:
[ (the dept, sum salary)
| (name, dept, salary) <- employees
, order by salary
, order by salary < 50 using takeWhile
, order using take 5 ]
For those reasons, Max's implementation was initially based around the syntax proposed in section 6.1 of the paper:
[ (the dept, sum salary)
| (name, dept, salary) <- employees
, then sortWith by salary
, then takeWhile by salary < 50
, then take 5 ]
This reuses the "then" keyword and is probably less confusing. However, no final decision has been made on the optimal syntax: in particular it might be better to write:
[ (the dept, sum salary)
| (name, dept, salary) <- employees
, then sortWith using salary
, then takeWhile using salary < 50
, then take 5 ]
Grouping Syntax
Some of the same concerns about keyword introduction apply here, but ordering is being implemented first so not much thought has been given to syntax improvements. The main suggestion from the paper
[ (the dept, sum salary)
| (name, dept, salary) <- employees
, group by dept ]
We could equally well substitute "using" for the "by" if desired:
[ (the dept, sum salary)
| (name, dept, salary) <- employees
, group using dept ]
Or we could even do an implicit call to "the" on the grouped-by variables:
[ (the_dept, namesalary)
| (name, dept, salary) <- employees
, the_dept <- group by dept
where (name,salary) -> namesalary
We would be interested in hearing peoples thoughts on these issues.
SPJ, after looking at the issues above, has decided that Max's implementation should at least initially be based on syntax like this:
then group by dept using groupWith
then group by dept -- The function groupWith is implicit here
then group using runs 3 -- The runs function has type [a] -> [[a]] rather than the (a -> t) -> [a] -> [[a]] type required if you used "by"
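For reference, here is a sketch of the decided syntax in a complete module (my example, not from the original page; it assumes the TransformListComp extension and the groupWith/the helpers from GHC.Exts as they eventually shipped in GHC):

{-# LANGUAGE TransformListComp #-}
import GHC.Exts (groupWith, the)

employees :: [(String, String, Int)]
employees = [("Simon", "CS", 80), ("Erik", "CS", 100), ("Phil", "Math", 40)]

-- Group rows by department, then collapse each group.
totals :: [(String, Int)]
totals = [ (the dept, sum salary)
         | (name, dept, salary) <- employees
         , then group by dept using groupWith ]

main :: IO ()
main = print totals   -- [("CS",180),("Math",40)]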
Bracketing Syntax
Due to the generality added to comprehensions by the paper, it now makes sense to allow bracketing of qualifiers. An example from the paper is:
xs = [1,2]
ys = [3,4]
zs = [5,6]
p1 =
[ (x,y,z)
| ( x <- xs
| y <- ys )
, z <- zs ]
p2 =
[ (x,y,z)
| x <- xs
| ( y <- ys
, z <- zs ) ]
This results in:
p1 = [(1,3,5), (1,3,6), (2,4,5), (2,4,6)]
p2 = [(1,3,5), (2,3,6)]
Unfortunately, there is a practical problem with using brackets in this way: doing so causes a reduce/reduce conflict in the grammar. Consider this expression:
[foo | (i, e) <- ies]
When the parser reaches the bracket after "e" it is valid to either reduce "(i, e)" to a pair of qualifiers (i.e. i and e are treated as guard expressions), OR to reduce it to the tuple expression
(i, e) which will be later converted to a pattern. There are a number of alternative ways we could solve this:
• Disallow bracketing of qualifiers altogether!
□ This keeps the concrete syntax simple and should cover all common use cases
□ It does reduce the composability of the qualifier syntax rather drastically however
• Keep bracketing in this manner but use type information to resolve the ambiguity
□ I will need to change the parser to consider qualifiers as expressions so that we can parse without any reduce/reduce conflicts
□ We can then always use type information to determine which reading is correct, because guards are always boolean, and so can be distinguished from tuples as required
□ Might have negative implications on the readability of some error messages :(
□ If the parser finds it hard to understand this syntax, you can argue that any human reader would too and hence we should look for something less ambiguous
• Introduce new syntax to allow this idiom to be expressed unambiguously. Some examples of what we could use are below:
-- 1) A new keyword
[ foo | x <- e,
nest { y <- ys,
z <- zs },
x > y + 3 ]
-- 2) Trying to suggest pulling things out of a sublist
-- without having to mention binders
[ foo | x <- e,
<- [ .. | y <- ys,
z <- zs ],
x > y + 3 ]
-- 3) New kind of brackets
[ foo | x <- e,
(| y <- ys,
z <- zs |),
x > y + 3 ]
-- 4) Variation on 2), slightly more concise
[ foo | x <- e,
<- [ y <- ys,
z <- zs ],
x > y + 3 ]
-- 5) Another variation on 2), moving the ".." into
-- the pattern rather than the comprehension body
[ foo | x <- e,
.. <- [ y <- ys,
z <- zs ],
x > y + 3 ]
This functionality was implemented and working, but owing to these syntactic difficulties, support was dropped.
Extending To Arbitrary Monads
On the paper talk page, Michael Adams has outlined how the new idioms could be extended to arbitrary monads. It looks very nice theoretically, but before we consider actually implementing this we
need to know if anyone has a use case for the syntax. To demonstrate the kind of thing that this would make possible, consider the following example from Michael:
do a <- ma
b <- mb
c <- mc
sort by (b, c) using foo
d <- md
return (a, b, c, d)
It would de-sugar to:
(foo fst (do a <- ma
             b <- mb
             c <- mc
             return ((b, c), (a, b, c)))
 ) >>= \result ->
do let (a, _, _) = snd result
       (_, b, _) = snd result
       (_, _, c) = snd result
   d <- md
   return (a, b, c, d)
Where we have:
foo :: forall a. (a -> t) -> m a -> m a
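At the list monad, functions of this shape are already familiar; a minimal sketch (note that the forall above glosses over the Ord constraint a real sort would need):
import GHC.Exts (sortWith)   -- sortWith :: Ord b => (a -> b) -> [a] -> [a]

fooList :: Ord t => (a -> t) -> [a] -> [a]
fooList = sortWith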
Some other possible additions to Max's implementation, given the need:
• Add associativity back in
• Add desugaring support for parallel arrays as well as lists: every other part of the compiler should already handle these seamlessly | {"url":"https://ghc.haskell.org/trac/ghc/wiki/SQLLikeComprehensions","timestamp":"2014-04-18T21:46:10Z","content_type":null,"content_length":"17453","record_id":"<urn:uuid:13821366-24fe-49a0-8d54-81b56ca19612>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00096-ip-10-147-4-33.ec2.internal.warc.gz"} |
is there any easy website where i can find the beta values updated from time to time?
Thanks Daniel, one more question: which beta value should be taken for a DCF valuation? Average beta, unlevered beta, or unlevered beta corrected for cash? I am preparing one, so your answer will be helpful. Thank you!
Sorry, I didn't see your second question before. Which beta should be used depends on the multiple, that is, on which cash flow you are discounting. Depending on whom the cash flows belong to, the discount rate must differ, so the beta you use in a CAPM approach will be one or another. Betas are also used for valuing a non-public company by comparison with a similar public company (or a group of similar ones). You can use the average beta if the debt levels are similar; if not, you should de-lever the comparable company's beta and use the unlevered beta. It's pretty complex and I don't know good resources on the web to suggest; you can start here: http://en.wikipedia.org/wiki/Hamada%27s_equation and then google for more. Hope this helps, despite my confusing response :D
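For reference, the unlevering relationship behind that advice (Hamada's equation, from the article linked above) is, in standard notation:
\beta_L = \beta_U \left[ 1 + (1 - T)\,\frac{D}{E} \right]
where \beta_L is the levered (equity) beta, \beta_U the unlevered (asset) beta, T the corporate tax rate, and D/E the debt-to-equity ratio.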
| {"url":"http://openstudy.com/updates/50657f40e4b0f9e4be2814c4","timestamp":"2014-04-20T18:47:43Z","content_type":null,"content_length":"35978","record_id":"<urn:uuid:11f5477a-b427-4970-a6c4-2cab78b476a2>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
Program and Abstracts
Tuesday, February 1, 2000
11:00-11:30 - Opening address
Claudine Simson, V.P., Disruptive Technology, Network and Business Solutions, Nortel Networks
11:30-12:30 - Modern data analysis and its application to Nortel Networks data
Otakar Fojt, The University of York
In this talk we outline an approach to the analysis of sequential manufacturing and telecom traffic data from industry using techniques from nonlinear dynamics. The aim of the talk is to show the
potential of nonlinear techniques for processing real world data and developing new advanced methods of commercial data analysis.
The basic idea is to consider a factory as a dynamical system. A process in the factory generates data, which contains information about the state of the system. If it is possible to analyse this
data in such a way that knowledge of the system is increased, control and decision-making processes can be improved. This will result, if applied, in a basis of competitive advantage to the factory.
First, we give details of the general idea and the type of recorded data together with the necessary preprocessing techniques. We follow this with a description of our analysis. Our approach consists
of state space reconstruction, applications of principal component analysis and nonlinear deterministic prediction algorithms. The talk will conclude with our results and with suggestions for future work.
1:30-2:00 - The need for real-time data analysis in telecommunications
Chris Hobbs, Sr. Mgr., System Architecture, Nortel Networks
A telecommunications network typically comprises many independently-controlled layers: from the physical fibre interconnectivity, through wavelengths, STS connexions, ATM Virtual Channels, MPLS Paths
to the end-to-end connexions established for user services. Each of these layers generates statistics that, in a large network, may easily be measured in tens of gigaBytes per hour.
Traditionally, the layers have been controlled individually since the complexity of "tuning" a lower layer to the traffic it is carrying has been too great for human operators (particularly where the
carried traffic itself has complex statistics) and since the work involved in moving connexions (particularly fibres and wavelengths) has been prohibitive.
Technological advances in Optical Switches, capable of logically relaying fibre or wavelengths in micro-seconds, have made flexible network rebalancing possible and Carriers, the owners of these
large networks, are demanding lower costs by combining layers and exploiting this new agility. In order to address this problem, the Terabytes of data being extracted daily from the large networks
need to be analysed: initially statically to determine the gross inter-related behaviours, and then dynamically to detect and react to changing traffic patterns.
2:30-3:30 - Noise reduction for human speech using chaos-like features
Holger Kantz, Max-Planck-Institut für Physik komplexer Systeme
A local projective noise reduction scheme, originally developed for low-dimensional stationary signals, is successfully applied to human speech. This is possible by exploiting properties of the
speech signal which mimic structure exhibited by deterministic chaotic systems. In high-dimensional embedding spaces, the strong non-stationarity is resolved as a sequence of different dynamical
regimes of moderate complexity. This filtering technique does not make use of the spectral contents of the signal and is far superior to the Ephraim-Malah adaptive filter.
4:00-5:00 - Scaling phenomena in telecommunications
Murad Taqqu, Boston University (Lecture co-sponsored by Dept. of Statistics, University of Toronto)
Ethernet local area network traffic appears to be approximately statistically self-similar. This discovery, made about eight years ago, has had a profound impact on the field. I will try to explain
what statistical self-similarity means and how it is detected. I will also indicate how its presence can be explained physically, by aggregating a large number of "on-off" renewal processes, whose
distributions are heavy-tailed. As the size of the aggregation becomes large, then, after rescaling, the behavior turns out to be the Gaussian self-similar process called fractional Brownian motion.
If, however, the rewards instead of being 0 and 1 are heavy-tailed as well, then the limit is a stable non-Gaussian process with infinite variance and dependent increments. Since linear fractional
stable motion is the stable counterpart of the Gaussian fractional Brownian motion, a natural conjecture is that the limit process is linear fractional stable motion. This conjecture, it turns out,
is false. The limit is a new type of infinite variance self-similar process.
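For readers unfamiliar with the term, the standard definition Taqqu alludes to can be stated in one line: a process X is self-similar with Hurst exponent H if
X(at) \stackrel{d}{=} a^{H} X(t) \quad \text{for every } a > 0,
that is, rescaling time by a rescales the process by a^H in distribution; H = 1/2 gives ordinary Brownian motion, while measured network traffic typically shows H well above 1/2.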
Wednesday, February 2, 2000
9:30-10:30 - Electrical/Biological networks of nonlinear neurons
Henry Abarbanel, Institute for Nonlinear Science at UCSD, San Diego
Using analysis tools for time series from nonlinear sources, we have been able to characterize the chaotic oscillations of individual neurons in a small biological network that controls simple
behavior in an invertebrate. Using these characteristics, we have built computer simulations and simple analog electronic circuits, which reproduce the biological oscillations. We have performed
experiments in which biological neurons are replaced by the electronic neurons retaining the functional behavior of the biological circuits. We will describe the nonlinear analysis tools (widely
applicable), the electronic neurons, and the experiments on neural transplants.
11:00-11:30 - E-commerce and data mining challenges
Weidong Kou, IBM Centre for Advanced Studies
E-commerce over Internet is having a profound impact on the global economy. Goldman, Sachs & Co. estimates B2B e-commerce revenue alone will grow to $1.5 trillion (US) over the next five years.
Electronic commerce is becoming a major channel for conducting business, with increasing number organizations developing, deploying and installing e-commerce products, applications and solutions.
With rapid e-commerce growth come many challenges: for example, how to analyze e-commerce data and provide an organization with meaningful information to improve its product and service offerings to target customers, and how to group the millions of web users who access a web site so that the organization can serve each group better while reducing business costs and increasing revenue. These challenges bring many opportunities for data mining researchers to develop better intelligent algorithms and systems that solve practical e-commerce problems. In this talk, we will use IBM Net.Commerce as an example to explain e-commerce development and the challenges we face today.
11:30-12:00 - Occurrence of ill-defined probability distribution in real-world data
John Hudson, Advisor, Radio Technology, Nortel Networks
In many communications problems the statistics of the data, communication channels, and behaviour of users is ill defined and not handled well by the simpler concepts in classical probability theory.
We can have data with alpha-stable (infinite variance) characteristics, long-tailed and large variance log normal distributions, self similarity in the time domain, and so on. If the higher moments
of the underlying distributions do not exist or have disproportionate values then laws of large numbers and the central limit theorem may not be safely applied to a surprising number of problems. The
behaviour of some control mechanisms can begin to take on a chaotic appearance when driven by such data.
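To make the "infinite variance" point concrete: a non-Gaussian alpha-stable law with index 0 < \alpha < 2 has power-law tails,
P(|X| > x) \sim c\, x^{-\alpha} \quad (x \to \infty),
so E|X|^p is finite only for p < \alpha; in particular the variance (p = 2) does not exist, which is exactly what invalidates naive appeals to the central limit theorem.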
In this talk, some of the properties of data, channels and systems that are confronting workers in the communication field are discussed. It is illustrated with examples taken from network data
traffic, Internet browsing, radio propagation, video images, speech statistics and so on.
1:30-2:30 - The analysis of experimental time series
Tom Mullin, The University of Manchester
We will discuss the application of modern dynamical systems time series analysis methods to data from experimental systems. These will include vibrating beams, nonlinear oscillators and physiological
measures. The emphasis will be placed on obtaining quantitative estimates of the essential dynamics. We will also describe the application of data synergy methods to multivariate data.
2:30-3:00 - Fuzzy-pharmacology: Rationale and applications
Beth Sproule, Faculty of Pharmacy and Department of Psychiatry Psychopharmacology, SunnyBrook Health Sciences Centre, Toronto
Pharmacological investigations are undertaken in order to optimize the use of medications. The complexity and variability associated with biological data have prompted our explorations into the use of fuzzy logic for modeling pharmacological systems. Fuzzy logic approaches have been used in other areas of medicine (e.g., imaging technologies, control of biomedical devices, decision support
systems), however, their uses in pharmacology are incipient. The results of our preliminary studies will be presented in which we assessed the feasibility of fuzzy logic: a) to predict serum lithium
concentrations in elderly patients; and b) to predict the response of alcohol-dependent patients to citalopram in attempting to reduce their drinking. Since then, many further projects have evolved.
Approaches to this line of investigation will be presented.
3:30-4:30 - Geospatial backbones of environmental monitoring programs: the challenges of timely data acquisition, processing and visualization
Chad P. Gubala, Director, The Scientific Assessment Technologies Laboratory University of Toronto
When considering ‘environmental’ issues or legalities, a general and useful description of a pollutant is an element or entity in the wrong place at the wrong time and perhaps in the wrong amount.
Prior to the establishment of cost-effective global positioning, monitoring the fate and transport of environmental pollutants was limited to reduced scale and statistically based sampling programs.
Whole systems models developed from parcels of environmental studies have been limited in predictive capability due to unnoticed attributes, undocumented synergies or antagonisms and un-quantifiable
spatial and temporal variances.
Advances in the areas of commercial geospatial technologies and high-speed sensors arrays have now offered the possibility of assessing a whole ecosystem in near real time and in a spatially complete
manner. This capacity should then greatly improve quantitative environmental modeling and the adaptive management process, further ‘tuning’ the balance between global environments and economies.
However, the promise of increased knowledge about our natural resources is now limited by our capacity to move the data collected from integrated geopositioning and sensor systems into meaningful
management products. This talk describes these limitations and addresses the needs for developments in the areas of real time analytical protocols.
4:30-5:00 - Data mining and its challenges in the banking industry
Chen Wei Xu, Manager, Statistical Modeling, Customer Knowledge Management, Bank of Montreal
Thursday, February 3, 2000
9:30-10:30 - Elements of fuzzy system modeling
I.B. Turksen, University of Toronto
In most system modeling methodologies, we attempt to find out, in an inductive manner, how a particular system behaves. That is, we essentially try to determine how the input factors affect the
performance measure of our concern. There are at least three approaches to system modeling: (1) personal experience, (2) expert interviews and teachings, and (3) data mining with historical data.
In all these approaches, there are two fundamental theoretical base structures for system modeling: (1) classical two-valued set and logic theory based functional analyses, and/or (2) the novel (35-year-old) infinite (fuzzy) valued set and logic based super functional analyses. Furthermore, there are two basic learning methods in these approaches: (1) unsupervised learning and (2) supervised learning. The basic difference between them is that the first has no goal whereas the second does. Generally, the goal of supervised learning is to minimize the difference between the model result and the actual.
In classical two-valued set and logic based functional analyses, the world and its systems are seen through the two-valued, black-and-white, restricted view of what are called clear patterns. Unfortunately, first, the two-valued dichotomy forces one to make arbitrary choices when there are many alternatives to choose from. Secondly, the functional view can, by its very definition, only represent many-to-one mappings. Thirdly, combinations of variables are assumed to be additive and multiplicative, leading to a linear superposition schema in the functional representation of systems. In this view, logical "OR-ness" is simply mapped to "algebraic plus" and "AND-ness" to "algebraic multiplication". Fourthly, imprecision in data is generally assumed to originate from random occurrences.
In fuzzy (infinite) valued set and logic based super functional analyses, by contrast, the world and its systems are seen through information granules which admit an unrestricted view of fuzzy patterns. Fortunately, first, we are not forced to make arbitrary choices but have the freedom to choose the gradation that is appropriate for a given situation. Secondly, the super functional view allows us to make many-to-many mappings. That is, membership functions are identified to specify patterns via fuzzy cluster analyses; we can then establish cluster-to-cluster mappings over these functions, which gives us super functional representations. Thirdly, combinations of variables are generally superadditive or subadditive, requiring highly nonlinear representations. In fuzzy theory there are infinitely many ways to represent "AND-ness" (conjunction) and "OR-ness" (disjunction), depending on context and the behavior of a given system. Fourthly, imprecision in data is generally deterministic, owing to the limitations of our measurement devices.
In our integrated fuzzy system modeling approach, we first use fuzzy clustering techniques to learn patterns with fuzzy scatter matrices and diagrams to determine the essential fuzzy clusters, i.e.,
the effective rules of system behavior. This is an unsupervised learning method. Next we fit membership functions to these clusters. As well we determine significant and critical variables that
affect the system behavior drastically and moderately, etc.
Later, we apply supervised learning to determine the nonlinear operators that combine the fuzzy clusters in many to many maps of input and output variables in order to achieve minimum system model
error. In this supervised learning we also implement compensation and compromise between the extreme values of formulas that specify combination of concepts and hence the appropriate combination of
variables as well as alternate inference schemas.
Real-life system model building examples include: (1) a continuous caster model that attempts to balance tardiness of customer delivery due dates versus mixed grade steel production and (2)
pharmacological models that attempt to determine the effects of medication on humans. Simulated system model building examples include: (1) utilization of Internet data links, (2) analyses of traffic
characteristics, and (3) discard rate prediction.
11:00-12:00 - A Steel Industry Viewpoint on Fuzzy Technology -Scheduling Analysis Application
Michael Dudzic, Manager, Process Automation Technology, Dofasco Inc.
This presentation will discuss the experiences in the use of Fuzzy Expert system technologies as it was applied in a proof-of-concept project looking at 2 specific issues in scheduling the #1
Continuous Caster at Dofasco. This talk complements I. B. Turksen’s talk on Elements of Fuzzy System Modeling.
The Application of Multivariate Statistical Technologies at Dofasco
This presentation will discuss the experiences in the use of Multivariate Statistics (Principle Component Analysis and Partial Least Squares) in applications at Dofasco. The focus example will be the
on-line monitoring system at the #1 Continuous Caster.
1:00-2:00 - Recent developments in decision tree models
Hugh Chipman, University of Waterloo
Decision trees are an appealing predictive model because of their interpretability and flexibility. In this talk, I will outline some recent developments in decision tree modeling, including
improvements in model search techniques, and enrichments to the tree model, such as linear models within terminal nodes.
2:30-3:30 - A hybrid predictive model for database marketing
Zhen Mei, Generation 5
We discuss a simple hybrid approach for predicting response rate in mailing campaigns and for predicting certain demographic and expenditure characteristics in customer database. This method is based
on cluster analysis and predictive modeling. As an example we model home ownership for the State of New York.
Missing value filling
Wenxue Huang , Generation 5
The talk is about the missing value filling methodology and software that are being developed by Generation 5 and focused on the mathematics for target data being interval-scaled. A local-and-global
(or vertical-and-horizontal) balanced approach in a multivariate and a large database setting will be discussed. The methodology and software may apply to doing prediction: filling in missing values
is equivalent to predicting instant target values based on reliable complete historical records and current incomplete input.
4:00-5:00 - Challenges in the development of segmentation solutions in the banking industry and a genetic algorithms approach
Chris Ralph, Senior Manager Market Segmentation, Bank of Montreal
The Bank of Montreal team is in the process of building market segmentation solutions for a few different lines of business using syndicated survey data. The dataset consists of 4,200 responses from
households across Canada (geographically unbiased sample), and contains detailed information on their financial holdings across all institutions, as well as channel usage, banking habits, and
household profile information. The process we typically follow in the development of a segmentation solution consists of the following steps:
1) Standard preprocessing stuff (treating outliers, missing values, standardization.) --> 3-5 days
2) Data reduction via factor analysis, PCA, or simple cross-correlations to help avoid redundancy in the cluster runs --> 2-3 days
3) Brainstorming sessions with the lines of business to help us understand key business issues, and generate a list of potential driver variables --> 1-2 weeks
4) Alternative cluster runs using the brainstorming suggestions and data reduction output to generate potential solutions through trial and error. --> 2-4 weeks
The evaluation of solutions in Step 4 involves making trade-offs between the number of clusters, cluster size, cluster overlap, and the degree to which the current solution meets the needs of the
business as determined through the brainstorming sessions. This is usually a painful process that relies heavily on the experience of the analyst to bridge the gap between cluster solution statistics
and relevance to the business. Given the highly manual nature of this task, we can only evaluate a very small subset of the universe of possible solutions, and different analysts will generate very
different solutions.
The discussion will focus on the development of an objective function which captures both business rules and cluster statistics, and which allows for the evaluation and ranking of a much larger
number of potential solutions. The elements of the objective function will be described in fairly simple terms, which apply to any segmentation problem, and show how genetic algorithms may be used to
“evolve” potential solutions. An open discussion will be encouraged of ways to improve the encoding of the problem and the objective function, as well as a discussion of the challenges associated
with the integration of business rules. There are also plenty of issues surrounding the use of genetic algorithms to help optimize the search through the space of possible solutions.
The current objective function captures the business rules simply by measuring the average variance of key "business driver" variables across the clusters, where these variables have been selected ahead of time in cooperation with the line of business. The higher the variance of these variables across the segments, the more distinct and relevant the clusters should be. Average cluster overlap is calculated by building n-dimensional hyperspheres (where n = # of cluster drivers) around the centroids of the clusters, where the radius of the hypersphere is between 2 and 3 RMS standard deviations. Overlap is defined as occurring when any single observation falls within the hypersphere of a cluster which it has not been assigned to. Cluster size may be integrated into the objective function, where solutions are penalized for having clusters that are either too large or too small.
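In symbols, the overlap test described above amounts to the following, where c_j and r_j are the centroid and radius of cluster j and k is the chosen multiplier between 2 and 3:
x \text{ (assigned to cluster } i\text{) overlaps cluster } j \neq i \iff \lVert x - c_j \rVert \le r_j, \qquad r_j = k \, \sigma_j^{\mathrm{RMS}}.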
Friday, February 4, 2000
9:30-10:30 - Interdisciplinary application of time series methods inspired by chaos theory
Thomas Schreiber, University of Wuppertal
We report on real world applications of time series methods developed on the basis of the theory of deterministic chaos. First, we demonstrate statistical criteria for the necessity of a nonlinear
approach. Nonlinear processes are not in general purely deterministic. Then we discuss modified methods that can cope with noise and nonstationarities. In particular, we will discuss nonlinear
filtering, signal classification, and the detection of nonlinear coherence between processes.
11:00-12:00 - Symbolic data compression concepts for analyzing experimental data
Matt Kennel, Institute for Nonlinear Science at UCSD, San Diego
1:00-2:00 - Geometric time series analysis
Mark Muldoon, University of Science and Technology in Manchester
A discussion of a circle of techniques, all developed within the last 20 years and all loosely organized around the idea that one can extract detailed information about a dynamical system (say, the
equations of motion governing some industrial process...) by forming vectors out of successive entries in a time series of measurements.
2:30-3:30 - Chaotic communication using optical and wireless devices
Henry Abarbanel, Institute for Nonlinear Science at UCSD, San Diego
3:30-4:30 - Status of cosmic microwave background data analysis: motivations and methods
Simon Prunet, CITA (Canadian Institute for Theoretical Astrophysics), University of Toronto
After a brief review of the physics that motivates measurements of Cosmic Microwave Background anisotropies, I will present the current observational status, the analysis methods used so far, and the
challenge posed by the upcoming huge data sets from future satellite experiments. | {"url":"http://www.fields.utoronto.ca/programs/scientific/99-00/data_analysis/program.html","timestamp":"2014-04-20T11:35:29Z","content_type":null,"content_length":"36779","record_id":"<urn:uuid:4f11b903-1e9e-415e-9916-70df22b900ee>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00639-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math 216 Discrete Mathematics - Syllabus
January 9, 2012
Instructor: Eileen M. Peluso, D325 Academic Center, (570) 321-4135
Email: pelusoem@lycoming.edu
Office hours: See www.lycoming.edu/~pelusoem.
Objective: Introduce students to discrete mathematical structures including formal logic, proof by mathematical induction, sequences, sums, products, set theory, counting and probability, functions
including recursive functions, regular expressions, and finite-state automata. Application of these structures to computer systems will be incorporated.
Course material covers important departmental learning goals: assessing the probability of a simple random event and solving mathematical problems using technology.
Furthermore, a goal of this course is to improve students' computational thinking skills, defined as a combination of abstract, algorithmic, and critical thinking, and to prepare students for further work in the scientific traditions that require the application of computational fundamentals in problem-solving, all in support of the Lycoming College mission (see the full statement on the college web site at http://www.lycoming.edu/aboutLycoming/mission.aspx).
Text: Susanna S. Epp, Discrete Mathematics with Applications, 3^rd edition, Brooks/Cole-Thomson Learning, 2004.
· Exams (3): 70% (tentatively scheduled for Feb. 3, Feb 29, and Mar 30)
· See the remarks below regarding the inclusion of homework grades on exams
· Preparation, participation, and attendance: 10%
· Final: 20% (The comprehensive final exam will be given during the exam week at the scheduled time only.)
Grade scale: If you earn the following average, you will receive at least the grade indicated.
· 93.0 or above A
· 90.0 to 92.99 A-
· 87.0 to 89.99 B+
· 83.0 to 86.99 B
· 80.0 to 82.99 B-
· 77.0 to 79.99 C+
· 73.0 to 76.99 C
· 70.0 to 72.99 C-
· 67.0 to 69.99 D+
· 63.0 to 66.99 D
· 60.0 to 62.99 D-
· 59.99 or below F
1. Any potential conflicts with the above test dates (for example, due to scheduled college athletic events) should be resolved within the first two weeks of the semester. Otherwise, students will
not be excused from exams unless
· they are ill and have been to the infirmary or have seen a doctor, or
· they have an emergency situation and have received exemption from the dean.
It is wise to contact me before missing an exam. Any tests missed will result in a grade of zero unless arrangements for a make-up are made within 48 hours.
2. Students are expected to attend class and to be on time. Attendance signature sheets will be circulated at the beginning of each class period. It is the student's responsibility to make sure
that they have signed the day's attendance sheet. It is also the student's responsibility to obtain details about any missed work, homework assignments, announcements, and any information
disseminated during the missed classes. A student who misses submitting more than 10 homework assignments over the course of the semester will automatically fail the course.
3. Reading assignments are given on a daily basis. See the attached term planner. All reading assignments are to be completed before coming to class. Class time will NOT consist of lectures that
repeat the presentation in the text, but rather on problems exercising the material. Students are however encouraged to bring questions to class on those portions of the reading that they find
unclear. The reading material covered for any given day will constitute the basis of the written homework assignment due (usually) the subsequent class day.
4. Written homework assignments will generally be made on a daily basis and are due at the beginning of the first class after the day assigned, unless otherwise indicated. Each written homework will
be graded on a 10 point scale and returned. Points from homework assignments leading up to each exam are added, up to a maximum total of 50. Completing all homework is strongly encouraged, even
though no more than 50 points can be earned leading up to each exam. No points are ever given for late homework for any reason unless arrangements are made with a Dean of the College. Students who
miss submitting an assignment on time can still earn the maximum 50 points leading up to the exam, if no more than 2 or 3 other homework assignments are missed.
5. Semester exams will each be valued at 200 points: 150 for the problems completed on the in-class exam and up to 50 points from homework submissions.
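As a worked illustration (with hypothetical scores, not taken from the syllabus): a student who earns 9/10 on six of the seven assignments before an exam would have
6 \times 9 = 54 \;\Rightarrow\; \min(54, 50) = 50 \text{ homework points}, \qquad 150 + 50 = 200 \text{ points possible for that exam.}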
6. The final letter grade will be determined by the final numerical grade using the above conversion guide. The grade for preparation, participation, and attendance includes attendance and
participation (obviously) as well as taking the readings seriously in order to be involved fully in the class discussions. It is not expected that students will have the right answers to all
questions and board work.
7. Academic dishonesty is not allowed. Do not discuss contents of exams with anyone other than the instructor until the last person has taken the exam. To protect yourself, check with the
instructor before discussing test questions with anyone.
Written assignments are to be completed individually by each student. However, discussions with your professor and other students about coursework, including homework assignments, are encouraged.
There is a fine line between the two. Check if you are not sure that what you are doing is acceptable. However, as a general rule of thumb: The difference between sharing ideas and plagiarism will
be determined by the instructor as follows: if you cannot discuss, expound upon, justify, and modify what you have written, then you have plagiarized.
8. We are in the process of arranging for a tutor for this course. Details will be provided when they are available. | {"url":"http://lycofs01.lycoming.edu/~pelusoem/math216/syllabus.htm","timestamp":"2014-04-20T05:50:57Z","content_type":null,"content_length":"41892","record_id":"<urn:uuid:ae410c89-7865-4f11-9be9-81c752140aa0>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
This collection of activities is based on a weekly series of space science mathematics problems distributed during the 2012-2013 school year. They were intended for students looking for additional challenges in the math and physical science curriculum in grades 5 through 12. The problems were created to be authentic glimpses of modern science and engineering issues, often involving actual research data. The problems were designed to be one-pagers with a Teacher's Guide and Answer Key as a second page.
In this problem set, learners will analyze a table of global electricity consumption to answer a series of questions and consider the production of carbon dioxide associated with that consumption. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will analyze a graph of solar irradiance since 1610. Answer key is provided. They will consider average insolation, percent changes and the link between irradiance and climate change. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will analyze two figures: a graph of Arctic sea ice extent in September between 1950 and 2006, and a graph showing poll results for 2006-2009 for the percentage of adults who believe there exists scientific evidence for global warming. They will develop linear models for both graphs. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will calculate the energy consumption of a home in kilowatt-hours (kWh) to answer a series of questions. They will also consider carbon dioxide production associated with that energy consumption. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will create and use a differential equation of rate-of-change of atmospheric carbon dioxide. They will refer to the "Keeling Curve" graph and information on the sources and sinks of carbon on Earth to create the equation and apply it to answer a series of questions. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, students calculate precisely how much carbon dioxide is in a gallon of gasoline. A student worksheet provides step-by-step instructions as students calculate the production of carbon dioxide. The investigation is supported by the textbook "Climate Change," part of "Global System Science," an interdisciplinary course for high school students that emphasizes how scientists from a wide variety of fields work together to understand significant problems of global impact.
Students are presented with a graph of atmospheric CO₂ values from Mauna Loa Observatory, and are asked to explore the data by creating a trend line using the linear equation, and then use the equation to predict future CO₂ levels. Students are asked to describe qualitatively what they have determined mathematically, and suggest reasons for the patterns they observe in the data. A clue to the reason for the data patterning can be deduced by students by following up this activity with the resource Seasonal Vegetation Changes. The data graph and a student worksheet are included with this activity. This is an activity from Space Update, a collection of resources and activities provided to teach about Earth and space. Summary background information, data and images supporting the activity are available on the Earth Update data site.
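As an illustration of the trend-line step (the data values here are approximate and are not taken from the activity): using roughly 316 ppm in 1959 and 385 ppm in 2008 at Mauna Loa,
m \approx \frac{385 - 316}{2008 - 1959} \approx 1.4 \text{ ppm/yr}, \qquad C(t) \approx 316 + 1.4\,(t - 1959),
which predicts on the order of 415 ppm for 2030, and understates reality, since the Keeling curve bends upward.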
In this activity, students learn about the changing configuration of the continents over geological time resulting from plate tectonics. Using a map pair, students measure the difference in distance between continents 94 million years ago and today, and calculate the speed at which the plates have moved. The resource includes the images and a student worksheet. This is an activity from Space Update, a collection of resources and activities provided to teach about Earth and space. Summary background information, data and images supporting the activity are available on the Earth Update data site.
In this online, interactive module, students will learn how to interpret weather patterns from satellite images, predict storm paths and forecast the weather for their area. The module is part of an online course for grades 7-12 in satellite meteorology, which includes 10 interactive modules. The site also includes lesson plans developed by teachers and links to related resources. Each module is designed to serve as a stand-alone lesson; however, a sequential approach is recommended. Designed to challenge students through the end of 12th grade, middle school teachers and students may choose to skim or skip a few sections.
| {"url":"http://nasawavelength.org/resource-search?facetSort=1&topicsSubjects=Earth+and+space+science%3AEarth+processes&resourceType%5B%5D=Instructional+materials%3ATool%2Fsoftware&resourceType%5B%5D=Instructional+materials%3AProblem+set","timestamp":"2014-04-16T05:42:13Z","content_type":null,"content_length":"73342","record_id":"<urn:uuid:879460db-d030-4950-894b-0004624e4297>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Multicategory support vector machines: Theory and application to the classification of microarray data and satellite radiance data
- in CVPR , 2006
Cited by 210 (7 self)
We consider visual category recognition in the framework of measuring similarities, or equivalently perceptual distances, to prototype examples of categories. This approach is quite flexible, and
permits recognition based on color, texture, and particularly shape, in a homogeneous framework. While nearest neighbor classifiers are natural in this setting, they suffer from the problem of high
variance (in bias-variance decomposition) in the case of limited sampling. Alternatively, one could use support vector machines but they involve time-consuming optimization and computation of
pairwise distances. We propose a hybrid of these two methods which deals naturally with the multiclass setting, has reasonable computational complexity both in training and at run time, and yields
excellent results in practice. The basic idea is to find close neighbors to a query sample and train a local support vector machine that preserves the distance function on the collection of
neighbors. Our method can be applied to large, multiclass data sets for which it outperforms nearest neighbor and support vector machines, and remains efficient when the problem becomes intractable
for support vector machines. A wide variety of distance functions can be used and our experiments show state-of-the-art performance on a number of benchmark data sets for shape and texture
classification (MNIST, USPS, CUReT) and object recognition (Caltech-101). On Caltech-101 we achieved a correct classification rate of 59.05%(±0.56%) at 15 training images per class, and 66.23%
(±0.48%) at 30 training images. 1.
- Journal of Machine Learning Research , 2004
"... Editor: John Shawe-Taylor We consider the problem of multiclass classification. Our main thesis is that a simple “one-vs-all ” scheme is as accurate as any other approach, assuming that the
underlying binary classifiers are well-tuned regularized classifiers such as support vector machines. This the ..."
Cited by 202 (0 self)
Add to MetaCart
Editor: John Shawe-Taylor. We consider the problem of multiclass classification. Our main thesis is that a simple "one-vs-all" scheme is as accurate as any other approach, assuming that the underlying binary classifiers are well-tuned regularized classifiers such as support vector machines. This thesis is interesting in that it disagrees with a large body of recent published work on multiclass classification. We support our position by means of a critical review of the existing literature, a substantial collection of carefully controlled experimental work, and theoretical arguments.
, 2004
"... In this paper we argue that the choice of the SVM cost parameter can be critical. We then derive an algorithm that can fit the entire path of SVM solutions for every value of the cost parameter,
with essentially the same computational cost as fitting one SVM model. ..."
Cited by 148 (9 self)
Add to MetaCart
In this paper we argue that the choice of the SVM cost parameter can be critical. We then derive an algorithm that can fit the entire path of SVM solutions for every value of the cost parameter, with
essentially the same computational cost as fitting one SVM model.
- Journal of Machine Learning Research , 2005
"... We study the problem of finding an optimal kernel from a prescribed convex set of kernels K for learning a real-valued function by regularization. We establish for a wide variety of
regularization functionals that this leads to a convex optimization problem and, for square loss regularization, we ch ..."
Cited by 96 (7 self)
Add to MetaCart
We study the problem of finding an optimal kernel from a prescribed convex set of kernels K for learning a real-valued function by regularization. We establish for a wide variety of regularization
functionals that this leads to a convex optimization problem and, for square loss regularization, we characterize the solution of this problem. We show that, although K may be an uncountable set, the
optimal kernel is always obtained as a convex combination of at most m+2 basic kernels, where m is the number of data examples. In particular, our results apply to learning the optimal radial kernel
or the optimal dot product kernel. 1.
- Journal of Computational and Graphical Statistics , 2001
"... The support vector machine (SVM) is known for its good performance in binary classification, but its extension to multi-class classification is still an on-going research issue. In this paper,
we propose a new approach for classification, called the import vector machine (IVM), which is built on ker ..."
Cited by 91 (3 self)
Add to MetaCart
The support vector machine (SVM) is known for its good performance in binary classification, but its extension to multi-class classification is still an on-going research issue. In this paper, we
propose a new approach for classification, called the import vector machine (IVM), which is built on kernel logistic regression (KLR). We show that the IVM not only performs as well as the SVM in
binary classification, but also can naturally be generalized to the multi-class case. Furthermore, the IVM provides an estimate of the underlying probability. Similar to the "support points" of the
SVM, the IVM model uses only a fraction of the training data to index kernel basis functions, typically a much smaller fraction than the SVM. This gives the IVM a computational advantage over the
SVM, especially when the size of the training data set is large. 1
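For context, the SVM and the kernel logistic regression underlying the IVM fit the same regularization template and differ only in the loss term; in standard notation,
\min_{f \in \mathcal{H}_K} \; \frac{1}{n}\sum_{i=1}^{n} L\big(y_i, f(x_i)\big) + \lambda \lVert f \rVert_{\mathcal{H}_K}^{2}, \qquad L_{\mathrm{SVM}}(y,f) = (1 - y f)_{+}, \quad L_{\mathrm{KLR}}(y,f) = \log\big(1 + e^{-y f}\big).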
- JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION , 2002
"... Monitoring gene expression profiles is a novel approach in cancer diagnosis. Several studies showed that prediction of cancer types using gene expression data is promising and very informative.
The Support Vector Machine (SVM) is one of the classification methods successfully applied to the cancer d ..."
Cited by 88 (4 self)
Add to MetaCart
Monitoring gene expression profiles is a novel approach in cancer diagnosis. Several studies showed that prediction of cancer types using gene expression data is promising and very informative. The
Support Vector Machine (SVM) is one of the classification methods successfully applied to the cancer diagnosis problems using gene expression data. However, its optimal extension to more than two
classes was not obvious, which might impose limitations in its application to multiple tumor types. In this paper, we analyze a couple of published multiple cancer types data sets by the
multicategory SVM, which is a recently proposed extension of the binary SVM.
- in Machine Learning. PhD thesis, MIT , 2002
"... 2 Everything Old Is New Again: A Fresh Look at Historical ..."
- Data Mining Knowledge Disc , 2002
"... Abstract. The Bayes rule is the optimal classification rule if the underlying distribution of the data is known. In practice we do not know the underlying distribution, and need to “learn ”
classification rules from the data. One way to derive classification rules in practice is to implement the Bay ..."
Cited by 84 (13 self)
Add to MetaCart
Abstract. The Bayes rule is the optimal classification rule if the underlying distribution of the data is known. In practice we do not know the underlying distribution, and need to “learn ”
classification rules from the data. One way to derive classification rules in practice is to implement the Bayes rule approximately by estimating an appropriate classification function. Traditional
statistical methods use estimated log odds ratio as the classification function. Support vector machines (SVMs) are one type of large margin classifier, and the relationship between SVMs and the
Bayes rule was not clear. In this paper, it is shown that the asymptotic target of SVMs are some interesting classification functions that are directly related to the Bayes rule. The rate of
convergence of the solutions of SVMs to their corresponding target functions is explicitly established in the case of SVMs with quadratic or higher order loss functions and spline kernels.
Simulations are given to illustrate the relation between SVMs and the Bayes rule in other cases. This helps understand the success of SVMs in many classification studies, and makes it easier to
compare SVMs and traditional statistical methods.
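The "asymptotic target" result can be stated in one line: with labels y \in \{-1,+1\} and p(x) = P(Y = 1 \mid X = x), the population minimizer of the hinge loss is
f^{*}(x) = \operatorname{sign}\big(p(x) - \tfrac{1}{2}\big),
i.e. the Bayes rule itself, which is the sense in which the SVM targets classification directly rather than estimating log odds.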
- In Proceedings of Advances in Neural Information Processing Systems , 2002
"... We discuss the problem of ranking instances with the use of a “large margin ” principle. We introduce two main approaches: the first is the “fixed margin ” policy in which the margin of the
closest neighboring classes is being maximized — which turns out to be a direct generalization of SVM to ranki ..."
Cited by 65 (0 self)
Add to MetaCart
We discuss the problem of ranking instances with the use of a “large margin ” principle. We introduce two main approaches: the first is the “fixed margin ” policy in which the margin of the closest
neighboring classes is being maximized — which turns out to be a direct generalization of SVM to ranking learning. The second approach allows for different margins where the sum of margins is
maximized. This approach is shown to reduce to-SVM when the number of classes. Both approaches are optimal in size of where is the total number of training examples. Experiments performed on visual
classification and “collaborative filtering ” show that both approaches outperform existing ordinal regression algorithms applied for ranking and multi-class SVM applied to general multi-class
classification. 1
- Taiwan University , 2005
"... Feature selection is an important issue in many research areas. There are some reasons for selecting important features such as reducing the learning time, improving the accuracy, etc. This
thesis investigates the performance of combining support vector machines (SVM) and various feature selection s ..."
Cited by 58 (0 self)
Add to MetaCart
Feature selection is an important issue in many research areas. There are some reasons for selecting important features such as reducing the learning time, improving the accuracy, etc. This thesis
investigates the performance of combining support vector machines (SVM) and various feature selection strategies. The first part of the thesis mainly describes the existing feature selection methods
and our experience of using those methods to enter a competition. The second part studies more feature selection strategies using the SVM. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=90387","timestamp":"2014-04-16T23:16:23Z","content_type":null,"content_length":"38561","record_id":"<urn:uuid:09ba58b3-937e-4b79-8682-4d379861b724>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: IEEE 754 vs Fortran arithmetic
henry@zoo.toronto.edu (Henry Spencer)
Wed, 24 Oct 90 16:25:29 GMT
From comp.compilers
Newsgroups: comp.compilers,comp.lang.fortran
From: henry@zoo.toronto.edu (Henry Spencer)
Keywords: Fortran
Organization: U of Toronto Zoology
References: <9010230628.AA22160@admin.ogi.edu>
Date: Wed, 24 Oct 90 16:25:29 GMT
> [our moderator writes]
>... I know of no reason that an IEEE implementation of F77 would be
>nonconforming. ...
I can think of at least one: F77 flatly denies the existence of -0,
while IEEE demands it. (One of Dr. Kahan's favorite examples in his
talks is an algorithm which, when implemented straightforwardly, does
the right thing if -0 is implemented properly and screws up bizarrely
if not, so yes, it does matter.)
Henry Spencer at U of Toronto Zoology, henry@zoo.toronto.edu utzoo!henry
[Given that +0 = -0, it's not clear to me that the existence of -0 breaks
anything. Keep in mind that F77 is a permissive standard, extensions are
permitted so long as conforming programs do the right thing. -John]
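[An aside for the archive reader: a minimal illustration of the point, in Haskell rather than Fortran, purely for convenience. The two zeros compare equal, yet the sign of zero can steer a result:
main :: IO ()
main = do
  print (0.0 == (-0.0))          -- True: the zeros compare equal
  print (1 / 0.0    :: Double)   -- Infinity
  print (1 / (-0.0) :: Double)   -- -Infinity: the sign of zero mattered
]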
| {"url":"http://compilers.iecc.com/comparch/article/90-10-101","timestamp":"2014-04-19T17:07:34Z","content_type":null,"content_length":"5573","record_id":"<urn:uuid:a435672a-cf91-41c2-b88e-d81fd5963546>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
Perimeter Of A Square Worksheet
Perimeter= _____ Perimeter of Square Worksheet . Student Name: _____ Score: Free Math Worksheets @ http://www.mathworksheets4kids.com Answers Side= 4 cm Perimeter= 16 cm Side= 9 m Perimeter= 36 m
Side= 5.5 in Perimeter= 22 in Side ...
Perimeter of Square s Worksheet . Student Name: _____ Score: Free Math Worksheets @ http://www.mathworksheets4kids.com Answers Perimeter= 22 cm Perimeter= 22 cm ...
Rectangle, Trapezoid, Square: Perimeter = _____ for each figure; labeled side lengths include 134 km, 23 km, 2,000 m, 249 cm, 1,500 m, 48 km and 203 cm. Isosceles Triangle ...
Name _____ Date _____ ©This area and/or perimeter worksheet is from www.teach-nology.com Area and Perimeter Using Square Units
Perimeter worksheet 2 Work out these problems: 1. Find the perimeter of a pentagon when each side is 20cm long. 2. What is the length of the third side of a triangle if one side measures 10
Title: Area and Perimeter Word Problems Practice Worksheet Author: http://www.mathworksheetsland.com/4/25perimeter.html Subject: Geometry Created Date
Perimeter worksheet 1 Remember: Perimeter is the total distance around the outside of a 2D shape. You calculate it by adding together all the lengths of a shape.
Perimeter = 26 m Perimeter = 16 km Perimeter = 24 cm Bonus Box Write the names of the polygons pictured above. Answer to bonus box: rectangle triangle square trapezoid hexagon rectangle parallelogram
diamond or square octagon Super Teacher Worksheets - http://www ...
square paper which covers a total area of 225 feet². What is the length of any side of the paper? a. 550 _____ 2. William buys a new desk for his office which measures 11 feet ... Area and
Perimeter Word Problems Matching Worksheet Author:
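The first problem above works out as follows (a quick worked solution; s is the side length):
s = \sqrt{A} = \sqrt{225\ \text{ft}^2} = 15\ \text{ft}, \qquad P = 4s = 60\ \text{ft}.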
Perimeter and Area Worksheet 1 Name Date Solve the problems below ... The perimeter of a square is 220 centimeters. What is the length of each side? 8. If one side of a stop sign measures 12 inches,
then what is its perimeter?
Name _____ Date _____ ©This area and/or perimeter worksheet is from www.teach-nology.com Perimeter of Square Created Shapes
Perimeter & Area Worksheet I For each of the following plane figures, find the Perimeter & Area Assume all units are in cm. 1) Perimeter Area 8 3 ... The perimeter of a square is 220 cm. What is the
length of each side? 17. 18.
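For the recurring "perimeter of a square is 220 cm" item, the one-line solution:
P = 4s \;\Rightarrow\; s = \frac{220\ \text{cm}}{4} = 55\ \text{cm}.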
perimeter, quadrilateral, square centimeter. Lessons ALIVE! ... each side and that the squares are called "square ... Perimeter and Area Worksheet, Agency for Instructional Technology • www.ait.net. Lessons ALIVE: Engaging Learners with Video. 5 in 5 in
Area and Perimeter Worksheet Color in the boxes to show the shapes you made with your ... Perimeter _____ units Area _____ square units Perimeter _____ units Area _____ square units Perimeter _____
units Area _____ square units . Copyright 2008 LessonSnips www.lessonsnips.com ...
Find the length of one side of the square. Answer: ? (Perimeter = 40 m) (2) The area of a rectangle is 80 cm². If its length is 16 cm, what is its width? Answer: ? (Area = 80 cm², length 16 cm) (3) The area of a square is 49 mm². What is the length of each side? Answer: ?
Finding Area of a Rectangle by Counting Square Units ...
Grade!3:!Area!and!Perimeter! Lesson 8: A Square What? Overview and Background Information Mathematical Goals ...
Lesson 3: Perimeter and Area Worksheet Name 1. Find the area and perimeter of the figure. Units are centimeters and square centimeters. The perimeter of the figure is . The area of the figure is ...
Perimeter Worksheet For a rectangle with width of 68.75 centimeter and length of 72.50 centimeter, ... Perimeter = square feet For a parallelepiped with its sides measuring 20, 15 and 25 meter, then
its Perimeter = meter For a rhombus ...
... Worksheet by Kuta Software LLC. Kuta Software - Infinite Pre-Algebra Name_____ Area of Squares ...
Name _____Per ____ Worksheet - Area and Perimeter – Rectangle, square and Triangle – Chapter 0 Angles that appear to be right angles are right angles)
• A worksheet to record measurement; ... The perimeter of a square painting is 20 meters. If Jan decided to paint a border along each side in a different color, how long would each of those borders
be? 7.
Lesson 3: Perimeter and Area Worksheet Name 1. Find the area and perimeter of the figure. Units are centimeters and square centimeters. The perimeter of the ... Units are feet and square feet. The
perimeter of the figure is . The area of the figure ...
Perimeter of polygons worksheet ... polygons are a square, equilateral triangle, and a regular pentagon. 3. A stop sign has a side of 3 inches. What is the name of this ... Perimeter is the length
around the outside of a polygon. Title:
Perimeter and Area MCAS Worksheet 1 Name Printed from myMCAS.com. ... What is the area, in square inches, of the part of the rear window that is cleared by the wiper? Show or explain how you got your
answer. d.
The perimeter of a shape is the total length of its sides. ... Area measures the surface of something, usually in square metres (m²), square centimetres (cm²) or square millimetres (mm²).
The area of this lawn is 15 m². ... Student worksheet. Try these. 1.
10. A square has an area of 196 square centimeters. What is the perimeter? P = 56 cm Copyright 2001 Mrs. Glosser’s Math Goodies, Inc. All Rights Reserved. mathgoodies.com Perimeter & Area Worksheet 2
Key Name . Title: Microsoft Word - unit1_wks2_key.docx Author: gglosser Created ...
Perimeter Worksheet Shape Inches Centimeters Triangle Square Rectangle Trapezoid #1 Trapezoid #2 Hexagon Parallelogram NSF North Mississippi GK-8 3
Area and Perimeter Word Problems Worksheets 1 and 2 Answer Keys Area and Perimeter Word Problems Worksheet 1 Area and Perimeter Word Problems Worksheet 2
Use the Perimeter worksheet, Student Resource Sheet 1, to preassess what students already know about perimeter. ... O D. 12 square units O D. 10 square units Directions: Find the perimeter of the
following shapes. 3. 4. O A. 10 ...
Student Worksheet In these problems you will be working on understanding the relationship between area and perimeter. ... perimeter given an area of 9 square units. • Mrs. Hill asked you to construct
a pen for the class rat. You can use
What is the total square footage of the new house? _____ Brain booster: Draw a floor plan of your home. Measure each room and ... Find the Area Worksheet Answer Key Item 4528 1. The living room is
270 square feet. 2. The master bedroom is 184 square feet. 3.
without the base area: 1/2 • 40 feet • 10 feet = 200 square feet. Bonus Worksheet 3: Turn Up the Volume! 1. ... Area: 3.14 • 4² = 50.24 square feet, rounded = 50 square feet 3. Perimeter: 8 + 6 + 10
= 24 feet Area: 1/2 • (8 • 6) = 24 square feet
Geometry Quadrilaterals B C A D T R Q S G P Q S T Parallelogram, Rectangle, Rhombus & Square Worksheet 1. If the perimeter of a square is 56 cm, find the 2.
© Math Worksheet Center Quiz: Perimeter and Area of Polygons ... what is the length of the base? 3 What is the area of a square with perimeter 35 cm? 4 What is the area of a parallelogram with side
length 25 m, ... 7 What is the area of a rectangle with perimeter 196.6 cm and base
Both shapes (rectangles) have the same perimeter. (A square is a particular kind of rectangle.) How can we compare their areas? Let students make suggestions, and then present the following method.
... Worksheet G12(b) for students to record solutions.
Measurement Perimeter Perimeter can be measured by counting when there are square units. = 1 square unit This means that each side of the square
Should be able to estimate the length of a side of a square/rectangle, by comparing it ... Attachment 5: Area and Perimeter lesson plan and worksheet. environment in the classroom. Strand: Physical,
Personal and Social Learning
Perimeter (square) = 4 S The students are then asked to use exactly 24 pieces (units) of wood to ... Worksheet P1 Directions: Find the perimeter of each figure. 1. _____ 2. _____ l l l l l l l l l l
Worksheet: Area and Perimeter Optimization Multiple Choice Identify the choice that best completes the statement or answers the question. ____ 1.
Area and Perimeter Geometry: Use visualization, spatial reasoning, and geometric modeling to solve problems. 7.G.4.1 Compute the perimeter and area of common geometric shapes and use the ... A park
has a square area of 225 feet, what is the perimeter of the park?
square inches Activity Using Tiles to Find the ... perimeter is a one-dimensional or linear measure, while area is a measure of two-dimensional space. SESSION FOLLOW-UP Daily Practice Daily Practice:
For reinforcement of this unit’s
closed geometric figure; Area is the number of square units needed to ... perimeter on their worksheet. • Model student answers on the overhead using tick marks. • Ask: How can you change the
dimensions of this rectangle while keeping
Use Formulas for Perimeter Solve each problem. 1. The Kids Club has gone to the park for the afternoon. Mrs. Sams has drawn a square at the edge of the parking
Perimeter of polygons worksheet 1. ... Find the perimeter of the square with the dimensions below. Please show your work. 5. Perimeter is the length around the _____ of a polygon. Title: Microsoft
Word - Perimeter.doc Created Date:
• Explain the concepts of linear measurement and perimeter, and square measurement and area. • Show the class Shape l. ... • Direct your students to use their color tiles to complete the worksheet,
estimating and measuring Shapes II through VI.
Name _____ Date _____ © Math Worksheet Center Quiz: Perimeter and Circumference 1 Find the perimeter of the given square.
Area and Perimeter Using GSP worksheet 27-28 Design a House Project paper 29 Day #4: Tangrams 30-36 ... SHAPE #3: Draw a rectangle SHAPE #4: Draw a square Perimeter of 14 units Perimeter of 12 units
Area of 12 square units Area of 9 square units Part #3:
of a square that has a perimeter of 36 inches? 7. What is the measure of the sides of a square that has an area of 49 square feet? Find Missing Dimensions of Rectangles 37 035-037_L06_112372.indd 37
7/13/09 12:23:58 PM. Title: 035-037_L06_112372.indd
• To determine the perimeter of a square you can add the length of all four sides of the ... The formative assessment worksheet (finding perimeter) can be found on this site. http://homepage.mac.com/ | {"url":"http://ebookily.org/pdf/perimeter-of-a-square-worksheet","timestamp":"2014-04-24T21:16:54Z","content_type":null,"content_length":"42593","record_id":"<urn:uuid:ee8d74bc-4f83-4b33-8663-292bd9a52116>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need help with overloaded * operator
I'm writing a vector class that will add, subtract, and multiply vectors, etc., but I'm having trouble with the * operator. I need to multiply a vector (a complex datatype defined by the class) by an
integer, basically scalar multiplication, but I get this error when I compile: error: no match for 'operator*' in '2 * estvec', where 2 is just the integer I used for testing. I tried two ways of
doing it, but neither of them works. Here is the code:
The public/private declarations of the class (not complete):
class vector
{
private:
    double magnitude;
    double direction;
public:
    vector(); // Default constructor
    vector(const vector &old); // Copy constructor
    // Overloaded operators
    // for first method
    friend vector operator *(const vector &left, const vector &right);
    // for second method. Note: I only kept one or the other
    vector operator*(const vector &num);
};
Here's the first method:
vector operator *(const vector &left, const vector &right)
{
    vector product;
    product.magnitude = left.magnitude * right.magnitude;
    if (left.magnitude < 0.0)
        product.direction = right.direction + 180.0;
    // this is based on a 360 degree system
    return product;
}
Here's the second method:
vector vector::operator*(const vector &num)
{
    vector result;
    result.magnitude = magnitude * num.magnitude;
    result.direction = num.direction;
    return result;
}
I simply used this to test the code:
vector prod;
prod = 2 * estvec; //estvec is just another vector I declared earlier
cout << prod; //I have a working overloaded << operator
Last edited by orikon; 09-23-2006 at 11:55 PM.
You should make operator* a non-member function and have three versions. One that multiplies two vectors, one that has the vector on the left and an int on the right, and one that has an int on
the left and the vector on the right. Your second attempt is a member function, meaning a vector is automatically on the left, and it takes another vector you named num. You want one of the
variable types to be int in order to multiply by an int. You also have to make it a non-member function so the left hand side isn't forced to be a vector. In your example, 2 * estvec, the int is
on the left.
You should make operator* a non-member function and have three versions. One that multiplies two vectors, one that has the vector on the left and an int on the right, and one that has an int on
the left and the vector on the right. Your second attempt is a member function, meaning a vector is automatically on the left, and it takes another vector you named num. You want one of the
variable types to be int in order to multiply by an int. You also have to make it a non-member function so the left hand side isn't forced to be a vector. In your example, 2 * estvec, the int is
on the left.
That makes a lot of sense, thanks so much. I've got it working now!
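For anyone finding this thread later, here is a sketch of the three non-member overloads described above. The member names and the 360-degree convention follow the posted class; the int/vector combinations are an assumption about what the final working code looked like, and all three functions are assumed to be declared as friends of the vector class (or to use public accessors):

vector operator*(const vector &left, const vector &right)
{
    vector product;
    product.magnitude = left.magnitude * right.magnitude;
    product.direction = right.direction;
    if (left.magnitude < 0.0)
        product.direction = right.direction + 180.0; // 360-degree system
    return product;
}

vector operator*(const vector &v, int n)
{
    vector result;
    result.magnitude = v.magnitude * n;
    result.direction = v.direction;
    if (n < 0)
        result.direction = v.direction + 180.0;
    return result;
}

vector operator*(int n, const vector &v)
{
    return v * n; // reuses the overload above, so 2 * estvec now compiles
}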
{"url":"http://cboard.cprogramming.com/cplusplus-programming/83337-need-help-overloaded-*-operator.html","timestamp":"2014-04-20T02:26:11Z","content_type":null,"content_length":"49279","record_id":"<urn:uuid:de796293-a7af-4cca-a997-fede45fa9142>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
[Haskell-cafe] Re: [Haskell] View patterns in GHC: Request for feedback
Conor McBride ctm at cs.nott.ac.uk
Fri Jul 27 05:35:56 EDT 2007
> > In the dependently typed setting, it's often the case that the
> > "with-scrutinee" is an expression of interest precisely because it
> > occurs
> > in the *type* of the function being defined. Correspondingly, an
> > Epigram implementation should (and the Agda 2 implementation now does)
> > abstract occurrences of the expression from the type.
> Oh, I see: you use 'with' as a heuristic for guessing the motive of the
> inductive family elim. How do you pick which occurrences of the
> with-scrutinee to refine, and which to leave as a reference to the
> original variable? You don't always want to refine all of them, do you?
There are two components to this process, and they're quite separable.
Let's have an example (in fantasy dependent Haskell), for safe lookup.
defined :: Key -> [(Key, Val)] -> Bool
defined k [] = False
defined k ((k', _) : kvs) = k == k' || defined k kvs
data Check :: Bool -> * where
OK :: Check True
lookup :: (k :: Key; kvs :: [(Key, Val)]) -> Check (defined k kvs) -> Val
lookup k [] !! -- !! refutes Check False; no rhs
lookup k ((k', v) : kvs) p with k == k'
lookup k ((k', v) : kvs) OK | True = v
lookup k ((k', v) : kvs) p' | False = lookup k kvs p'
Left-hand sides must refine a 'problem', initially
lookup k kvs p where
k :: Key; kvs :: [(Key, Val)]; p :: Check (defined k kvs)
Now, {-before-} the with, we have patterns refining the problem
lookup k ((k', v) : kvs) p where
k, k' :: Key
v :: Val
kvs :: [(Key, Val)]
p :: Check (k == k' || defined k kvs)
The job of "with" is only to generate the problem which the lines in its
block must refine. We introduce a new variable, abstracting all
occurrences of the scrutinee. In this case, we get the new problem
lookup k ((k', v) : kvs) p | b where
k, k' :: Key
v :: Val
kvs :: [(Key, Val)]
b :: Bool
p :: Check (b || defined k kvs)
All that's happened is the abstraction of (k == k'): no matching, no
mucking about with eliminators and motives. Now, when it comes to
checking the following lines, we're doing the same job to check
dependent patterns (translating to dependent case analysis, with
whatever machinery is necessary) refining the new problem. Now,
once b is matched with True or False, the type of p computes to
something useful.
So there's no real guesswork here. Yes, it's true that the choice
to abstract all occurrences of the scrutinee is arbitrary, but "all
or nothing" are the only options which make sense without a more
explicit mechanism to pick the occurrences you want. Such a
mechanism is readily conceivable: at worst, you just introduce a
helper function with an argument for the value of the scrutinee and
write its type explicitly.
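(As an illustration of that last point, not part of the original message: in the same fantasy syntax, such a helper for the example above might be sketched as

helpLookup :: (k :: Key; v :: Val; kvs :: [(Key, Val)]; b :: Bool) ->
              Check (b || defined k kvs) -> Val
helpLookup k v kvs True  OK = v
helpLookup k v kvs False p' = lookup k kvs p'

where the value of the scrutinee, b, is now an explicit argument with an explicitly written type.)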
I guess it's a bit weird having more structure to the left-hand
side. The approach here is to allow the shifting of the problem,
rather than to extend the language of patterns. It's a much
better fit to our needs. Would it also suit Haskell?
More information about the Haskell-Cafe mailing list | {"url":"http://www.haskell.org/pipermail/haskell-cafe/2007-July/029623.html","timestamp":"2014-04-19T09:30:22Z","content_type":null,"content_length":"5941","record_id":"<urn:uuid:a10d4b4f-deb0-48ee-b881-68573aeba0e1>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00370-ip-10-147-4-33.ec2.internal.warc.gz"} |
Melissa Math Tutor
Find a Melissa Math Tutor
...Studying for tests and completing homework occurs most successfully when several factors are in place. First, the study space needs to be visually and orally clutter free. In other words,
television, your favorite DJ, five texting/face booking friends, and trash on the table are not conducive to a positive study environment.
41 Subjects: including prealgebra, study skills, GED, elementary (k-6th)
...Since then I have been teaching chemistry in Plano ISD. In addition to teaching, I have served on the Chemistry Leadership Team and have been a Curriculum Writer in the district for 10 years.
Everyone has a unique learning style.
2 Subjects: including algebra 1, chemistry
Hi! I'm a financial professional in Dallas who is looking to help students understand math, finance, or accounting. I have a dual undergraduate degree in Finance & Accounting, a MBA in Finance
from Top 50 school, and work for a Fortune 100 Company.
20 Subjects: including algebra 1, algebra 2, ACT Math, public speaking
As an educator, I want my students to become self-sufficient, engaged lifetime learners. I believe no matter the age of the student, it is possible to achieve this goal. I have also found
fulfilling this role requires my constant study, reflection, and adaptation of teaching methods and styles.
49 Subjects: including statistics, calculus, elementary (k-6th), GED
...With over 4 years of teaching English experience, I know I can help you reach any and all of your English goals. I too have had the experience of learning a second language, so I think this
gives me the empathy and patience required when helping students with their goals. I truly look forward t...
29 Subjects: including ACT Math, algebra 1, algebra 2, reading | {"url":"http://www.purplemath.com/Melissa_Math_tutors.php","timestamp":"2014-04-20T06:30:24Z","content_type":null,"content_length":"23399","record_id":"<urn:uuid:6e6e9516-3002-4c61-8ae2-69e601821a04>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00315-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: [Axiom-developer] Curiosities with Axiom mathematical structures
From: Ralf Hemmecke
Subject: Re: [Axiom-developer] Curiosities with Axiom mathematical structures
Date: Tue, 14 Mar 2006 15:56:05 +0100
User-agent: Thunderbird 1.5 (X11/20051201)
On 03/14/2006 01:43 AM, Gabriel Dos Reis wrote:
"Bill Page" <address@hidden> writes:
| I agree with Martin. One should interpret:
|
|    if Integer has Monoid(*,1)
|
| as the question of whether F = (*,1) is a functor from the category
| containing Integer to Monoid, the category of monoids.
100% agreed.
But that looks like strange syntax to me. If I want to ask
F(Integer) \in Ob(Monoid)
and I have to write "Integer has Monoid(*,1)" that does not really look natural to me.
{"url":"http://lists.gnu.org/archive/html/axiom-developer/2006-03/msg00147.html","timestamp":"2014-04-17T19:06:34Z","content_type":null,"content_length":"9084","record_id":"<urn:uuid:603816ec-2833-438f-a426-f9413093d407>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Term frequency/Inverse document frequency implementation in C#
This code implements Term Frequency/Inverse Document Frequency (TF-IDF). TF-IDF is a statistical text-weighting technique which has been widely used in many search engines and information
retrieval systems. I will deal with the document similarity problem in the next section. To understand the theory, please see this article: Wiki definition of TF/IDF for more details.
Assume that you have a corpus of 1,000 documents and your task is to compute the similarity between two given documents (or a document and a query). The following describes the steps for acquiring the
similarity value:
Document pre-processing steps
• Tokenization: A document is treated as a string (or bag of words), and then partitioned into a list of tokens.
• Removing stop words: Stop words are frequently occurring, insignificant words. This step eliminates the stop words.
• Stemming word: This step is the process of conflating tokens to their root form (connection -> connect).
Document representation
• We generate N distinct words from the corpus and call them index terms (the vocabulary). Each document in the collection is then represented as an N-dimensional vector in term space.
Computing Term weights
• Term Frequency.
• Inverse Document Frequency.
• Compute the TF-IDF weighting.
Measuring similarity between two documents
• We capture the similarity of two documents using cosine similarity measurement. The cosine similarity is calculated by measuring the cosine of the angle between two document vectors.
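To make the weighting and similarity steps concrete, here is a small self-contained C# sketch. It is illustrative only; it is not the article's library code, and the method names are mine:

using System;
using System.Collections.Generic;
using System.Linq;

static class TfIdfSketch
{
    // Term frequency of each token within a single document.
    static Dictionary<string, double> Tf(string[] doc) =>
        doc.GroupBy(t => t)
           .ToDictionary(g => g.Key, g => (double)g.Count() / doc.Length);

    // Inverse document frequency: log(N / number of documents containing the term).
    static double Idf(string term, string[][] docs) =>
        Math.Log((double)docs.Length / docs.Count(d => d.Contains(term)));

    // TF-IDF weight of one term in one document.
    // (Recomputing Tf per term is wasteful, but keeps the sketch short.)
    static double Weight(string term, string[] doc, string[][] docs) =>
        Tf(doc).TryGetValue(term, out double f) ? f * Idf(term, docs) : 0.0;

    // Cosine similarity between the TF-IDF vectors of documents i and j.
    public static double Similarity(string[][] docs, int i, int j)
    {
        var terms = docs[i].Concat(docs[j]).Distinct().ToArray();
        double[] a = terms.Select(t => Weight(t, docs[i], docs)).ToArray();
        double[] b = terms.Select(t => Weight(t, docs[j], docs)).ToArray();
        double dot = a.Zip(b, (x, y) => x * y).Sum();
        double norm = Math.Sqrt(a.Sum(x => x * x)) * Math.Sqrt(b.Sum(x => x * x));
        return norm == 0.0 ? 0.0 : dot / norm;
    }
}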
Using the code
The main class is TFIDFMeasure. This is the testing code:
void Test(string[] docs, int i, int j)
{
    // docs is a collection of parsed documents
    StopWordsHandler stopWords = new StopWordsHandler();
    TFIDFMeasure tf = new TFIDFMeasure(docs);
    float simScore = tf.GetSimilarity(i, j);
    // simScore is the similarity of the two documents
    // at positions i and j, respectively
}
This library also includes stemming (the Martin Porter algorithm) and N-gram text generation modules. If a token-based system does not work as expected, you can switch to an N-gram-based one. Thus, instead of expanding the list of tokens from the document, we will generate a list of N-grams, where N is a predefined number. That means we will hash into a table to find the counter for each N-gram, rather than for words (or tokens).
The extra N-gram based similarities (bi, tri, quad...-gram) also help you compare the result of the statistical-based method with the N-gram based method. Let us consider two documents as two flat
texts and then run the measurement to compare.
Example of some N-grams for the word "TEXT":
• uni(1)-gram: T, E, X, T
• bi(2)-gram: _T, TE, EX, XT, T_
• tri(3)-grams: _TE, TEX, EXT, XT_, T__
• quad(4)-grams: _TEX, TEXT, EXT_, XT__, T___
A string of length k will have k+1 bi-grams, k+1 tri-grams, k+1 quad-grams, and so on (here '_' marks the boundary padding character).
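A padded N-gram generator consistent with those counts could be sketched as follows (again illustrative rather than the library's actual code; it reuses the usings from the sketch earlier in this article):

// One pad character on the left and n-1 on the right yields exactly
// k+1 N-grams for a word of length k (for n >= 2).
static IEnumerable<string> NGrams(string word, int n)
{
    string padded = "_" + word + new string('_', n - 1);
    for (int i = 0; i + n <= padded.Length; i++)
        yield return padded.Substring(i, n);
}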
Point of interest
No complex techniques were used; I only utilized hashtable indexing and array binary search to solve this problem. The N-gram-based text similarity also gives interesting results.
Articles worth reading | {"url":"http://www.codeproject.com/Articles/12098/Term-frequency-Inverse-document-frequency-implemen?msg=4493370","timestamp":"2014-04-16T13:19:57Z","content_type":null,"content_length":"120478","record_id":"<urn:uuid:24c5ade0-648a-4611-b2d8-82641af3281f>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00063-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] "Geometrical ways of reasoning" and visual proofs
Peter Smith ps218 at cam.ac.uk
Sun Nov 28 06:00:19 EST 2004
Those interested in this topic (which has come up again in the thread on
Shapiro on formal and natural languages) might find the following
Mateja Jamnik, Mathematical Reasoning with Diagrams (CSLI publications,
An attractively written and accessible short book (Jamnik is a computer
scientist, now in Cambridge). Here's the blurb from the CSLI website, which
will give you the flavour:
Theorems in automated theorem proving are usually proved by formal logical
proofs. However, there is a subset of problems which humans can prove by
the use of geometric operations on diagrams, so called diagrammatic proofs.
This book investigates and describes how such diagrammatic reasoning about
mathematical theorems can be automated.
Concrete, rather than general diagrams are used to prove particular
instances of a universal statement. The "inference steps" of a diagrammatic
proof are formulated in terms of geometric operations on the diagram. A
general schematic proof of the universal statement is induced from these
proof instances by means of the constructive omega-rule. Schematic proofs
are represented as recursive programs which, given a particular diagram,
return the proof for that diagram. It is necessary to reason about this
recursive program to show that it outputs a correct proof. One method of
confirming that the abstraction of the schematic proof from the proof
instances is sound is proving the correctness of schematic proofs in the
meta-theory of diagrams. The book presents an investigation of these ideas
and their implementation in the system, called Diamond.
Dr Peter Smith: Faculty of Philosophy, University of Cambridge
www.logicbook.net | www.godelbook.net
(for the "LaTeX for Logicians" page)
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2004-November/008590.html","timestamp":"2014-04-18T10:35:52Z","content_type":null,"content_length":"4448","record_id":"<urn:uuid:c6001dca-d6fb-425a-9bec-5fc5d10280d7>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00369-ip-10-147-4-33.ec2.internal.warc.gz"} |
Newest 'gamma-function binomial-coefficients' Questions
The following summation turned up in the course of my research: $$S_n=\sum_{k=0}^n {n \choose k}\lambda^k P(k,t)$$ where $P(k,t)=\frac{1}{\Gamma(k)}\int_{0}^t e^{-x}x^{k-1}dx$ is the lower ...
Problem Is it possible to simplify/rewrite the following expression, preferably without explicit sums, such that it can be computed without numerical issues when the $n_*$ are in the range of ... | {"url":"http://mathoverflow.net/questions/tagged/gamma-function+binomial-coefficients","timestamp":"2014-04-19T20:02:01Z","content_type":null,"content_length":"34708","record_id":"<urn:uuid:b436549f-488e-4ecf-87ed-856121a7bebe>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00363-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quartic Root Calculator - Computes complex and real roots for any quartic polynomial
A polynomial, P(x), has a factor of (x - r) if and only if P(r) = 0. Then r is said to be a zero of the polynomial.
Every quartic polynomial, P(x), with leading coefficient a has a factorization of the form:
P(x) = a(x - r[1])(x - r[2])(x - r[3])(x - r[4])
where the roots, r[i], can be duplicates.
If P(x) has real coefficients (as in this calculator), and if x is a complex zero of P(x), then the complex conjugate of x is also a zero of P(x). A quartic polynomial can have four real zeros, or
two real zeros and one pair of complex zeros, or two pairs of complex zeros.
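For readers who want to reproduce the computation, one standard numerical approach (not necessarily the one this calculator uses internally) is to take the eigenvalues of the polynomial's companion matrix, which is what NumPy's roots routine does:

import numpy as np

# Coefficients of a*x^4 + b*x^3 + c*x^2 + d*x + e; example values only.
coeffs = [1.0, -2.0, 3.0, -4.0, 5.0]

# np.roots forms the companion matrix of the polynomial and returns all
# four roots, real and complex.
print(np.roots(coeffs))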
Copyright © 2004, Stephen R. Schmitt | {"url":"http://www.convertalot.com/quartic_root_calculator.html","timestamp":"2014-04-21T02:00:13Z","content_type":null,"content_length":"25472","record_id":"<urn:uuid:5ae04450-ff90-4484-9574-3bd6e156e50a>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00321-ip-10-147-4-33.ec2.internal.warc.gz"} |
ORBi: Ruwet Christel - Robustness in ordinal regression
Robustness in ordinal regression
Ruwet, Christel [Université de Liège - ULg > Département de mathématique > Statistique mathématique >]
Haesbroeck, Gentiane [Université de Liège - ULg > Département de mathématique > Statistique mathématique >]
Croux, Christophe [ > > ]
18th Annual Meeting of the Belgian Statistical Society
13 October 2010 to 15 October 2010
[en] Ordinal regression ; Logistic discrimination ; Robustness ; Weights ; Diagnostic plot
[en] Logistic regression is a widely used tool designed to model the success probability of a Bernoulli
random variable depending on some explanatory variables. A generalization of this binary model is the multinomial case, where the dependent variable has more than two categories. When these
categories are naturally ordered (e.g. in questionnaires where individuals are asked whether they strongly disagree, disagree, are indifferent, agree or strongly agree with a given statement), one
speaks about ordered or ordinal regression. The classical technique for estimating the unknown parameters is based on Maximum Likelihood estimation (e.g. Powers and Xie, 2008 or Agresti, 2002).
However, as Albert and Anderson (1984) showed in the binary context, Maximum Likelihood
estimates sometimes do not exist. Existence conditions in the ordinal setting, derived by Haberman in a discussion of McCullagh’s paper (1980), as well as a procedure to verify that they are
fulfilled on a particular dataset will be presented.
On the other hand, Maximum Likelihood procedures are known to be vulnerable to contamination in the data. The lack of robustness of this technique in the simple logistic regression setting has
already been investigated in the literature (e.g. Croux et al., 2002 or Croux et al., 2008). The breakdown behaviour of the ML-estimation procedure will be considered in the context of ordinal
logistic regression. A robust alternative based on a weighting idea will then be suggested and compared to the classical one by means of their influence functions. Influence functions can be used to
construct a diagnostic plot allowing to detect influential observation for the classical ML procedure (Pison and Van Aelst, 2004). | {"url":"http://orbi.ulg.ac.be/handle/2268/86422","timestamp":"2014-04-19T18:24:50Z","content_type":null,"content_length":"14924","record_id":"<urn:uuid:c81834f1-aa19-41df-aad6-a96577a5a716>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pascal Help!
Hi guys,
Been doing Pascal at college and have been told to convert a number into binary. I think I've got the idea of how to do it; it's just that the code seems to be full of errors which I just can't find.
I was using Free Pascal; I couldn't copy the code, so I took a screenshot.
So, if anyone could tell me where I'm going wrong, that would be great!
Originally Posted by mit111
Hi guys,
Been doing Pascal at college and have been told to convert a number into binary. I think I've got the idea of how to do it; it's just that the code seems to be full of errors which I just can't find.
The main problem I see is that you aren't actually saving the value you are reading into the variable input1; you dropped the trailing '1'. The reason you may not be getting a compiler error is
because input is the name of the standard console, equivalent to stdin in C and Perl or cin in C++. Since you are giving ReadLn() an input stream but no variables, it doesn't actually read
anything. To make things a bit less ambiguous, you might rename input1 to something like value or just simply n.
As a stylistic note, any time you have a series of variables with names like foo1, foo2, foo3, etc., you generally should simply replace them with an array:
answer: array [1..7] of Integer;
answer[1] := input1 mod 2;
input1 := input1 div 2; { I'll explain this in a moment }
answer[2] := input1 mod 2;
{ ... }
Also, as shown here, the successive values need to be from the division of the current version of the number, not the modulo; the modulo of any number by 2 is always either 0 or 1, after all,
with 0 values being even and 1 values being odd (this is the basis of the algorithm).
There's another problem with this, in that you are printing the bits in the wrong order. In the algorithm you are using, the first bit returned is the least significant, i.e., the rightmost bit.
The second bit it returns is the twos bit, the third the fours, and so on. You can fix this by reversing either the order you produce them in or, more usefully, the order in which you
print them.
Alternately, you can dispense with all but one of the temporary values entirely, and instead re-write the function so that it goes through the number recursively, getting the current bit, then
calling itself, then printing the bit. This is the classic form of the algorithm, in fact (in pseudo-code):
procedure printBinary(x) is
    if x is less than zero
        print a negative sign
        x := -x
    if x is equal to zero
        return from the function
    bit := x mod 2
    printBinary(x div 2)
    print bit
end printBinary
Done this way, you can print binary values for numbers up to 2,147,483,647 accurately. (BTW, the maximum value a 7-bit number can hold is 127, not 134.)
(Converting this algorithm into Pascal is left as an exercise. As a hint, I will tell you that you'll need to write it as a procedure - or perhaps a function, one which returns, say, a string -
and that you'll want to read the value in first before calling it.)
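(For reference, here is one way the finished program might look in Free Pascal; this is an illustrative sketch of the pseudo-code above, not the original poster's code:)

program BinaryPrint;

procedure PrintBinary(x: LongInt);
begin
  if x < 0 then
  begin
    Write('-');
    x := -x;
  end;
  if x = 0 then
    Exit;                      { recursion bottoms out here }
  PrintBinary(x div 2);        { recurse first, so the high bit prints first }
  Write(x mod 2);
end;

var
  n: LongInt;
begin
  Write('Enter a number: ');
  ReadLn(n);
  PrintBinary(n);              { note: prints no digits at all for n = 0 }
  WriteLn;
end.

As in the pseudo-code, the recursive call comes before the Write, which is what reverses the order of the bits.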
Finally, regarding the difficulty of cutting and pasting code from FreePascal, you should recall that a Pascal program is just a text file; you could, after having closed it in FreePascal, opened
the file in a text editor such as Notepad, and cut and paste it from there. Alternately, you can use DevPascal as your development system, which uses the Free Pascal compiler but gives a
Windows-based editing environment to work in.
Last edited by Schol-R-LEA; September 30th, 2006 at 09:12 AM.
Rev First Speaker Schol-R-LEA;2 JAM LCF ELF KoR KCO BiWM TGIF
#define KINSEY (rand() % 7) λ Scheme is the Red Pill
Scheme in Short • Understanding the C/C++ Preprocessor
Taming Python • A Highly Opinionated Review of Programming Languages for the Novice, v1.1
FOR SALE: One ShapeSystem 2300 CMD, extensively modified for human use. Includes s/w for anthro, transgender, sex-appeal enhance, & Gillian Anderson and Jason D. Poit clone forms. Some wear.
$4500 obo. tverres@et.ins.gov
Wow! Thanks for helping me understand it a little better.
Edit: I've made the file now and it compiles without any bugs, the only problem is, once I input my number the window closes, is there any way of stopping this so I can see if my code works?
procedure pause;
begin  { requires the crt unit for readkey }
  writeln('Press any key to continue...');
  readkey;
end;
Like philosophy or interested in spirituality? Philosophorum.
Game Dev Experts Forums
Foresight Linux - Because your desktop should be cool!
Linux FAQ FedoraFAQ UbuntuGuide
Originally Posted by LinuxPenguin
procedure pause;
...
readkey;
Yep, or use the readln() function. I believe the only difference is that readkey waits until you press any key, while readln() ignores all keys except the ENTER key.
EDIT: Also, instead of taking screenshots of your compiler, open the source code in WordPad and copy it from there.
{"url":"http://forums.devshed.com/programming-languages/389778-pascal-help-last-post.html","timestamp":"2014-04-17T07:15:55Z","content_type":null,"content_length":"73607","record_id":"<urn:uuid:ea1ffd3b-2c83-46e4-9300-6f1a8484697c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
The C! is a way to reward a noder for submitting a particularly good writeup whether it be factual, humorous or artistic. Basically, a way of saying you appreciated the writeup and wanted to let the
author know and give the writeup more attention than it would have had otherwise. C!ing someone's writeup earns it a place in the Cool Archive and the front page. E2 will also reward that user with
20 XP.
You get the power to C! writeups beginning at 4th level. A writeup can accrue any number of C!s but a user can C! any given writeup only once. Further C!s must be given by other users. For details on
C!s and the rest of the level system see the voting/experience system document.
Cool Man Eddie will msg you in the Chatterbox every time somebody C!s one of your writeups, telling you who did it. If you don't want Eddie turned on you can uncheck a box in User settings. You can
also enable a "cool safety" so you don't accidentally C! somebody's writeup.
E2 Glossary | {"url":"http://everything2.com/title/C%2521","timestamp":"2014-04-18T00:35:31Z","content_type":null,"content_length":"29175","record_id":"<urn:uuid:b6805be5-211a-4594-becc-474b688e67c8>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00657-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tower of Hanoi Problem and skipping a move
September 18th 2011, 06:43 PM #1
Tower of Hanoi Problem and skipping a move
The Tower of Hanoi problem:
According to legend, a certain Hindu temple contains three thin diamond poles ($A$, $B$, $C$, standing in a row so that $A$ is adjacent to $B$ and $B$ is adjacent to $C$, but $A$ is not adjacent to $C$). On one of them, at
the time of creation, God placed $64$ golden disks that decrease in size as they rise from the base.
The priests of the temple work unceasingly to transfer all the disks one by one from the first pole to one of the others, but they must never place a larger disk on top of a smaller one, and they may move disks only between adjacent poles. Let
$a_n = \left[ \begin{array}{c} \text{the minimum number of moves} \\ \text{needed to transfer a tower of n} \\ \text{ disks from pole A to pole C} \end{array} \right ]$
$b_n = \left[ \begin{array}{c} \text{the minimum number of moves} \\ \text{needed to transfer a tower of n} \\ \text{ disks from pole A to pole B} \end{array} \right ]$
Find recurrence relations for $a_n$ and $b_n$.
Solving the problem to find $a_n$ (the book did this):
\begin{align*} a_n =\; & a_{n-1} \text{ (moves to move the top } n-1 \text{ disks from pole A to pole C)} \\ & + 1 \text{ (move to move the bottom disk from pole A to pole B)} \\ & + a_{n-1} \text{ (moves to move the top disks from pole C to pole A)} \\ & + 1 \text{ (move to move the bottom disk from pole B to pole C)} \\ & + a_{n-1} \text{ (moves to move the top disks from pole A to pole C)} \\ =\; & 3a_{n-1} + 2 \end{align*}
Solving the problem to find $b_n$ (I did this):
\begin{align*} b_n =\; & b_{n-1} \text{ (moves to move the top } n-1 \text{ disks from pole A to pole C)} \\ & + 1 \text{ (move to move the bottom disk from pole A to pole B)} \\ & + b_{n-1} \text{ (moves to move the top disks from pole C to pole B)} \\ =\; & 2b_{n-1} + 1 \end{align*}
Why is my solution for $b_n$ wrong? For $b_n$ I did the same thing as in finding $a_n$, namely skipping the move from $A$ to $B$ when moving from $A$ to $C$; I counted that move as $1$, just as the book did.
So why am I wrong?
Also, why, in finding $a_n$ for moving the disks from $A$ to $C$, did they skip counting the move that takes the disks first from $A$ to $B$?
Re: Tower of Hanoi Problem and skipping a move
Unless the rules contain some restriction that takes into account how close the poles are, $a_n$ should be equal to $b_n$. The recurrence equation is $a_n=2a_{n-1}+1$, and similarly for $b_n$.
Re: Tower of Hanoi Problem and skipping a move
I am bit confused. The poles are erected like Figure 1.
The rules for moving the disks are:
You can move disks from pole $A$ to pole $C$ only via pole $B$; you can't move disks directly from pole $A$ to pole $C$.
Same rule goes for pole $C$. You can't move disk directly from pole $C$ to Pole $A$. You've to go via Pole $B$.
You can move disks directly from Pole $A$ to pole $B$. Also you can move disks directly from Pole $C$ to pole $B$
You can move disks from Pole $B$ to Pole $A$ or to Pole $C$ directly . And you can't move larger disks on smaller disks.
You said that:
Now $a_1 = 2$ because if you have $1$ disk in pole $A$ it will take $2$ moves to move $1$ disk from pole $A$ to pole $C$ via pole $B$. you can't move a disk from pole $A$ to pole $C$ directly
according to the rule stated above.
$a_2 = 8$ if you look at the Figure 1 you need minimum $8$ moves to move $2$ disks from pole $A$ to pole $C$ but you said $a_n=2a_{n-1}+1$
If we plug in the value of $n = 2$ we get:
$a_2 = 2a_{2-1} + 1 = 2a_{1} + 1 = 2 \cdot 2 + 1 = 5 \neq 8 \text{ (the value we got previously)}$
So shouldn't $a_n = \left[ \begin{array}{c} \text{the minimum number of moves} \\ \text{needed to transfer a tower of n} \\ \text{ disks from pole A to pole C} \end{array} \right ] = 3a_{n-1} + 2$?
The same goes for $b_n$
You said that $b_n = \left[ \begin{array}{c} \text{the minimum number of moves} \\ \text{needed to transfer a tower of n} \\ \text{ disks from pole A to pole B} \end{array} \right ] = 2b_{n-1} + 1$.
Base condition is $b_1 = 1$ because to move a disk from pole $A$ to pole $B$ you need minimum $1$ move according to the rules above.
Then if you look at Figure 1 it takes $4$ minimum moves to move $2$ disks from pole $A$ to pole $B$. So $b_2 = 4$
If we replace $n$ with $2$ according to your equation:
$b_2 = 2b_{2-1} + 1 = 2b_1 + 1 = 2 \cdot 1 + 1 = 3 \neq 4$
You can't take $3$ moves to move $2$ disks from pole $A$ to pole $B$; it's impossible under the rules stated above.
Can you kindly shed light on what you said? Sorry, I didn't get it. Also, is it possible to get an answer to the question I posted first in this thread?
Last edited by x3bnm; September 19th 2011 at 08:13 AM.
Re: Tower of Hanoi Problem and skipping a move
Sorry, I missed the adjacency requirement. It's not in the standard Towers of Hanoi rules.
The equation for $b_n$ should be $b_n=a_{n-1}+b_{n-1}+1$ since the first part is moving disks from A to C, which takes $a_{n-1}$ moves.
Re: Tower of Hanoi Problem and skipping a move
I know the subject should be "Modified Tower of Hanoi Problem....." Sorry for misunderstanding.
And thanks for the answer.
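(A quick way to check these recurrences, not part of the original thread, is a brute-force breadth-first search over the three-peg, adjacent-moves-only state space; the helper below is an illustrative sketch:)

from collections import deque

def min_moves(n, start, goal):
    """Fewest legal moves taking n disks from peg start to peg goal when
    moves are allowed only between adjacent pegs (0-1 and 1-2)."""
    state = (start,) * n              # state[i] = peg holding disk i (0 = smallest)
    target = (goal,) * n
    seen = {state}
    queue = deque([(state, 0)])
    while queue:
        s, d = queue.popleft()
        if s == target:
            return d
        for a, b in ((0, 1), (1, 0), (1, 2), (2, 1)):   # adjacent moves only
            on_a = [i for i, p in enumerate(s) if p == a]
            if not on_a:
                continue
            disk = min(on_a)          # only the top (smallest) disk can move
            if any(p == b for i, p in enumerate(s) if i < disk):
                continue              # cannot land on a smaller disk
            t = s[:disk] + (b,) + s[disk + 1:]
            if t not in seen:
                seen.add(t)
                queue.append((t, d + 1))

a, b = {0: 0}, {0: 0}
for n in range(1, 6):
    a[n] = 3 * a[n - 1] + 2                    # recurrence for A -> C
    b[n] = a[n - 1] + b[n - 1] + 1             # corrected recurrence for A -> B
    assert a[n] == min_moves(n, 0, 2)
    assert b[n] == min_moves(n, 0, 1)
print("recurrences match brute force for n = 1..5")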
{"url":"http://mathhelpforum.com/discrete-math/188316-tower-hanoi-problem-skipping-move.html","timestamp":"2014-04-19T20:59:17Z","content_type":null,"content_length":"65415","record_id":"<urn:uuid:76ae877a-8f69-40f2-bd09-599144e5b3f0>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tools Discussion: Research Area, need help in generating feasible correlation matrices
Discussion: Research Area
Topic: need help in generating feasible correlation matrices
Subject: need help in generating feasible correlation matrices
Author: Sione
Date: Jan 15 2004
Your question needs to be more specific.
First, what software (or package) do you use?
Second, what is the meaning of 'feasible correlation matrices'? Is it the correlation between 2 vectors (A, B), or do you mean something else? If it is the correlation between 2 vectors, then use
the following algorithm:
// C is the covariance matrix, X is the data matrix
C = COV(X);
// D is the correlation matrix of X; 'i' and 'j' index its entries
D = CORRCOEF(X);
// equivalently, entrywise:
D(i,j) = C(i,j)/SQRT(C(i,i)*C(j,j));
Third, what is the purpose of requiring the matrices to be real, symmetric, and positive semi-definite? Is it for algorithm testing purposes or something else?
I am not aware of any algorithm that needs to produce a matrix meeting all the requirements listed above. However, you could use an EIGENVALUE DECOMPOSITION algorithm to solve for the eigenvalues, though this approach can give you complex as well as real eigenvalues. I use the EIGENVALUE DECOMPOSITION algorithm to solve for polynomial roots in my "Polynomial Roots Applet" found here:
The eigenvalue algorithm will give the roots of a polynomial of any order, both real and complex, but I filtered out the complex roots for my applet just to display only REAL ROOTS. EIGENVALUE DECOMPOSITION requires your matrix to be SQUARE; otherwise, it would not be able to solve your system of linear equations.
I do not see why you are using QR to test whether a matrix is real and positive semidefinite. QR factorizes a matrix (e.g., X):
[Q,R] = QR(X);
The 2 matrices produced by QR, as in the example shown above, are the 2 factors of matrix X, just as 2 and 7 are factors of the number 14. The matrix factor Q is orthogonal, i.e., Q*Q' = I, which means Q times the transpose of Q equals the identity matrix. The matrix factor R is upper triangular. If you multiply the 2 factors, you will get your original matrix X:
X = Q*R;
I have never tried to generate positive semidefinite matrices before, but perhaps you should seek help somewhere other than this forum.
I think that this forum is mainly for high school level mathematics.
I would suggest 2 numerical computing forums to which I am subscribed. Go to these forums' sites and register so that you will be able to post questions and also receive other posts from the list.
1) Scitech ( Scientific Computing mailing list from Apple)
2) JAMA (Java Matrix Algebra and Numerical Computing group mailing list from
NIST - National Institute of Standards & Technology)
JAMA contains the algorithm for EIGENVALUE DECOMPOSITION if you program in JAVA; however, MATLAB has the algorithm too. JAMA is similar to MATLAB; in fact, it is MatLab written in Java. JAMA was developed by NIST and MathWorks (the MatLab creators).
I hope this gives you a direction of where to seek further help.
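(Not part of the original reply: one standard way to actually generate a feasible correlation matrix, that is, a real, symmetric, positive semi-definite matrix with unit diagonal, is to build a Gram matrix from a random matrix and rescale it; a NumPy sketch:)

import numpy as np

rng = np.random.default_rng(0)       # arbitrary seed

n = 5
A = rng.standard_normal((n, n))

# A @ A.T is symmetric positive semi-definite by construction; dividing
# by the outer product of the square roots of its diagonal gives unit
# diagonal, i.e., a feasible correlation matrix.
S = A @ A.T
d = np.sqrt(np.diag(S))
R = S / np.outer(d, d)

print(np.allclose(R, R.T))                       # symmetric
print(np.all(np.linalg.eigvalsh(R) >= -1e-12))   # PSD up to rounding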
Discussion Help | {"url":"http://mathforum.org/mathtools/discuss.html?context=dtype&do=r&msg=11029","timestamp":"2014-04-16T08:32:01Z","content_type":null,"content_length":"19789","record_id":"<urn:uuid:12253a37-5a5b-433f-88ca-c59eacfa8d80>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00444-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shock waves, part 3
In a previous post I wrote about shock waves in fluids, including the case that they are described by the Einstein-Euler equations for a self-gravitating fluid in general relativity. I mentioned
there a result of Fredrik Ståhl and myself proving that smooth solutions of the Einstein-Euler system can lose regularity in the course of their time evolution. This was done in the framework of
spacetimes with plane symmetry. Here I want to describe some complementary results which were recently obtained by Philippe LeFloch and myself. These new results concern the existence of global weak
solutions in situations where shocks may be present. This work is done under the assumption of Gowdy symmetry, which is weaker than plane symmetry. It allows the presence of gravitational waves,
which plane symmetry does not. It uses time coordinates different from the constant mean curvature (CMC) coordinate used in the work with Ståhl. This difference in the time coordinates makes it
difficult to relate the results of the two papers directly. It would be interesting to adapt the results of either of these papers to the time coordinates of the other.
In the paper with LeFloch we use coordinates (areal, conformal) which have previously been used in analysing analogous problems for vacuum spacetimes or spacetimes where the matter content is
described by collisionless kinetic theory. A big difference is the weak regularity. One effect of this is that while in the given context it has been possible to prove global existence theorems for
the initial value problem, nothing is known about the uniqueness of the solutions in terms of initial data. It should, however, be noted that in the corresponding analytical framework uniqueness is
not even known for a one-dimensional non-relativistic fluid without gravity. Another new element introduced by the use of weak solutions is that it is only possible to evolve in one time direction.
This model is not reversible, a fact implemented mathematically by the imposition of entropy inequalities. One of the results obtained concerns a forever expanding cosmological model. The other one
concerns a contracting model which ends in a singularity. The second is not a global existence result in the conventional sense but it can be thought of as saying that the solution can be extended
until certain specific things happen (a big crunch singularity).
To finish this post I want to indicate the type of regularity of the solutions obtained. I only state this roughly – more precise information can be found in the paper. The energy density and
momentum density of the fluid is integrable in space, with the $L^1$ norms locally bounded in time. The quantities parametrizing the spacetime metric have first order derivatives which are square
integrable in space. These conditions allow for jump discontinuities in the energy density which is what comes up in shock waves. It also allows singularities of Dirac $\delta$ type in the metric,
corresponding to what are often called impulsive gravitational waves.
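(Stated schematically, in placeholder notation rather than the paper's own: writing $\rho$ for the energy density and $g$ for a typical metric coefficient, the regularity class described above is roughly $\rho(t,\cdot) \in L^1$, with the $L^1$ norm locally bounded in $t$, and $\partial_\theta g(t,\cdot) \in L^2$.)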
Uwe Brauer Says:
September 25, 2012 at 12:42 pm | Reply
I just came across this post of yours. You write:
“It should, however, be noted that in the corresponding analytical framework uniqueness is not even known for a one-dimensional non-relativistic fluid without gravity.”
oops I’d say why can’t the results of Bressan et all be applied?
Uwe Brauer
• hydrobates Says:
September 25, 2012 at 1:31 pm | Reply
To my knowledge the uniqueness results of Bressan are limited to the case of systems of conservation laws which are genuinely nonlinear. Thus, they do not apply to the full Euler equations, which
have one set of linearly degenerate characteristics, although they do apply to the isentropic case. This is what I was thinking of when I wrote this but I did not write it explicitly. If what I
thought is out of date and there are now Bressan-type results applying to the full Euler equations then please let me know.
Uwe Brauer Says:
September 25, 2012 at 1:40 pm | Reply
OK, some sort of misunderstanding then.
I *was* thinking of the _isentropic_ case. In your post you didn't say whether it is isentropic, but I assumed it was (I did not check your work with LeFloch, although I read it some time ago).
But now that you mentioned it, I will check whether Bressan’s results have been generalized to the case with a linear degenerate equation. If I find out I let you know. | {"url":"http://alanrendall.wordpress.com/2010/04/06/shock-waves-part-3/","timestamp":"2014-04-20T08:15:26Z","content_type":null,"content_length":"49432","record_id":"<urn:uuid:197360c7-3a1b-453b-a39b-66e1487860ec>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00322-ip-10-147-4-33.ec2.internal.warc.gz"} |
Archaeological Sampling Strategies
Mary Richardson
Grand Valley State University
Byron Gajewski
The University of Kansas Medical Center
Journal of Statistics Education Volume 11, Number 1 (2003)
Copyright © 2003 by Mary Richardson and Byron Gajewski, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written
consent from the authors and advance notification of the editor.
Key Words: Active learning; Advanced Placement statistics; Introductory statistics; Simulation.
This paper describes an interactive project developed to use for teaching statistical sampling methods in an introductory undergraduate statistics course, an Advanced Placement (AP) statistics
course, or, with adaptation, in a statistical sampling course or a statistical simulation course. The project allows students to compare the performance of simple random sampling, stratified random
sampling, systematic random sampling, and cluster random sampling in an archaeological setting.
1. Introduction
In this paper, we will discuss an interactive project that we use to illustrate properties of statistical sampling techniques in an introductory level statistics course or, with adaptation, in
higher-level statistics courses. The project allows students to compare the performance of different statistical sampling techniques in an archaeological context.
The project described here was initially developed for use in an undergraduate general education introductory statistics course. The students in this course have a limited mathematical background,
with most having previously taken only basic algebra. We believe that some statistical concepts are generally difficult for the introductory students to learn in a lecture setting. Some statisticians
even argue that lectures are no longer effective in an age where fast paced technology dominates our culture. Cobb (1992) states: “Shorn of all subtlety and led naked out of the protective fold of
educational research literature, there comes a sheepish little fact: lectures don’t work nearly as well as many of us would like to think.” and in describing the Activities-Based Statistics Project,
Scheaffer (1996) states: “Their fast-paced world of action movies, rapid-fire TV commercials and video games does not prepare today’s students to sit and absorb a lecture, especially on a supposedly
dull subject like statistics. To capture the interest of these students, teaching must move away from lecture-and-listen to innovative activities that engage students in the learning process.”
In the introductory course, we use many hands-on interactive projects to illustrate key statistical concepts. The projects are completed collaboratively by groups of students during one-hour class
periods. The use of hands-on explorations of statistical concepts helps to get the students more involved in the learning process and has been a very effective, fun way for the students to learn
introductory statistics.
In addition to discussing the use of the project in the introductory course, we will discuss extensions of the project that can be used in an applied probability and simulation course and/or a
statistical sampling course. We will also attempt to provide the reader with some realistic examples of statistical sampling applied in archaeological settings.
2. The Introductory Course Project
2.1 Background
As a motivation for our project, consider the following scenario. Three years ago, funding for a new campus library was awarded to a large public institution of higher learning. However, construction
was delayed during the initial digging when archaeological artifacts were discovered. Specifically, human bones from an old grave were unearthed. This discovery required the university to perform an
exhaustive, costly archaeological study on the excavation site of the library.
There are two possible ways the university could have avoided this costly, and somewhat embarrassing, controversy. The first would be to carefully study the legal documents, newspaper archives and
all other information regarding the excavation site. This option is thorough, but time consuming, and it can take years to complete. Therefore, it is not a practical option. The second option would
be to perform a kind of strategic digging, specifically to take a representative sample of the excavation site and, from this, determine if there are, in fact, archaeological “finds.”
According to Orton (2000), the term site has many meanings. The example given above is of a development site - an area of land subject to some form of proposed commercial, agricultural or
infrastructural development. Orton (2000) states that for a development site, the goal is to detect the presence and extent of any significant archaeological remains, and to either record them before
damage or destruction, or to mitigate the damage by redesign of the proposed development. For an archaeological site, the goal may be to determine the extent and character of a site (perhaps newly
discovered in a regional survey), or there may be a more site-specific research design.
According to Lizee and Plunkett (1994), one of the challenges that an archaeologist faces after the discovery of an excavation site is how to determine the locations within the excavation site that
will be dug in order to uncover artifacts. Obviously, digging everywhere within a site would be the maximal way to locate artifacts, but usually, time and resources do not allow for the total
excavation of a site. Archaeologists must develop cost and time-efficient strategies for digging.
The methods used to excavate a site may vary according to whether the site is largely invisible on the ground surface, or whether it has extensive visible remains (Orton 2000). Orton (2000) notes
that for some sites located in arid and semi-arid areas, such as the south-west USA, the visibility of archaeological remains on the surface is good. For these types of sites, fieldwalking, or
pedestrian survey can be employed and large areas can be covered at a reasonable cost in terms of time and labor. In other parts of the world (or of the USA), conditions at sites are completely
different. The land surface may be covered in grassland, arable crops, or forest, so that even the ground itself, let alone any archaeological remains, may not be visible.
According to Orton (2000), the idea of using probabilistic sampling methods explicitly in archaeological survey is usually attributed to Binford (1964). Binford stressed the idea of the region (a
collection of sites) as a natural unit of archaeological study, and, admitting the impossibility of total coverage at this scale, advocated probabilistic sampling techniques as the way of achieving
representative and reliable data with less than total coverage. Before we begin our discussion of the introductory course project, we discuss some relevant definitions and terminology for applying
statistical sampling in an archaeological context (extracted from Orton 2000).
Prior to excavation, a site must be divided into sampling units (excavation units). The units should cover the entire site, and they should not overlap. Frequently, the choice of units is not
obvious. In excavating a site, it would seem logical to proceed in stages, with the extensive use of non-invasive methods (such as fieldwalking) being followed by intensive sampling of “interesting”
areas by invasive methods (such as trial excavation in trenches or test-pits). However, this approach is fairly uncommon in practice, perhaps because it fits badly within the time constraints usually
involved in this sort of work. Typically, a site is either sampled in a purposive way, in that the digging is targeted on possible features (which have already been identified), or in a probabilistic
way, if little is known in advance about the site. If purposive sampling is used, the shape and size of the excavation units (trenches or test-pits) are likely to be determined by the nature of the
visible evidence that points to their location. For probabilistic sampling, the choice of excavation units is usually either 2 meter-wide machine-dug trenches, often 30 meters long, or hand-dug
test-pits, usually 1 meter or 2 meters square, although sizes up to 16 meters square are sometimes used. Trenches are flexible in design, cheap per area stripped and good at detecting features, but
are destructive and have a poor recovery rate for archaeological finds. Test-pits are good at detecting archaeological finds and are relatively non-destructive, but are labor-intensive, and therefore costly.
In order to use an archaeological setting to demonstrate the use of statistical sampling, we will assume that a site will be sampled probabilistically. Further, we assume that the excavation units
are test-pits.
2.2 Procedure
Prior to completing this project, students have been exposed to basic definitions and terminology related to statistical sampling and they have seen examples of simple random sampling, stratified
random sampling, systematic random sampling, and cluster random sampling. For completeness, we define each of the four sampling techniques and comment on the use of each technique in the sampling of
archaeological sites.
Simple random sampling is the foundation for all of the sampling techniques. Simple random sampling is such that each possible sample of size n units has an equal chance of being selected. Orton
(2000) notes that in an archaeological setting, some practitioners worry that a true simple random sample has the appearance of “bunching” and seek to avoid it. Systematic sampling is one way of
doing so.
Systematic random sampling requires the user to order the population units in some fashion, randomly select one unit from among the first k ordered units, and then select subsequent units by taking
every k^th ordered unit. Orton (2000) notes that one disadvantage with systematic sampling is that the sampling interval may, by misfortune, relate to some regularity in the site. For example, when
sampling grid squares on a map, systematic sampling may generate diagonal lines of sampled squares, which in turn may relate to natural topographical or geological features. But, there are situations
in which systematic sampling performs much better than simple random sampling. Archaeologists often seem to prefer systematic samples, partly because they appear to give “better coverage” or an “even
spread” of a site, and partly because they are easier and quicker to select. These points must be weighed against the possibility of the sampling interval matching some regularity in the data.
Stratified random sampling is simply forming subgroups of the population units and selecting a simple random sample of units from within each subgroup. For example, stratification might be warranted
if the archaeologist mandates that a representative sample be taken from the west end of a site as well as the east end of a site. Orton (2000) states that the possibility of stratification of a site
should always be considered. The definition of strata could be based on any property, or combination of properties, such as geology or elevation, or on any aspect that is thought likely to affect the
parameters under study, for example, the density of artifacts located in a site.
Cluster random sampling also requires the sampling units to be placed into subgroups. However, the subgroups are typically obtained from units close in location. A simple random sample of the
subgroups is then taken, and every unit within the selected subgroup is a part of the sample. For example, clustering by location of excavation units might be performed if the proximity of the units
will provide efficient use of a backhoe when digging in the excavation units.
Part 1
The introductory project is completed in two parts. We use Part 1 and Part 2 in sequence; however, Part 1 could be used without Part 2. Part 1 gives students an opportunity to practice the
mechanical aspects of performing the four sampling techniques. The estimated interactive completion time for Part 1 is 30 minutes. Part 2 expands on Part 1 and helps students take mechanical
knowledge of the sampling techniques one step further by requiring them to investigate the performance of the sampling techniques in differing scenarios. The estimated interactive completion time for
Part 2 is 50 minutes. After completing both parts of the project, students will be able to perform the four types of sampling and have an understanding of which of the sampling procedures should be
applied in populations with differing characteristics.
To begin Part 1, students are divided into groups of between two and four. Each student is given a copy of the Project Background sheet (see Appendix A.1), the Part 1 Worksheet (see Appendix A.2),
and a random number table. The problem (which was taken from Lizee and Plunkett (1994)) is formalized as follows. The Project Background sheet contains an initial map of an archaeological site. The
site is an area of approximately 6,400 square meters to be impacted by construction of a housing development. The area consists of a mature second growth forest of maple, ash, and oak trees. Since it
is both time and labor intensive to excavate the entire site, a sampling strategy must be developed. Working within budgetary and time constraints, archaeologists can only excavate part of the site
to determine the presence of buried artifacts. The site contains 100 8-by-8-meter excavation units (test-pits) and there is only enough time to dig in 20 of the test-pits. The map is shown in Figure 1
below (with an X representing a test-pit that contains artifacts or “finds”). On the initial map, we randomly assigned finds to 20 of the test-pits.
[Figure 1. Initial Map of an Archaeological Site: a 10 × 10 grid of test-pits, with an X marking each of the 20 test-pits that contain finds.]
Each classroom group is to use each of the four sampling strategies to select a sample of n = 20 test-pits from the site. To perform stratified random sampling, the site is divided into two equally
sized strata containing 50 test-pits each (using column 1 through column 5 of test-pits for stratum I and column 6 through column 10 of test-pits for stratum II). Ten test-pits are selected from each
stratum. Systematic random sampling is performed by rows, using the top row as ordered test-pit numbers 1 through 10 with the leftmost test-pit being 1 and the rightmost test-pit being 10, …, so that
the bottom row represents ordered test-pit numbers 91 through 100 with the leftmost test-pit being 91 and the rightmost test-pit being 100. Cluster random sampling is performed using the rows of
test-pits for clusters. We ask the groups to use uniform starting locations on a random number table (the same seeds on a calculator) in order to have a classroom discussion of the results of
selecting the different samples.
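For instructors who prefer software to a random number table, the four selections can be scripted. The following is a minimal Python sketch (our illustration, not part of the original project materials), assuming the conventions just described: test-pits numbered 1 through 100 row by row, strata formed from columns 1 through 5 and columns 6 through 10, and rows serving as clusters.

import random

N, n = 100, 20
pits = list(range(1, N + 1))

def simple_random_sample():
    # Every possible set of 20 test-pits is equally likely.
    return sorted(random.sample(pits, n))

def stratified_sample():
    # Stratum I: columns 1-5; stratum II: columns 6-10; 10 pits from each.
    stratum1 = [p for p in pits if (p - 1) % 10 < 5]
    stratum2 = [p for p in pits if (p - 1) % 10 >= 5]
    return sorted(random.sample(stratum1, 10) + random.sample(stratum2, 10))

def systematic_sample(k=5):
    # Random start among the first k ordered pits, then every k-th pit.
    start = random.randint(1, k)
    return list(range(start, N + 1, k))

def cluster_sample():
    # Rows are the clusters: choose 2 of the 10 rows, keep all 10 pits in each.
    rows = random.sample(range(10), 2)
    return sorted(p for r in rows for p in range(10 * r + 1, 10 * r + 11))

Each function returns 20 test-pit numbers, so the classroom discussion of the resulting estimates proceeds exactly as with the random number table.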
The goal is to use each sample of 20 test-pits to estimate the total number of test-pits containing finds (which, for brevity, we will refer to as the “total number of finds”), denoted by Y. After
students have selected their simple random sample from Site 1, we ask them to explain how to use the sample number of test-pits containing finds to estimate the total number of finds at this site.
For each sampling technique, since 1/5 of the site's test-pits are being sampled, five times the number of finds out of the 20 sampled test-pits serves as an estimate of the total number of finds; for example, a sample containing 6 finds yields an estimated total of 30.
The motivation behind estimating the total number of finds at an archaeological site is that the estimated total can be used to decide whether the site contains significant archaeological remains. Orton (2000) notes that in some cases, any archaeological remains may be deemed "significant," while in other cases the remains may have to occupy a specified total area, or a specified proportion of a site, before they can be called significant.
Part 2
To begin Part 2, students are divided into groups of between two and four. Each student is given a copy of the Part 2 Worksheet (see Appendix A.3) and two sticky notes. The problem is formalized as
follows. Through artificial examples, students are to explore scenarios where one sampling technique yields relatively more precise estimates of the total number of finds at a site than does another
sampling technique. Students compare the performance of the four sampling techniques by examining three different layouts of archaeological sites. It is assumed that each site contains 100 total
test-pits and that 20 of the test-pits contain artifacts.
For Comparison 1, we place an X in the appropriate test-pits on a blank grid in order to illustrate the layout of an archaeological site for which repeated stratified random sampling of 20 test-pits
would most likely produce a less variable (more precise) estimate of the total number of artifact finds at the site than would repeated simple random sampling of 20 test-pits. Once again, we use
column 1 through column 5 of test-pits for stratum I and column 6 through column 10 of test-pits for stratum II. The layout for this site is shown in Figure 2.
[Figure 2. Layout for Comparing Simple Random Sampling to Stratified Random Sampling: a 10 × 10 grid in which all 20 finds lie within stratum I (columns 1 through 5); stratum II (columns 6 through 10) contains no finds.]
For this comparison, we do not use equal sample sizes from the two strata. Our motivation for sampling from the strata at different rates is based on an attempt to realistically illustrate the use of
stratification in archaeological sampling. Orton (2000) discusses a case study for which an urban site contains clearly visible structures and notes that many urban sites fall into this category,
especially if they have been deserted and not re-occupied or built over. Orton (2000) states that for urban sites, stratification may be more useful and more feasible than in other situations. A site
may be divisible into zones (e.g. religious, industrial, domestic), which can be demarcated as statistical strata and sampled from at different rates according to the nature of the research
questions. Redman (1987) discusses an excavation of a site at Shoofly Village in Arizona that was completed in three stages. In the first stage, stratification was used, with an area inside of an
enclosure wall being sampled from at four times the intensity of an area immediately outside.
With this case study in mind, we instruct students to select sixteen test-pits from stratum I and four test-pits from stratum II and we ask them to explain how to use the sample number of test-pits
containing finds to estimate the total number of finds at the site. For the stratified samples, since 16/50 of the test-pits are being sampled from stratum I (and stratum II contains no finds), for
each selected sample, 50/16 times the number of finds out of the sixteen sampled test-pits in stratum I serves as an estimate of the total number of finds at the site.
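In general (a standard sampling result, stated here for completeness), the stratified expansion estimator scales each stratum's count of sampled finds by the inverse of that stratum's sampling fraction:

Ŷ = Σ[h] (N[h] / n[h]) x[h],

where x[h] is the number of sampled test-pits in stratum h that contain finds. With N[1] = 50, n[1] = 16, and no finds possible in stratum II, this reduces to (50/16) x[1], as described above.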
We begin our discussion of this comparison by asking students to examine Site 2 and state whether they think that repeated stratified random sampling of test-pits from this site will be likely to
produce less variable estimates of the total number of finds at the site. It has been our experience that this is a difficult question for students to answer. They are usually quite baffled until we
quote a question from Aliaga and Gunderson (1999, p. 78): “When you form the strata, how should the variability of the units within each stratum compare to the variability between stratum?” After
remembering that the variability within each stratum should be small and the variability between strata should be large, many students see that by forcing one of the strata to have all of the finds, we are forming very homogeneous strata and giving stratified random sampling a better chance to produce consistent estimates of the total number of finds in repeated sampling.
One student wrote “I believe that stratified random sampling will produce less variable estimates than simple random sampling. I believe this will happen because all of the finds are so closely
placed that doing just simple random it is likely that very few will be hit, while in stratified you know that sixteen attempts will be on the side where all the finds are and they will be hit more
often and more uniformly from trial to trial.” Another student wrote “Stratified would be less variable because if you use simple random sampling it would be possible to choose more test-pits in the
right columns than in the left columns and find nothing.” Another wrote “A stratified random sample would be the better choice because there would be a high number of finds in stratum one but zero in
stratum two so the correct number of finds is more likely. In a simple random sample there is a higher chance of getting test-pits scattered about the entire site so there is more of a chance of not
getting enough finds or too many.”
Next, we ask each student to pick their own starting position on a random number table and select both a simple random sample and a stratified random sample of 20 test-pits from Site 2. Students
record their estimated total number of finds on sticky notes, and place the sticky notes in the appropriate positions on frequency plots on the white board. In Figure 3, we have included example
class results for the estimated total number of finds using each sampling technique.
[Figure 3. Class Results for Comparing Simple Random Sampling to Stratified Random Sampling: dotplots of the class estimated totals for each sampling technique, plotted on a common scale from 7.0 to 42.0.]
Once everyone has selected their samples and placed their results on the white board, we begin a discussion of the class results. For each sampling technique, students calculate descriptive
statistics for the class estimated total numbers of finds (mean, standard deviation, and quartiles) and construct comparative boxplots. Figure 4 shows descriptive statistics and comparative boxplots
for the example class results.
┃ Stratified Random Sampling │ Simple Random Sampling ┃
┃ mean = 21.15 │ mean = 20.50 ┃
┃ standard deviation = 5.67 │ standard deviation = 9.68 ┃
┃ first quartile = 15.63 │ first quartile = 15.00 ┃
┃ median = 20.31 │ median = 20.00 ┃
┃ third quartile = 25.00 │ third quartile = 25.00 ┃
Figure 4. Descriptive Statistics and Comparative Boxplots of Class Results for Comparing Simple Random Sampling to Stratified Random Sampling.
We then discuss how the numerical calculations and graphs of the class estimated totals support the fact that, for Site 2, repeated stratified random sampling is more likely to produce less variable
estimates than repeated simple random sampling. Note that for the example class results, the standard deviation of the stratified random sample estimated totals is 5.67 compared to 9.68 for the
simple random sample estimated totals. And, clearly, from the comparative boxplots, we see that the simple random sample estimates vary considerably more than the stratified estimates. Note that
another valuable aspect of collecting and analyzing the class data is that it enables the instructor to introduce the concept of unbiasedness. Students can see that the distributions of estimated
totals (for both sampling techniques) are centered at approximately 20 finds.
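A short calculation (our addition, using the standard hypergeometric mean) makes this unbiasedness plausible. Under simple random sampling, the number of finds X in the sample is hypergeometric with N = 100 test-pits, 20 of them containing finds, and n = 20 sampled, so

E(Ŷ) = 5 E(X) = 5 (20)(20)/100 = 20,

which equals the true total Y.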
Finally, we ask students to state whether they think that the class’ repeated stratified random sampling of test-pits from Site 2 produced less variable estimates of the total number of finds at the
site. Students must justify their answers by using the numerical descriptive statistics and the graphs produced from the class estimated totals.
For Comparison 2, we place an X in the appropriate test-pits on a blank grid in order to illustrate the layout of an archaeological site for which repeated (1-in-5) systematic random sampling of 20
test-pits (assuming that the systematic random sampling is performed by rows) would most likely produce a less variable estimate of the total number of artifact finds at the site than would repeated
simple random sampling of 20 test-pits. The layout for this comparison is shown in Figure 5 below.
[Figure 5. Site Layout for Comparing Simple Random Sampling to Systematic Random Sampling: a 10 × 10 grid in which the 20 finds fill the first two rows, so that every 1-in-5 systematic sample of 20 test-pits contains exactly four finds.]
We ask students to state whether they think that repeated systematic random sampling of test-pits from Site 3 will be likely to produce less variable estimates of the total number of finds. After
having discussed Comparison 1 and working with a concrete example of repeated sampling from Site 2, most students, when given the layout for Site 3, are able to correctly identify 1-in-5 systematic
random sampling of this site as producing the least variable estimated totals.
One student wrote “The 1-in-5 systematic sample is going to be more accurate than a simple random sample in this case because every time you do this you are going to get an estimated total of 20. The
first four digs are going to hit artifacts no matter where you start. You won’t be able to have as accurate of results with simple random sampling.” Another wrote “A 1-in-5 systematic sample because
of this particular layout whichever number you start with will always get you four finds. If you use a simple random sample then the number of finds will be less accurate. This is because each time
the sample is taken the results could be different.” Another wrote “With the one in five sample you will always get 20 estimated total no matter where you start. With repeated random sampling the
estimated total should eventually even out around 20. However, with systematic, you always get 20.”
For Comparison 3, we ask students to place X’s in the appropriate test-pits on a blank grid in order to illustrate the layout of an archaeological site for which repeated cluster random sampling of
20 test-pits (again assuming that the cluster sampling is performed by rows) would most likely produce a less variable estimate of the total number of finds than would repeated simple random sampling
of 20 test-pits. Here, we give a hint that challenges students to create a layout that will produce exactly four finds in every possible cluster sample of 20 test-pits (two rows). A correct answer
for this comparison is shown in Figure 6 below.
[Figure 6. Site Layout for Comparing Simple Random Sampling to Cluster Random Sampling: a 10 × 10 grid in which each of the ten rows contains exactly two finds, one in each half of the grid, so that every cluster sample of two rows contains exactly four finds.]
For Comparison 3, assessment of student answers is straightforward. Following the hint, students will place two finds in each row of the site and cluster random sampling will provide exact estimates
of the total number of finds. Simple random sampling will provide deviations from the true total number of finds.
3. A Case Study Between Stratified Random Sampling and Simple Random Sampling
In this section, we discuss an alternative approach to Comparison 1 for Part 2 of the introductory project.
Under the same layout for Site 2, students can be asked to compare the performance of repeated stratified random sampling to repeated simple random sampling (of 20 total test-pits) when ten test-pits
are sampled from stratum I (n[1] = 10) and ten test-pits are sampled from stratum II (n[2] = 10). It has been our experience that, in a typical class, with only 25 to 30 replications, it is difficult
to see a difference in the performance of these two sampling techniques for Site 2. However, to generate a larger-scale simulation, each student could be asked to perform each of the sampling
techniques four times so that the class as a whole provides roughly 100 simulated samples.
To explore the relationship between sample size from stratum I and the performance of stratified random sampling for Site 2, students could repeat their simulations, but for a different sample size
in stratum I. It seems logical that more information should be taken from stratum I than from stratum II, since there are more artifacts in stratum I. Figure 7 displays the relationship between
sample size from stratum I and the performance of stratified random sampling, for sample sizes from stratum I ranging from n[1] = 2 all the way to n[1] = 18.
Figure 7. Boxplots for Stratified Random Sampling, with Unequal Sample Sizes from Each Stratum, Versus Simple Random Sampling, Under Site 2.
The simulation results presented in Figure 7 show that the higher values of n[1] provide overall better estimates of the total number of finds at the site. Essentially this result demonstrates that
all of our sampling resources need to be placed in the stratum with all of the information. The first stratum has all of the artifacts, therefore, optimally, the sample size allocated to the first
stratum is n[1] = 20 and the sample size allocated to the second stratum is n[2] = 0. The reason this is optimal is due to the variation within each stratum. Consider placing a “1” in a test-pit with
an artifact and a “0” in a test-pit without an artifact. The standard deviation for stratum II is 0 and the standard deviation for stratum I is 0.4949 (for Site 2). For equal sized strata, the
optimum sample size allocated to each stratum is proportional to the standard deviation within that stratum. This is called “optimal allocation” or more generally “Neyman allocation.” For more on
this topic, for multiple strata and unequal sized strata, see Cochran (1977) or Lohr (1999).
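For reference (the standard result, not derived in the text above), with equal costs the Neyman allocation to stratum h is

n[h] = n N[h]S[h] / (N[1]S[1] + ... + N[L]S[L]),

where S[h] is the standard deviation within stratum h. For Site 2, S[1] = 0.4949 and S[2] = 0, so the formula allocates all n = 20 test-pits to stratum I, matching the simulation results above.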
4. The Simulation Course Project
For the simulation course project, we introduce the four sampling techniques and we assign Parts 1 and 2 of the introductory project as homework. For Part 2 of the introductory project, rather than
have students select samples in class, we include an additional sheet that contains simulation results for estimated totals obtained by drawing repeated stratified and simple random samples from Site
2 (see Appendix A.4). After collecting this homework assignment, we give a follow-up assignment that requires students to write simulation programs to compare the performance of all four sampling
techniques for each of Sites 1 through 4.
We ask students to base comparisons of the performance of the four sampling techniques on an examination of simulated values of Mean Squared Error (MSE) in estimating the total number of finds. In general, if Ŷ denotes an estimator of the true total Y, then MSE(Ŷ) = E[(Ŷ - Y)^2], the expected squared deviation of the estimated total from the true total.
Simple random sampling and stratified random sampling, for our problem, can be coded in MINITAB using the hypergeometric distribution. The commands to perform simple random sampling for any site
layout are:
MTB > Random 10000 c1;
SUBC> Hypergeometric 100 20 20.
MTB > Let c2=(c1/20)*100
The “SUBC” command samples from a population of 100 test-pits with 20 successes (finds), using a sample size of 20. The second column (c2) contains the estimated totals for the 10,000 replications.
To generate stratified random sampling under Site 2 (all of the artifacts in the first stratum), one generates two hypergeometric columns in MINITAB and combines them (via a weighted formula) to obtain the estimated total. For example, if we wish to sample n[1] = 14 and n[2] = 6, we use the following commands:
MTB > Random 10000 c3;
SUBC> Hypergeometric 50 20 14.
MTB > Random 10000 c4;
SUBC> Hypergeometric 50 0 6.
MTB > Let c5=((c3/14*.5+c4/6*.5))*100
It has been our experience that our simulation students (often engineers, computer scientists and statisticians) prefer to program in C++, SAS, or MATLAB. Thus programming results can vary depending on the type of language used. The second author has created code to perform all of the sampling techniques in this paper using MATLAB. This MATLAB code can be obtained by contacting the second author.
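For readers working in Python rather than MINITAB or MATLAB, the following sketch (our illustration, not the authors' code) runs the same hypergeometric simulation and computes the simulated MSE about the true total of 20 finds, for simple random sampling and for the n[1] = 14, n[2] = 6 stratified design under Site 2.

import numpy as np

rng = np.random.default_rng(0)
reps, Y = 10000, 20

# Simple random sampling: 20 pits from 100, of which 20 contain finds.
finds = rng.hypergeometric(ngood=20, nbad=80, nsample=20, size=reps)
est_srs = finds / 20 * 100  # expansion estimator: 5 times the sample finds

# Stratified sampling under Site 2: all 20 finds lie in stratum I (50 pits).
f1 = rng.hypergeometric(ngood=20, nbad=30, nsample=14, size=reps)
f2 = np.zeros(reps)  # stratum II contains no finds, so its count is always 0
est_str = (f1 / 14 * 0.5 + f2 / 6 * 0.5) * 100

for name, est in (("simple random", est_srs), ("stratified", est_str)):
    print(name, "simulated MSE:", round(float(np.mean((est - Y) ** 2)), 1))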
Table 1 below gives example results for simulations of 10,000 samples of size 20 drawn using each of the four sampling techniques for each of Sites 1 through 4. The simulated MSE was calculated from the 10,000 values by taking the estimated totals and averaging their squared deviations from the true total of 20 finds.
Table 1. MSE’s for 10,000 samples drawn for each of the four sampling techniques for each of Sites 1 through 4.
┃ Type of Sample │ Site 1 │ Site 2 │ Site 3 │ Site 4 ┃
┃ Simple Random │ 65.7 │ 65.0 │ 65.0 │ 65.8 ┃
┃ Stratified Random (n[1] = 10, n[2] = 10) │ 64.8 │ 49.0 │ 64.9 │ 67.3 ┃
┃ Cluster Random │ 89.7 │ 71.5 │ 709.2 │ 0.0 ┃
┃ Systematic Random │ 109.0 │ 281.4 │ 0.0 │ 1639.6 ┃
The simulated MSE’s provide very good approximations to the theoretical MSE’s. The theoretical MSE is calculated by using probability arguments that take into account the structure of the site and
the way that sampling was performed. The theoretical MSE is given by:
Note that the concept of Mean Squared Error is easily illustrated from the systematic random sampling approach. By stacking the first five columns on the last five columns, the number of finds in
each column times five would be the estimated total, where each total has a probability of selection of 1/5. For example, the five estimated totals for Site 1 are 10, 5, 30, 25 and 30. Thus, the
theoretical MSE is:
= ((10 - 20)^2 + (5 - 20)^2 + (30 - 20)^2 + (25 - 20)^2 + (30 - 20)^2)/5 = 110.0,
corresponding almost exactly to the simulated value given in Table 1. Similarly, for systematic random sampling, the remaining three sites have theoretical MSE values of 280.0, 0.0, and 1600.0.
Extending this idea to derive the theoretical MSE values for the other sampling designs can provide the foundation for a discussion of general variance formulas presented in a typical upper-division
sampling course.
5. The Sampling Course Project
It has been our experience that variance formulas tend to be overwhelming in an upper-level undergraduate sampling course. The classic text where such formulas are presented is Cochran (1977). A sampling course is a perfect setting for demonstrating the exercises from Section 2. Class time is valuable in any course; therefore, we recommend that the instructor assign Parts 1 and 2 of the
introductory project as homework. For Part 2 of the introductory project, rather than have students select samples in class, the instructor could include an additional sheet that contains simulation
results for estimated totals obtained by drawing repeated stratified and simple random samples from Site 2 (see Appendix A.4). The simulation exercises from Sections 3 and 4 could be discussed
heuristically. During the introductory stages of a sampling course this would be ideal. The project would emphasize general definitions, finite population correction, unbiased estimators, and provide
a foundation for a discussion of MSE.
Typically in a sampling course, all four major sampling designs are covered and a connection among the sampling designs provides an understanding of the “big picture.” By revisiting the project later
in the semester, the use of variance formulas could be compared to the simulation results.
MSE formulas found in Cochran (1977) or Lohr (1999) are summarized in Table 2. Traditionally, variance formulas are presented differently for 0-1 populations relative to a continuous response. The
formulas described here use the approach reserved for continuous measures, however the continuous response, y[i], is generalized to a 0-1 population (with a “1” placed in a test-pit containing an
artifact, and a “0” placed in the remaining test-pits). The theoretical calculations for each of the sites are presented adjacent to the closed form formulas. The theoretical calculations in Table 2
can be compared to the simulated values given in Table 1.
Table 2. Theoretical MSE’s for each of the four sampling techniques for each of Sites 1 through 4. The variance formulas follow the notation used in Cochran (1977) and are detailed further in
Appendix B.
┃ Type of Sample │ Theoretical Variance │ Site 1 │ Site 2 │ Site 3 │ Site 4 ┃
┃ Simple Random │ N^2 (1 - n/N) S^2 / n │ 64.6 │ 64.6 │ 64.6 │ 64.6 ┃
┃ Stratified Random (n[1] = 10, n[2] = 10) │ Σ[h] N[h]^2 (1 - n[h]/N[h]) S[h]^2 / n[h] │ 65.3 │ 49.0 │ 65.3 │ 65.3 ┃
┃ Cluster Random │ N[M]^2 (1 - n[M]/N[M]) S[t]^2 / n[M] │ 88.9 │ 71.1 │ 711.1 │ 0.0 ┃
┃ Systematic Random │ cluster formula with N[M] = k, n[M] = 1 │ 110.0 │ 280.0 │ 0.0 │ 1600.0 ┃
Here S^2 denotes the finite-population variance of the y[i] values, S[h]^2 the corresponding variance within stratum h, and S[t]^2 the variance of the cluster totals (the numbers of finds per cluster).
Note that the sampling units have changed since we are sampling clusters as experimental units. The systematic approach can be viewed as a clustering approach before the formula is applied (i.e.
stack the first five columns on the last five columns making the columns clusters and then select one column at random).
Recall that the simulated systematic random sampling MSE for Site 1 was reported as 109.0 and the true systematic random sampling MSE for Site 1 is 110.0, which is easily derived since there are only five possible responses under the systematic random sampling approach. It is much more difficult to use this brute-force MSE calculation under simple random sampling, since there are approximately 5.4 × 10^20 possible samples of 20 test-pits drawn from 100.
In addition, general Neyman allocation principles can be illustrated if one assumes knowledge of the number of test-pits containing artifacts in each stratum and allows the cost to dig in the first stratum to differ from the cost to dig in the second. The instructor can play "what if" games and the students can use intuition to arrive at closed-form Neyman allocation formulas. At the very least, students will be "ready" to see this beautiful theory with an example that they can relate to. For a discussion of Neyman allocation, we refer the reader to Cochran (1977) or Lohr (1999).
6. Conclusions
This project has a wide range of possible uses and extensions. It can be used in upper division undergraduate courses as well as in an introductory undergraduate course or an AP course.
We use the project to give introductory statistics students an opportunity to practice the mechanics of performing different statistical sampling techniques and then build on a mechanical knowledge
of sampling to gain an understanding of which sampling procedure should be applied in populations with differing characteristics. In a statistical simulation course, we use the project to provide a
challenging simulation problem as well as a means for an introduction to the concepts of statistical sampling and the evaluation of estimators of population parameters. Finally, in a statistical
sampling course, we propose to use the project as a numerical introduction to formulas for calculating theoretical variances.
Appendix A
The worksheets are stored as Adobe PDF documents. Click on the title of the worksheet to view it.
Appendix B: Notation and Specific Formulas
y[i] = 0 if no artifact find, y[i] = 1 if artifact find
n = number of test-pits sampled
N = total number of test-pits
L = number of strata
n[h] = number of test-pits sampled from stratum h
N[h] = total number of test-pits in stratum h
M = number of test-pits in each cluster
N[M] = number of clusters in the population = N / M
n[M] = number of clusters in the sample = n / M
k = the selection number for a 1-in-k systematic sample
Note: The variance formula for systematic sampling is calculated by converting the problem to cluster sampling of size 1. The matrix of clusters is found by taking the transpose of the stack of the first five columns on the last five columns (Cochran 1977, p. 277). The cluster formula is then applied, with the number of test-pits in each cluster, M = N / k.
The authors gratefully acknowledge the helpful comments and suggestions of the editor and the reviewers during the preparation of this manuscript.
Aliaga, M., and Gunderson, B. (1999), Interactive Statistics, New Jersey: Prentice Hall.
Binford, L. R. (1964), “A Consideration of Archaeological Research Design,” American Antiquity 29(4): 425-41.
Cobb, G. (1992), “Teaching Statistics,” in Heeding the Call for Change: Suggestions for Curricular Action, ed. L. Steen, MAA Notes, 22, 3-43.
Cochran, W. G. (1977), Sampling Techniques, New York: John Wiley & Sons.
Lizee, J., and Plunkett, T. (1994), “Archaeological Sampling Strategies,” archnet.asu.edu.
Lohr, S. L. (1999), Sampling: Design and Analysis, New York: Duxbury Press.
Orton, C. (2000), Sampling in Archaeology, Cambridge: Cambridge University Press.
Redman, C. L. (1987), “Surface Collection, Sampling, and Research Design: A Retrospective,” American Antiquity 52(2): 249-65.
Scheaffer, R. (1996), Overview for Activity-Based Statistics: Instructor Resources, New York: Key Curriculum Press; Springer.
Mary Richardson
Department of Statistics
2309 Mackinac Hall
Grand Valley State University
Allendale, MI 49401
Byron Gajewski
The University of Kansas Medical Center
SON 3024
3901 Rainbow Boulevard
Kansas City, KS 66160
{"url":"http://www.amstat.org/publications/jse/v11n1/richardson.html","timestamp":"2014-04-17T04:04:31Z","content_type":null,"content_length":"159419","record_id":"<urn:uuid:ecd3333b-91ea-4590-bf4e-2a6883b2a839>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
Centreville, VA Precalculus Tutor
Find a Centreville, VA Precalculus Tutor
...Feel free to email me. Depending on the course and level, my tutoring rates would vary. I have taught & tutored algebra courses at university for 10 years. I have taught and tutored all levels
of calculus courses to students at a university for 6 years.
13 Subjects: including precalculus, calculus, geometry, algebra 1
...After all, we do it every day. But the SAT is not an everyday experience. Specific approaches and strategies can achieve better results, and what works best for a short reading selection may
not be the best approach for a long reading selection.
17 Subjects: including precalculus, chemistry, calculus, geometry
...I am an energetic and happy teacher and I really enjoy finding several different ways to explain something new until it clicks for the student; I am very patient in this sense. Also,
translating dry math concepts into engaging analogies is a characterizing part of my teaching style. Writing and...
13 Subjects: including precalculus, writing, calculus, algebra 1
...I am currently working at the United States Patent and Trademark Office, examining patent applications in a computer-related area. My graduate work from George Mason University in computer science
and some computer engineering courses landed me a job there. In contrast to my current work, my undergr...
15 Subjects: including precalculus, chemistry, calculus, physics
...I've been told multiple times by my students that "I wish my teacher had just said that in the first place." I firmly believe that most math teachers are simply "bad" and that they don't
remember what it's like not to know a subject. I've worked at think tanks and have extensive experience with...
24 Subjects: including precalculus, writing, physics, GRE
{"url":"http://www.purplemath.com/Centreville_VA_precalculus_tutors.php","timestamp":"2014-04-19T02:10:23Z","content_type":null,"content_length":"24281","record_id":"<urn:uuid:78bf6ca4-71d7-4eca-9ea1-d4e7cdb88087>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
Blawenburg Geometry Tutor
...I try my best to be flexible and available the days that work best for you. I am easily accessible outside of our sessions with questions and will make myself available for your success. Feel
free to reach out at any time.
14 Subjects: including geometry, calculus, ASVAB, algebra 1
...I have experience with tutoring, as I tutored college athletes and other college students for four years. I have also taught various other students of all ages and levels (K-12), for
standardized test preparation in particular. I teach with the goal of building confidence.
12 Subjects: including geometry, calculus, algebra 1, algebra 2
...I tutored two students in Honors Geometry. Their understanding of the subject increased and their grades improved. I am tutoring another student in Geometry.
9 Subjects: including geometry, calculus, algebra 1, algebra 2
...I have worked with college, high school, and grade school students. I have a genuine love for educating others, and in increasing my own knowledge. The moments when students understand concepts
provide the greatest job satisfaction I can imagine.
17 Subjects: including geometry, chemistry, reading, calculus
...My tutoring began in college with the subjects of inorganic and organic chemistry. It continued with my four children. The process I use in teaching science and mathematics is to break down the
subject matter into small blocks.
9 Subjects: including geometry, chemistry, physics, biology | {"url":"http://www.purplemath.com/blawenburg_nj_geometry_tutors.php","timestamp":"2014-04-16T16:06:12Z","content_type":null,"content_length":"23785","record_id":"<urn:uuid:dc398ad8-dbcc-49e2-9ec3-8fb706f52a75>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
{"url":"http://openstudy.com/users/muratcankalem/answered","timestamp":"2014-04-17T16:33:09Z","content_type":null,"content_length":"100632","record_id":"<urn:uuid:ff6da6bc-0bbf-4d60-9e5c-080bff86799e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
dark energy
Originally posted by marcus
There are two hard-to-understand things that must be grasped.
(1) why does a constant energy density cause negative pressure?
(2) why does negative pressure promote expansion?
These two things are not merely current theory but have been known by relativists (GR specialists) for almost 90 years. They are not "intuitive" but they are basic and worth trying to understand.
You can find explanations on the web (try gurus like John Baez, Ned Wright, Charles Lineweaver, as well as the people mathman mentioned). But I can try to save you some trouble by giving the standard
explanations here.
1. The measured dark energy density is half a joule per cubic kilometer, if I remember correctly. Why does this cause a negative pressure?
For easy numbers, suppose the density was 1 joule per cubic meter (much higher!).
Imagine a cylinder with sliding piston, that contains 1 cubic meter of space. For simplicity suppose the cylinder cross-section is one square meter. Take this device outside the universe for study.
Carefully pull the piston out one meter, until the volume in the cylinder is 2 cubic meters. There are now 2 joules of energy inside. You must have done one joule of work in pulling the piston
outwards (or else what made the extra joule?). Therefore you exerted one newton of force for a distance of one meter.
The pressure in the cylinder is minus one pascal (-1 newton per square meter).
The calculation is easy and in fact shows that if the Lambda energy density is X joules per cubic meter then the pressure
it exerts is minus the same number of pascals: -X newtons per square meter.
This proportionality (called w, or the "equation of state") is -1 in this simplest case but it is something theorists play around with to see how the model depends on it.
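To put the piston argument in one line (with rho standing for the constant energy density): energy conservation for the piston says dE = -p dV, while constant density says E = rho V, so

rho dV = -p dV, and therefore p = -rho.

That is the equation of state w = p/rho = -1 described above.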
The other question is
(2) Why does negative pressure promote expansion?
This is a consequence of the Friedmann equation---the basic equation used in cosmology. It has a pressure term which for roughly 80 years was largely ignored because stars and galaxies and dust do
not exert significant pressure.
Pressure couples to gravity, somewhat as energy does. So positive pressure (like a concentration of mass-energy) can cause collapse---it exerts gravitational attraction just as mass does.
However negative pressure does just the opposite!
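For reference, the standard form of the relevant (second) Friedmann equation, quoted here from textbook general relativity rather than from this thread, is

a''/a = -(4 pi G / 3c^2)(rho + 3p),

where a is the scale factor and rho the energy density. With p = -rho the source term becomes rho + 3p = -2 rho, which is negative, so a'' > 0: negative pressure accelerates the expansion.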
This is explicitly spelled out in the Friedmann equations, which I will save for another post. | {"url":"http://www.physicsforums.com/showpost.php?p=71324&postcount=4","timestamp":"2014-04-19T22:43:10Z","content_type":null,"content_length":"10281","record_id":"<urn:uuid:8306918d-04c2-4489-8856-0911f59df4ec>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00209-ip-10-147-4-33.ec2.internal.warc.gz"} |
Contributed Poster Index
(all posters are to be max. 36" wide by 42" high)
Posters displays

Tuesday August 13
• Koji Azuma
Classical analog of "quantum nonlocality without entanglement"
• James Bateman
Improving the efficiency of N-photon super-resolution using an optical centroid measurement
• Salil Bedkihal
Flux dependent effects in degenerate and symmetric quantum double-dot Aharonov-Bohm interferometer
• Aharon Brodutch
Restricted, distributed quantum gates
• Zhu Cao
Efficient Synchronization Scheme for Quantum Communication
• Aurelia Chenu
Sunlight can not be viewed as a series of random ultra-fast pulses
• Greg Dmochowski
Increasing The Giant Kerr Effect By Narrowing The EIT Window Beyond The Signal Bandwidth
• Amir Feizpour
Weak-value Amplification of Low-light-level Cross Phase Modulation
• Kent Fisher
Quantum computing on encrypted data
• Roohollah (Farid) Ghobadi
Creating and detecting micro-macro photon-number entanglement
• Gilad Gour
Universal Uncertainty Relations
• Horacio Grinberg
Nonclassical effects in highly nonlinear two-level spin models
• Timur Grinev
Coherent control and incoherent excitation dynamics of pyrazine
• Andres Estrada Guerra
Non-Markovian effects in the dynamics of entanglement in high temperature limit
• Matin Hallaji
Quantum control of population transfer between vibrational states in an optical lattice
• Wolfram Helwig
Absolutely Maximal Entanglement and Quantum Secret Sharing
• Nathaniel Johnston
On the Minimum Size of Unextendible Product Bases
• Dongpeng Kang
Bragg reflection waveguides: The platform for monolithic quantum optics in semiconductors
• Eric Kopp
New Control Frontiers in Noiseless Subspace
• Hoi Kwan Lau
Rapid laser-free ion cooling by controlled collision

Wednesday August 14
• Juan David Botero
On the Dynamics of Spin-1/2 particles: A Phase-space Path-integral Approach
• Rolf Horn
On chip generation of polarization entanglement in a monolithic semiconductor waveguide
• Hoi Kwan Lau
Quantum secret sharing with continuous variable cluster states
• Xiongfeng Ma
Experimental realization of measurement-device-independent quantum key distribution
• Dylan Mahler
Adaptive quantum state tomography improves accuracy quadratically
• Sebastian Duque Mesa
Relativistic Dynamical Quantum Non-locality
• Leonardo A Pachon
Coherent Phase Control in Closed and Open Quantum Systems
• Alexandru Paler
Resource Optimization in Topological Quantum Computation: Verification.
• Kyungdeock Park (Daniel)
Heat Bath Algorithmic Cooling and Multiple Rounds Quantum Error Correction Using Nuclear and Electron Spins
• Nicolas Quesada
Self-calibrating tomography for non-unitary processes
• Christoph Reinhardt
Design of a Strong Optomechanical Trap
• Katja Ried
Quantum process tomography with initial correlations
• Lee Rozema
Experimental Demonstration of Quantum Data Compression
• Lena Simine
Numerical simulations of molecular conducting junction: transport and stability
• Cathal Smyth and Dr. Gregory D. Scholes
A Method of Developing Analytical Multipartite Measures for mixed W-like States
• Xin Song
Enhanced probing of fermionic interaction using weak-value amplification
• Zhiyuan Tang
Experimental demonstration of polarization encoding measurement-device-independent quantum key distribution
• Johan F. Triana
The Quantum Limit at Thermal Equilibrium
• Timur Tscherbul
Quantum coherent dynamics of Rydberg atoms driven by cold blackbody radiation
• Tian Wang
Demonstrating macroscopic entanglement based on Kerr non-linearities requires extreme phase
• X. Xing
Multidimensional quantum information based on temporal photon modulation
• Feihu Xu
Measurement Device Independent Quantum Key Distribution in a Practical Setting
• Zhen Zhang
Decoy-state quantum key distribution with biased basis choice
• Eric Zhu
Broadband Polarization Entanglement Generation in a Poled Fiber

WITHDRAWN

Tuesday August 13
• Clement Ampadu
Decoherence Matrix of the Gudder-Sorkin Type for Quantum Walks on Z^2 and the Konno-Segawa Conjecture
• Agata M Branczyk
Optimised shaping of optical nonlinearities in poled crystals
• Sinan Bugu
Fusing Several Polarization Based Entangled Photonic W States
• Peter Cameron
Quantum Impedances, Entanglement, State Reduction, and the Black Hole Information Paradox

Wednesday August 14
• Ray-Kuang Lee
Spin-flip for a Parity-Time symmetric Hamiltonian on the Bloch sphere
• Andreia Mendonça Saguia
One-norm geometric quantum discord under decoherence
• Angelo Lucia
Stability of local quantum dissipative systems
• Vaibhav Madhok
Information gain in tomography - A quantum signature of chaos
• Shengjun Wu
State and process tomography via weak measurements
Koji Azuma NTT Basic Research Laboratories
Classical analog of "quantum nonlocality without entanglement"
Coauthors: Masato Koashi, Shinya Nakamura, and Nobuyuki Imoto
Quantum separable operations are defined as those that cannot produce entanglement from separable states from scratch, and it is known that they strictly surpass local operations and classical
communication (LOCC) in a number of tasks, which is sometimes referred to as "quantum nonlocality without entanglement." Here we consider a task with such a gap regarding the trade-off between
state discrimination and preservation of entanglement. We show that this task has a complete classical analog, in which distant parties attempt to preserve secrecy of given bits as much as
possible while they also try to discriminate whether their bits are the same or not. This purely classical scenario is shown to exhibit the same gap as seen in the quantum case. This
fact suggests that the public communication (corresponding to LOCC in the quantum case) is less powerful than "classical" separable operations that cannot produce secret key from scratch. As a
result, contrary to a common belief that may be inferred from previous known examples in quantum information theory, quantum properties, such as nonorthogonality, measurement backaction, and
entanglement, are not essential to the existence of a nonzero gap between separable operations and LOCC.
This presentation is based on the paper in arXiv:1303.1269.
James Bateman, University of Toronto
Improving the efficiency of N-photon super-resolution using an optical centroid measurement
Coauthors: Lee A. Rozema, Amir Feizpour, Dylan H. Mahler, Aephraim M. Steinberg
Precise measurements using light and the precise manipulation of light are essential to many modern technologies. The resolution of the smallest possible features is required in diverse
applications ranging from technical fields such as lithography to basic sciences and biomedical imaging. However, these measurements face fundamental limits. For instance, the resolution of
spatial features is limited by the diffraction of light. There has been much work towards surpassing these limits using novel quantum states. The so-called N00N state is known to exhibit
super-resolution, displaying an N-photon interference pattern which is N times narrower than that of classical light.
However, observing such a spatial interference pattern is very inefficient. The probability of all N photons arriving at the same point in space decreases exponentially with N. Here, we
experimentally overcome this hurdle by utilizing an optical centroid measurement. By using an array of 11 single-photon detectors and measuring N-photon correlations among all 11 detectors, we
observe the spatial N-fold super-resolution without the exponential loss. We will present experimental results for N=2 and progress towards N=3.
Salil Bedkihal University of Toronto, Chemical Physics Theory Group, Department of Chemistry
Flux dependent effects in degenerate and symmetric quantum double-dot Aharonov-Bohm interferometer
Coauthors: Malay Bandyopadhyay, Department of Physics, Indian Institute of Technology, Bhubaneshwar India, Dvira Segal, University of Toronto, Chemical Physics Theory Group, Department of Chemistry
We study the steady-state characteristics and the transient behaviour of the nonequilibrium double-dot Aharonov-Bohm interferometer using analytical tools and numerically exact influence functional path integrals. Our simple setup includes noninteracting degenerate quantum dots that are coupled to two biased metallic leads at the same strength. A magnetic flux pierces the
interferometer perpendicularly. As we tune the degenerate dot energies away from the symmetric point we observe four non-trivial magnetic flux control effects: (i) flux dependency of the
occupation of the dots, (ii) a magnetic flux induced occupation difference between the dots at degeneracy, (iii) the effect of phase-localization of the dots' coherence holds only at the symmetric point, while in general both real and imaginary parts of the coherence are non-zero, and (iv) coherent evolution survives even when the dephasing strength, introduced via Büttiker probes, is large and comparable to the dots' energies and the bias voltage. In fact, finite elastic dephasing can actually introduce new types of coherent oscillations into the system's dynamics. These four phenomena take place when the dots' energies are gated away from the symmetric point, demonstrating that the combination of bias voltage, magnetic flux, and gating field can
provide delicate control over the occupation of each of the quantum dots, and their coherence.
Juan David Botero Instituto de Física, Universidad de Antioquia, Medellín, Colombia
On the Dynamics of Spin-1/2 particles: A Phase-space Path-integral Approach
Coauthors: Leonardo A. Pachón
The two-level quantum system is the most fundamental element in quantum-information-processing theory (QIPT) and one of its more natural physical implementations comprises a spin-1/2-system.
Entangling these systems and subsequently manipulating them, based on the non-local character of quantum correlations, are among the most fundamental protocols in QIPT. The non-locality that is
exploited in those protocols is a non-locality between quantum systems; however, in order to get a complete picture of the quantum correlations, one has to analyze the influence of the non-local
character of the quantum dynamics itself (dynamical non-locality).
We use the proposal given by Bjork et al. [1] to construct the Wigner function in a discrete phase space; then, with the aim of analyzing the dynamics of spin-1/2 particles, we develop a
formula for the discrete Wigner propagator and calculate it by means of a direct method based on the path integral formalism for discrete systems [2]. With the explicit form of the Wigner propagator in hand, we can see explicitly the non-local behavior of the quantum dynamics of discrete systems.
[1] G. Björk, A. Klimov, L. Sánchez-Soto. Progress in Optics, 51, 496 (2008)
[2] L.S. Schulman. Techniques and applications of path integration. Wiley-interscience publication. 1Ed (1996)
Aharon Brodutch IQC, University of Waterloo
Restricted, distributed quantum gates
The role of entanglement in quantum algorithms is somewhat challenged by the existence of mixed state algorithms that generate very little entanglement [1]. Moreover, it is not clear whether any other property of quantum states can account for the source behind the quantum advantage [2]. A different candidate for this source is quantum gates or, more generally, quantum operations. In this case entanglement can be brought into the picture by considering distributed implementations. To minimize resources it is useful to take into account only the relevant set of input states and simplify the gate's distributed implementation [3,4]. Using this approach we can identify the need for entanglement resources as a function of the input/output sets. This lets us make
meaningful statements about the 'quantumness' of various mixed state algorithms.
[1] Laflamme, R., D. G. Cory, C. Negrevergne, and L. Viola, 2002, Quantum Inf. Comput. 2, 166
[2] Vedral, V., 2010, Found. Phys. 40, 1141.
[3] Brodutch, A., and D. R. Terno, 2011, Phys. Rev. A 83, 010301.
[4] Brodutch, A., arXiv:1207.5105
Zhu Cao Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084 China
Efficient Synchronization Scheme for Quantum Communication
Coauthors: Hai-Lin Yong, Yan-Lin Tang, Wei-Yue Liu, Dong-Dong Li, Cheng-Zhi Peng
Quantum key distribution (QKD) is the first practical application in quantum information science. Time synchronization technology plays an important role in QKD implementations. In this
field, the synchronization precision is required to be in the sub-nanosecond regime, while the accuracy of the current GPS system is in the order of a few nanoseconds. To fill this gap, we
propose an effective synchronization algorithm, where we calibrate the frequency difference and the offset between two remote clocks using internal correlation between quantum signals. More
specifically, the frequency difference is derived from the ratio between the transmitter's and receiver's internal clock time lengths over a large number of GPS signals. The offset is then determined by fitting for the value that maximizes the raw key rate. With the frequency difference and the offset calibrated, we achieve sub-nanosecond-precision synchronization.
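(As a toy illustration of these two calibration steps, here is a Python sketch with hypothetical variable names and a stand-in raw_key_rate function; the abstract does not specify the actual implementation.)

import numpy as np

def calibrate(tx_ticks, rx_ticks, trial_offsets, raw_key_rate):
    # Step 1: frequency difference from the ratio of the two clocks'
    # elapsed internal times over many GPS-tagged signals.
    freq_ratio = (rx_ticks[-1] - rx_ticks[0]) / (tx_ticks[-1] - tx_ticks[0])
    rescaled_tx = np.asarray(tx_ticks) * freq_ratio
    # Step 2: offset chosen as the trial value that maximizes the raw key rate.
    rates = [raw_key_rate(rescaled_tx + d, np.asarray(rx_ticks)) for d in trial_offsets]
    return freq_ratio, trial_offsets[int(np.argmax(rates))]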
With our synchronization algorithm, we complete two QKD field tests. In our free-space 32 km decoy-state QKD field test, we manage to substantially improve the synchronization precision
compared to the conventional hardware-based synchronization scheme. As a result, we are able to decrease the error rate and increase the raw key rate. Results from another test, where the
transmitter is set in a moving vehicle, show that our synchronization scheme is robust under harsh conditions. These two tests demonstrate that our scheme can be useful for future global
high-speed QKD with a LEO satellite. Finally we remark that our scheme may also be valuable for other quantum communication applications, such as teleportation.
Aurelia Chenu Department of Chemistry and Center for Quantum Information and Quantum Control, 80 Saint George Street, University of Toronto, Toronto, Ontario, M5S 3H6 Canada
Sunlight cannot be viewed as a series of random ultra-fast pulses
Coauthors: Agata M. Branczyk1, Greg D. Scholes1, and John Sipe2 1 Department of Chemistry and Center for Quantum Information and Quantum Control, 80 Saint George Street,University of Toronto,
Toronto, Ontario, M5S 3H6 Canada 2 Department of Physics, 60 Saint George Street, University of Toronto, Toronto, Ontario, M5R 3C3 Canada
Dynamics of energy transfer in photosynthetic complexes occurs on a femtosecond (fs) time scale. It can be resolved with ultrafast non-linear spectroscopy [1], where coherent fs pulses are
used to excite the molecular systems. This is in contrast with natural excitation conditions, given by the almost continuous and fully incoherent light from the sun. In an attempt to make
connections between experimental results obtained using 2D electronic spectroscopy and the biological processes occurring in photosynthetic organisms under natural conditions, it has been
suggested that sunlight can be viewed as a series of random ultra-fast pulses, with a duration as short as the bandwidth allows [2].
To investigate this proposal, we construct a quantum state of light composed of an incoherent mixture of multi-mode coherent states. In attempting to fit the properties of thermal light, we
show that the radiation spectrum and the photon statistics can be well represented by a mixture of pulses, as long as their spectral bandwidth is narrow enough (>ps pulses). However, no
physical solution can be found for fs pulses, for which the bandwidth is comparable to that of thermal light. Turning to the second-order correlation function and the simultaneous excitation of
two atoms, we argue that any mixture of pulses is expected to fail to represent excitation by thermal light in general.
[1] E. Collini et al., Nature, 463, 644 (2010).
[2] Y.-C. Cheng & G.R. Fleming, Annu. Rev. Phys. Chem., 60, 241 (2009).
Greg Dmochowski University of Toronto, Department of Physics
Increasing The Giant Kerr Effect By Narrowing The EIT Window Beyond The Signal Bandwidth
Coauthors: Amir Feizpour, Matin Hallaji, Chao Zhuang, Alex Hayat, Aephraim Steinberg
We experimentally show that EIT-based Kerr nonlinearities continue to benefit from narrowing the EIT window even as the signal bandwidth comes to exceed this transparency width. While previous
studies have shown that narrow transparency windows yield slow step-response times, thereby suggesting a limitation of EIT-enhanced nonlinearities, our results show that many practical applications
of such nonlinearities, which rely on pulsed fields, are not hindered by these effects. In fact, these slow dynamics are at the root of the enhancement offered by EIT in the regime of most interest,
namely, narrow EIT windows combined with high intensity, broadband signal pulses. For applications such as quantum non-demolition measurements and nonlinear optical gates where the goal is simply to
detect an observable single-shot phase shift, we see that EIT can be used to increase the signal size even for broadband signal pulses.
Amir Feizpour University of Toronto
Weak-value Amplification of Low-light-level Cross Phase Modulation
Coauthors: Greg Dmochowski, Matin Hallaji, Chao Zhuang, Alex Hayat, Aephraim M. Steinberg
We report on our experimental progress towards observing weak-value amplification of low-light-level cross-phase modulation. This will be the first observation of a weak measurement relying on true
entanglement between distinct systems, with no classical interpretation, unlike previous weak-measurement experiments. In this scheme, classical pulses at the single-photon level are sent into an
interferometer, one arm of which interacts with a probe pulse through a cross-Kerr effect. Post-selecting on having m photons in the dark port of the interferometer results in an amplified
m-photon cross-phase shift.
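For readers unfamiliar with weak values, a minimal single-qubit toy calculation (ours, unrelated to the experimental parameters above) shows where the amplification comes from: with nearly orthogonal pre- and post-selection, the weak value can lie far outside the eigenvalue range of the observable.
    import numpy as np

    # Weak value <A>_w = <f|A|i> / <f|i> for A = sigma_z on a qubit.
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    i_state = np.array([1, 1], dtype=complex) / np.sqrt(2)
    eps = 0.01  # near-orthogonal post-selection
    f_state = np.array([1, -(1 - eps)], dtype=complex)
    f_state /= np.linalg.norm(f_state)

    weak_value = (f_state.conj() @ sz @ i_state) / (f_state.conj() @ i_state)
    print(weak_value)  # (2 - eps)/eps = 199: far outside [-1, 1]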
Kent Fisher Institute for Quantum Computing, University of Waterloo
Quantum computing on encrypted data
Coauthors: Anne Broadbent, L. Krister Shalm, Zhizhong Yan, Jonathan Lavoie, Robert Prevedel, Thomas Jennewein, Kevin Resch
Performing computations on encrypted data is of strong significance for protecting privacy over public networks. Such capabilities would allow a client with a weak computer to send sensitive data to
a more powerful, but untrusted, server to be processed. Recent works, called fully homomorphic encryption schemes, have produced a long-sought solution to the problem of carrying out classical
computations on encrypted data. Here we present an efficient solution to the quantum analogue of this problem, allowing arbitrary quantum computations to be carried out on encrypted quantum data. We
prove that an untrusted server can carry out a universal set of quantum gates on encrypted qubits without learning any information about the inputs while the client, who knows the decryption key, can
easily obtain the computed results. We experimentally demonstrate, using single photons and linear optics, the encryption and decryption scheme for each quantum gate in a set sufficient for arbitrary
quantum computations. Our protocol can be easily incorporated into the design of quantum servers with few extra resources. This result paves the way for delegated quantum computing to take place,
ensuring the privacy and security of future quantum networks.
Roohollah (Farid) Ghobadi Institute for Quantum Science and Technology, University of Calgary
Creating and detecting micro-macro photon-number entanglement
Coauthors: Alexander Lvovsky and Christoph Simon
We propose a scheme for the observation of micro-macro entanglement in photon number based on amplifying and de-amplifying a single-photon entangled state in combination with homodyne quantum state
tomography. The created micro-macro entangled state, which exists between the amplification and de-amplification steps, is a superposition of two components with mean photon numbers that differ by
approximately a factor of three. We show that for reasonable values of photon loss it should be possible to detect micro-macro photon-number entanglement where the macro system has a mean number of
one hundred photons or more.
Gilad Gour Department of Mathematics and Statistics, IQST, University of Calgary
Universal Uncertainty Relations
Coauthors: Shmuel Friedland, Vlad Gheorghiu
Uncertainty relations are a distinctive characteristic of quantum theory that imposes intrinsic limitations on the precision with which physical properties can be simultaneously determined. The
modern work on uncertainty relations employs entropic measures to quantify the lack of knowledge associated with measuring non-commuting observables. However, I will show here that there is no
fundamental reason for using entropies as quantifiers; in fact, any functional relation that characterizes the uncertainty of the measurement outcomes can be used to define an uncertainty relation.
Starting from a simple assumption that any measure of uncertainty is non-decreasing under mere relabeling of the measurement outcomes, I will show that Schur-concave functions are the most general
uncertainty quantifiers. I will then introduce a novel fine-grained uncertainty relation written in terms of a majorization relation, which generates an infinite family of distinct scalar uncertainty
relations via the application of arbitrary measures of uncertainty. This infinite family of uncertainty relations includes all the known entropic uncertainty relations, but is not limited to them. In
this sense, the relation is universally valid and captures the essence of the uncertainty principle in quantum theory.
Horacio Grinberg Department of Physics, FCEyN, University of Buenos Aires, and IFIBA, Argentina
Nonclassical effects in highly nonlinear two-level spin models
The nonclassical squeezing effect emerging from a nonlinear coupling model (a generalized Jaynes-Cummings model) of a two-level atom interacting with a bimodal cavity field via two-photon
transitions is investigated in the rotating wave approximation. Various Bloch coherent initial states (rotated states) for the atomic system are assumed. Initially the atomic system and the field
are in a disentangled state, with the field modes in Glauber coherent states, whose photon-number statistics are Poissonian. The model is numerically tested through simulations of the time
evolution of the squeezing factors based on the Heisenberg uncertainty variance and on the Shannon information entropy. The quantum state purity is computed and used as a criterion for extracting
information about the entanglement of the components of the system. An analytical expression for the matrix elements of the total density operator at t > 0 shows the present nonlinear model to be,
in fact, strongly entangled, with each of the definite initial Bloch coherent states reduced to a statistical mixture. Thus, the present model does not preserve the modulus of the Bloch vector.
Timur Grinev Chemical Physics Theory Group, Department of Chemistry, and Center for Quantum Information and Quantum Control, University of Toronto, Toronto, Ontario M5S 3H6, Canada
Coherent control and incoherent excitation dynamics of pyrazine
Coauthors: Paul Brumer
First, we present coherent control of internal conversion (IC) between the S_1 and S_2 singlet excited electronic states of pyrazine. The S_2 state is populated from the S_0 singlet state in a
weak-field excitation process. Coherent control with respect to a given control objective is performed by shaping the exciting laser. Excitation and IC are considered simultaneously. Successful
control is demonstrated by optimizing both the amplitude and phase profiles of the laser, and its dependence on the properties of the S_2 resonances is established.
Second, we present the S_0 -> S_2/S_1 photoexcitation dynamics of pyrazine under weak incoherent CW light excitation after a sudden turn-on of the light. The dynamical evolution of the S_2 and S_1
populations, as well as the purity of the excited mixed state, is studied. It is shown that the ratio of the S_1 to S_2 populations becomes constant in the long-time regime, evidence of the
spatially distributed nature of the resulting excited mixed state. At the same time, the purity of the excited mixed state decreases monotonically, but non-uniformly, to a small asymptotic value
(which is still bounded from below by the purity of the maximally mixed state).
Andres Estrada Guerra Universidad de Antioquia
Non-Markovian effects in the dynamics of entanglement in high temperature limit
Coauthors: Leonardo Pachon Contreras
In the past years, some quantum phenomena have been observed at macroscopic scales. In particular, superconductivity, coherent superpositions of Bose-Einstein condensates and interference patterns in
fullerenes have been detected. These observations have made the border between the quantum and classical realms more diffuse and intricate, though also more interesting, than before.
However, in order to observe these quantum features one usually needs to reach the low-temperature regime, E/(k_B T) >> 1, where E denotes a characteristic system energy scale and k_B T the thermal
energy. Therefore, delicate and elaborate cooling processes have been developed.
Our work aims to show that, even in the high-temperature regime, some quantum features such as entanglement can be present if the system is driven out of equilibrium. In particular, we study the
non-Markovian dynamics of two different harmonic oscillators coupled to different baths at different temperatures and with different bath-coupling strengths. We find that, despite the absence
of symmetries in the parameter space, entanglement between the oscillators can be created and maintained in the long-time regime. We also discuss the implementation of our setup for studying the
influence of non-Markovian dynamics on the optimal sideband cooling of nano-mechanical resonators.
Matin Hallaji Physics Department, University of Toronto
Quantum control of population transfer between vibrational states in an optical lattice
Coauthors: Chao Zhuang, Alex Hayat, and Aephraim M. Steinberg
We investigate two quantum control techniques, Adiabatic Rapid Passage (ARP) and Gradient Ascent Pulse Engineering (GRAPE), for realizing population transfer between vibrational states of atoms
trapped in an optical lattice. The ARP pulse gives the highest population transfer among all the techniques we have tested so far: 38.9±0.2% of the initial ground-state population is transferred into
the first excited state, which exceeds the 1/e boundary for coupling the ground and first excited vibrational states in a harmonic oscillator potential. The ARP pulse also gives the highest
normalized population inversion among all the techniques we have tested so far: the ratio of the difference between the ground-state and first-excited-state populations to their sum reaches
0.21±0.02. For the GRAPE technique, we use the GRAPE algorithm to engineer a pulse involving both displacement of the optical lattice and modulation of the lattice depth, with the fidelity between
the state after the pulse and the first excited state taken as the figure of merit. The GRAPE pulse gives as high a population transfer as the ARP pulse does: 39±2% of the initial ground-state
population is transferred into the first excited state. The GRAPE pulse outperforms the ARP pulse where leakage is concerned: it gives almost no leakage, compared to the 18.7±0.3% leakage of the
ARP pulse at the point of highest population transfer.
Wolfram Helwig University of Toronto
Absolutely Maximal Entanglement and Quantum Secret Sharing
Coauthors: Wei Cui, José Ignacio Latorre, Arnau Riera, Hoi-Kwong Lo
We study the existence of absolutely maximally entangled (AME) states in quantum mechanics and its applications to quantum information. AME states are characterized by being maximally entangled for
all bipartitions of the system and exhibit genuine multipartite entanglement. We show that these states exist for any number of parties if the system dimension is chosen appropriately, and that they
can be conveniently described within the graph states formalism for qudits.
With such states, we present a novel parallel teleportation protocol which teleports multiple quantum states between groups of senders and receivers. The notable features of this protocol are that
(i) the partition into senders and receivers can be chosen after the state has been distributed, and (ii) one group has to perform joint quantum operations while the parties of the other group only
have to act locally on their systems. We also prove the equivalence between pure-state quantum secret sharing schemes and AME states with an even number of parties.
Rolf Horn University of Waterloo, Institute for Quantum Computing
On chip generation of polarization entanglement in a monolithic semiconductor waveguide
Coauthors: Piotr Kolenderski, Dongpeng Kang, Payam Abolghasem, Carmelo Scarcella, Adriano Della Frera, Alberto Tosi, Lukas G. Helt, Sergei V. Zhukovsky, John E. Sipe, Gregor Weihs, Amr S. Helmy,
Thomas Jennewein
From unraveling the mysteries of the quantum world, to solving really hard problems, a quest of those in the quantum information community is to discover a technology that will facilitate large scale
implementations of quantum processes. In photonics, the quest starts with finding a stable and scalable source of single and entangled photons -- the building blocks of a photonic quantum computer.
Here we present the Bragg Reflection Waveguide (BRW), a tiny, stable and scalable semiconductor waveguide capable of directly producing polarization-entangled photons. Its design is perhaps the
most truly monolithic of any photon source available today: the architecture on which it is built promises electrical self-pumping, and in contrast to many other nonlinear-optics sources,
nothing is required to create entanglement but the device itself. To demonstrate this, we examine the photon pairs produced via Spontaneous Parametric Down Conversion in a 2.2 mm long, 3.8 micron
wide BRW. We perform quantum state tomography on the photon pairs, splitting them immediately after they emerge from the chip, and show their significant departure from classical behaviour.
Together with the observation of their spectra, we calculate a concurrence of approximately 0.5, demonstrate polarization-entanglement visibilities from 64% to 96% in various bases, and determine
the fidelity with a maximally entangled state to be 0.83. Combined with the BRW's truly monolithic architecture, these results mark the BRW chip as a serious contender on which to build large-scale
implementations of optical quantum processes.
Nathaniel Johnston Institute for Quantum Computing, University of Waterloo
On the Minimum Size of Unextendible Product Bases
Coauthors: Jianxin Chen
A long-standing open question asks for the minimum number of vectors needed to form an unextendible product basis in a given bipartite or multipartite Hilbert space. A solution to this problem
has applications to the construction of bound entangled states and Bell inequalities with no quantum violation. A partial solution was found by Alon and Lovasz in 2001, but since then only a few
other cases have been solved. We solve all remaining bipartite cases (i.e., where there are only 2 subsystems), all remaining qubit cases (i.e., where each local dimension is 2), as well as many
other multipartite cases.
Dongpeng Kang Department of Electrical & Computer Engineering, University of Toronto
Bragg reflection waveguides: The platform for monolithic quantum optics in semiconductors
Coauthors: Amr S. Helmy
Photon pairs are one of the most important and widely used nonclassical states of light in quantum optics. They are indispensable resources in applications such as quantum key distribution and
optical quantum computing, amongst others. One of the more popular methods to generate photon pairs is spontaneous parametric down-conversion, which requires a laser source pumping a nonlinear
crystal in a specific set of configurations. Such a system is generally bulky, fragile, and sensitive to the external environment, and is therefore useful mainly in specially equipped labs. On the
other hand, a mobile and commercially viable quantum information processing system, such as an optical quantum computer, requires chip-scale, portable, robust sources of photon pairs operating at
room temperature. Although significant progress has been made using different techniques, electrically pumped, room-temperature photon-pair sources are still unavailable. To this end, Bragg
reflection waveguides (BRWs) made of III-V semiconductors such as Aluminum Gallium Arsenide have been shown to be the most promising platform for realizing this class of sources. Efficient
photon-pair generation as well as polarization entanglement have been demonstrated in BRWs.
In this work, we first review BRWs as a platform for phase matching in isotropic and highly dispersive semiconductors. We then show that dispersion and birefringence engineering can be employed to
tailor the properties of the photon pairs, for example to generate polarization-entangled photons on-chip without any off-chip compensation or interferometry. Our results, combined with the truly
monolithic nature of BRWs, show that they could lead to fully integrated nonclassical photon sources.
Eric Kopp University of Toronto
New Control Frontiers in Noiseless Subspaces
Quantum control is largely divided into two independent problems: control for the purpose of protecting information and preventing decoherence, and control for the purpose of manipulating states
to accomplish computational goals. Achieving both control objectives simultaneously is a formidable task and has typically only been addressed for specific low-dimensional systems. Our research
examines control strategies for realizing a universal set of operations (gates) while confining states to noiseless subspaces in systems of 4 qubits and greater. These strategies are applicable
to a broad class of models and control inputs. Aspects of the research focus on recasting a specific class of noiseless subspace into a classical problem in geometric control, and addressing the
computational challenges in working with extremely large, extremely sparse tensor operator representations in an efficient way. Preliminary results will also be shown for a 4-qubit
'representative' trapped ion model with a realistic experimental setup.
Hoi Kwan Lau University of Toronto
Rapid laser-free ion cooling by controlled collision
I propose a method to transfer the axial motional excitation of a hot ion to a coolant ion with possibly different mass by precisely controlling the ion separation and the local trapping
potentials during ion collision. The whole cooling process can be conducted diabatically, involving only a few oscillation periods of the harmonic trap. With sufficient coolant ions pre-prepared,
this method can rapidly re-cool ion qubits in quantum information processing without applying lengthy laser cooling.
Hoi Kwan Lau University of Toronto
Quantum secret sharing with continuous variable cluster states
Coauthors: Christian Weedbrook
We extend the idea of cluster state quantum secret sharing to the continuous variable regime. Both classical and quantum information can be shared by distributing finitely squeezed continuous
variable cluster states through either secure or insecure channels. We show that the security key rate of the classical information sharing can be obtained by standard continuous variable quantum
key distribution techniques. We analyse the performance of quantum state sharing by computing the shared entanglement between the authorised parties and the dealer. Our techniques can be
applied to analyse the security of general continuous variable quantum secret sharing.
Xiongfeng Ma Tsinghua University
Experimental realization of measurement-device-independent quantum key distribution
Coauthors: Yang Liu, Teng-Yun Chen, Liu-Jun Wang, Hao Liang, Guo-Liang Shentu, Jian Wang, Ke Cui, Hua-Lei Yin, Nai-Le Liu, Li Li, Jason S. Pelc, M. M. Fejer, Cheng-Zhi Peng, Qiang Zhang, and Jian-Wei Pan
Throughout history, every advance in encryption has been defeated by advances in hacking, often with severe consequences. Quantum cryptography [1] holds the promise of ending this battle by offering
unconditional security when ideal single-photon sources and detectors are employed. Unfortunately, ideal devices never exist in practice, and device imperfections have become the targets of
various attacks. By developing up-conversion single-photon detectors with high efficiency and low noise, we faithfully demonstrate the measurement-device-independent quantum key distribution
(MDI-QKD) protocol [2], which is immune to all hacking strategies on detection. Meanwhile, we employ the decoy-state method [3] to defend against attacks on the non-ideal source. Assuming a trusted
source scenario, our practical system, which generates more than 25 kbits of secure key over a 50 km fiber link, serves as a stepping stone in the quest for unconditionally secure communications
with realistic devices.
The gap between ideal devices and realistic setups has been the root of various security loopholes [4], which have become the targets of many attacks [5,6]. Tremendous efforts have been made
towards loophole-free QKD with practical devices [7,8]. However, the question of whether security loopholes will ever be exhausted and closed remains open. Here, we report a QKD experiment that
closes the loopholes in detection and hence can achieve secure communication in a trusted source scenario. Firstly, ideal single-photon sources are replaced with weak coherent states of varying
mean photon intensities, a technique called the decoy-state method [3]. Secondly, by implementing the recently developed MDI-QKD protocol [2], all the detection side channels are removed from our
system.
[1]. C. H. Bennett and G. Brassard, in Proceedings of the IEEE International Conference on Computers, Systems and Signal Processing (IEEE Press, New York, 1984) pp. 175-179.
[2]. H.-K. Lo, M. Curty, and B. Qi, Phys. Rev. Lett. 108, 130503 (2012).
[3]. H.-K. Lo, X. Ma, and K. Chen, Phys. Rev. Lett. 94, 230504 (2005).
[4]. D. Gottesman, H.-K. Lo, N. Lutkenhaus, and J. Preskill, Quantum Inf. Comput. 4, 325 (2004).
[5]. V. Makarov, A. Anisimov, and J. Skaar, Phys. Rev. A 74, 022313 (2006).
[6]. B. Qi, C.-H. F. Fung, H.-K. Lo, and X. Ma, Quantum Inf. Comput. 7, 073 (2007).
[7]. D. Mayers and A. Yao, in FOCS, 39th Annual Symposium on Foundations of Computer Science (IEEE, Computer Society Press, Los Alamitos, 1998), p. 503.
[8]. A. Acin, N. Gisin, and L. Masanes, Phys. Rev. Lett. 97, 120405 (2006).
Dylan Mahler University of Toronto
Adaptive quantum state tomography improves accuracy quadratically
Coauthors: Lee A. Rozema, Ardavan Darabi, Chris Ferrie, Robin Blume-Kohout, and A.M. Steinberg
In quantum state tomography, an informationally complete set of measurements is made on N identically prepared quantum systems, and from these measurements the quantum state can be determined. In
the limit as N → ∞, the estimate of the state converges on the true state. The rate at which this convergence occurs depends on both the state and the measurements used to probe it. On the
one hand, since nothing is known a priori about the state being probed, a set of maximally unbiased measurements should be made. On the other hand, if something were known about the state being
measured, a set of biased measurements would yield a more accurate estimate. It has been shown [1, 2] that by adaptively choosing measurements, optimal accuracy in the state estimate can be
obtained regardless of the state being measured. Here we present an experimental demonstration of one- and two-qubit adaptive tomography that achieves a rate of convergence of approximately
1-O(1/N) in the quantum state fidelity, with only a single adaptive step and local measurements, as compared to 1-O(1/√N) for standard tomography.
[1] Phys. Rev. Lett. 97, 130501 (2006)
[2] Phys. Rev. A 85, 052120 (2012)
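The baseline scaling quoted above is easy to reproduce numerically. The toy simulation below (our own construction, not the authors' experiment) performs standard Pauli tomography of a single pure qubit via linear inversion plus projection onto physical states; because the state is not aligned with any measurement axis, the mean infidelity shrinks only like 1/√N. The adaptive scheme re-aligns the measurement bases after a first rough estimate, which restores the faster 1/N rate.
    import numpy as np

    rng = np.random.default_rng(0)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    paulis = (sx, sy, sz)

    alpha = 0.4  # a pure state deliberately unaligned with the axes
    psi = np.array([np.cos(alpha), np.sin(alpha)], dtype=complex)

    def standard_tomography(N):
        # Measure each Pauli on N//3 copies; linear inversion; then project
        # the estimate back onto the positive-semidefinite (physical) states.
        r = []
        for P in paulis:
            p_up = (1 + np.real(psi.conj() @ P @ psi)) / 2
            r.append(2 * rng.binomial(N // 3, p_up) / (N // 3) - 1)
        rho = 0.5 * (np.eye(2) + sum(c * P for c, P in zip(r, paulis)))
        w, v = np.linalg.eigh(rho)
        w = np.clip(w, 0, None)
        w /= w.sum()
        return (v * w) @ v.conj().T

    for N in (300, 3000, 30000):
        infid = np.mean([1 - np.real(psi.conj() @ standard_tomography(N) @ psi)
                         for _ in range(300)])
        print(N, infid)  # decreases roughly like 1/sqrt(N)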
Sebastian Duque Mesa
Relativistic Dynamical Quantum Non-locality
Coauthors: Leonardo A. Pachon
In nonrelativistic quantum mechanics, quantum correlations are largely thought to be absolute. However, when they are studied in the framework of relativistic quantum mechanics, they can depend
on the reference frame [1]. In particular, two particles can be entangled in one reference frame but unentangled in another, so quantum non-locality depends upon the reference frame.
Recently, the non-locality of quantum dynamics was traced, working in the Weyl representation of quantum mechanics, back to the superposition principle. This is a kind of single-particle
non-locality, of a different nature from that discussed above [2]. We extend this work to the relativistic framework of quantum mechanics. To do so, we review the basics of the relativistic Weyl
formalism and discuss the construction of the path-integral representation of the Wigner function, as well as the influence of the reference frame on this dynamical quantum non-locality.
[1] Robert M. Gingrich and Christoph Adami. Quantum entanglement of moving bodies. Phys. Rev. Lett., 89:270402, Dec 2002.
[2] S. Popescu. Dynamical quantum non-locality. Nature Phys., 6:151, 2010.
Leonardo A. Pachon Department of Chemistry, University of Toronto
Coherent Phase Control in Closed and Open Quantum Systems
Coauthors: Paul Brumer
The underlying mechanisms of one-photon phase control are revealed through a master-equation approach and through a path-integral approach in the energy-basis representation. Specifically,
two mechanisms are identified, one operating on the laser time scale and the other on the time scale of the system-bath interaction. The effects of the secular and non-secular Markovian
approximations are carefully examined. We discuss the possibility of enhancing this environment-assisted effect when a description based on sub-Ohmic spectral densities applies.
Kyungdeock Park (Daniel) Institute for Quantum Computing
Heat Bath Algorithmic Cooling and Multiple Rounds Quantum Error Correction Using Nuclear and Electron Spins
Coauthors: Robabeh Darabad, Ben Criger, Jonathan Baugh and Raymond Laflamme
Nuclear Magnetic Resonance (NMR)-based devices have been excellent test beds for Quantum Information Processing (QIP). However, the spin polarization bias in a typical experimental setup is very
small at thermal equilibrium, giving a highly mixed qubit, and the polarization decreases exponentially with the number of qubits. It is therefore very difficult to obtain the close-to-pure ancilla
qubits that are essential for implementing quantum error correction (QEC). Moreover, for systems to be practically stable against noise, QEC must be performed over multiple rounds, which requires
the ancilla qubits to be refreshed to very high polarization at the start of each round. In order to accomplish this in an NMR QEC experiment, we seek to implement Heat Bath Algorithmic Cooling
(HBAC) with a cold electron-spin bath. HBAC is an implementation-independent cooling method that combines reversible entropy compression with interaction with a cold external bath; it is capable of
cooling a qubit of interest far beyond the bath polarization. Electron spins possess higher polarization and faster relaxation rates than nuclear spins under similar experimental conditions, and
thus can serve as the heat bath while nuclear spins encode the system qubits. In this talk, I will present our progress towards achieving high polarization of nuclear spin qubits using an electron
spin and HBAC. In addition, I will show how this can be used in future experimental realizations of multiple rounds of three-qubit QEC.
Alexandru Paler University of Passau
Resource Optimization in Topological Quantum Computation: Verification.
Coauthors: Simon Devitt*, Kae Nemoto*, Ilia Polian+; * National Institute of Informatics, Tokyo, Japan; + University of Passau, Passau, Germany;
Recent advances in large-scale quantum architecture design have focused on utilizing topological codes to perform the necessary error correction protocols. These codes use a geometric description to
specify quantum circuits in terms of topological braiding. Recent results have introduced several techniques to optimize topological circuits by compressing the overall 3-dimensional volume of
the circuit description, which acts to minimize the total number of physical qubits and the total amount of computational time needed to realize a given circuit [1].
These compression techniques have as yet only been applied manually, and only to small topological circuits, for which it is reasonably straightforward to check that no mistakes are made. Future
classical programs and game-based efforts [2] used to compile and optimize topological circuits will be automated and used to compress extremely large topological structures. As with
classical circuit designs, the output of these automated protocols must be verified before being accepted.
In this presentation we will outline the steps required to verify topological quantum circuits. We will illustrate several algorithmic steps that are required in order to accurately check the
function of optimized circuits without having to directly simulate topological computation.
[1] A.G. Fowler and S.J. Devitt, arXiv:1209.0510
[2] www.qubit-game.com
Nicolas Quesada University of Toronto
Self-calibrating tomography for non-unitary processes
Coauthors: Agata M. Branczyk and Daniel F.V. James
Characterizing quantum states and processes is a key step in many quantum information and quantum computing protocols [1]. We build on the idea of using an incompletely characterized process
to perform quantum state tomography---known as self-calibrating tomography [2,3]---by including the possibility that the process itself is not unitary. We study a two-level atom, with an unknown
dipole moment, that undergoes spontaneous emission and is irradiated by a laser whose phase and intensity can be controlled at will. We show that by using five different parameter settings of the
electric field of the laser it is possible to reconstruct the state as well as obtain the unknown spontaneous emission rate and dipole moment of the atom---simultaneously performing quantum state
and quantum process tomography.
[1] M. A. Nielsen and I. L. Chuang, Quantum computation and quantum information (Cambridge university press, 2010).
[2] A. Branczyk, D. H. Mahler, L. A. Rozema, A. Darabi, A. M. Steinberg, and D. F. James, “Self-calibrating quantum state tomography,” New Journal of Physics 14, 085003 (2012).
[3] N. Quesada, A. M. Branczyk, and D. F. James, “Self-calibrating tomography for multi-dimensional systems,” arXiv preprint arXiv:1212.0556 (2012).
Katja Ried Perimeter Institute for Theoretical Physics
Quantum process tomography with initial correlations
Coauthors: Robert W. Spekkens
When preparing input states for quantum process tomography (QPT), one may face undesired correlations between the system and environment degrees of freedom. In this case the results obtained by
the standard QPT scheme may not characterize the process in question accurately. Instead, the data may reflect properties of the joint initial state of system and environment, as one would expect
in quantum state tomography (QST). We present a unified framework for QPT and QST that can handle this scenario, and we report on progress in distinguishing the “process-type” from the
“state-type” contributions in such tomographic data.
Christoph Reinhardt McGill University
Design of a Strong Optomechanical Trap
Coauthors: Simon Bernard, Jack Sankey
We report progress toward an optomechanical setup in which a partially-reflective micromechanical element is positioned within an optical cavity formed by two rigidly-fixed mirrors. This
three-mirror system provides a highly versatile platform for studying new optomechanical effects; in particular, it is possible to generate a nonlinear coupling in which the cavity resonance
varies quadratically as a function of mechanical displacement, enabling (among other things) a strong cavity optical trap. We fabricate our mechanical elements by patterning free-standing silicon
nitride membranes into lightweight, weakly-tethered trampolines so that a strong optical trap can completely dominate over the forces exerted by the supporting material. Such systems are
predicted to achieve extraordinarily high mechanical quality factors, and our ultimate goal is to use them to sense incredibly small forces, such as those exerted by quantum systems prepared in
superposition states.
Lee Rozema University of Toronto
Experimental Demonstration of Quantum Data Compression
Coauthors: Dylan H. Mahler, Alex Hayat, Peter S. Turner, and Aephraim M. Steinberg
In quantum state tomography N identically prepared copies of a quantum state are measured to reconstruct a density matrix describing the single particle state. One purpose of reconstructing a
density matrix is to allow the prediction of measurements that could have been made on the initial state. On the other hand, if only one measurement is of interest then performing that
measurement on each of the N copies of the state will yield the most accurate estimate. However, if the measurement choice is not yet known, the quantum states must be stored in a quantum memory
until a later time. The question then becomes: how much memory is required?
Hilbert space grows exponentially with the number of qubits: the dimensionality of an N-qubit system is 2^N, but if all of the qubits are identical, the initial N-qubit state is confined to the
symmetric subspace, which has dimension N+1. Physically, the information in the initial state can be mapped onto the first log_2(N+1) qubits using the Quantum Schur-Weyl transform (QSWT),
leading to an exponential savings in space.
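The bookkeeping behind this saving is simple to check directly (illustrative snippet, ours):
    import math

    # N identical qubits live in the symmetric subspace of dimension N + 1,
    # so ceil(log2(N + 1)) qubits suffice to hold them, instead of N.
    for N in (3, 7, 15, 100):
        print(N, "qubits ->", math.ceil(math.log2(N + 1)), "qubits",
              "(full dim", 2**N, ", symmetric dim", N + 1, ")")
For N = 3 this gives exactly the three-into-two compression demonstrated below.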
Here, we present an experiment compressing three qubits into two. In our experiment the three qubits are encoded in two photons. The first photon encodes a path and a polarization qubit, while
the second photon encodes a single polarization qubit. We use the QSWT to map all of the information from the 3 qubits onto the path and polarization qubits encoded in the first photon, allowing
us to discard the second photon.
Lena Simine Chemical Physics Theory Group, Dept. of Chemistry, University of Toronto
Numerical simulations of molecular conducting junction: transport and stability
Coauthors: Dvira Segal
We present a computational study of a minimalistic molecular conducting junction using a numerically exact path-integral method. The effects of bias-induced vibrational instability and of mode
equilibration with secondary phonon modes are investigated. We also address the competition between direct tunneling and phonon-assisted transport, and look into the thermoelectric regime.
Comparison of the exact numerical simulations with perturbative master-equation results indicates the importance of high-order electron-phonon scattering processes.
Xin Song University of Toronto
Enhanced probing of fermionic interaction using weak-value amplification
Coauthors: Amir Feizpour, Yao Tian, Alex Hayat, Aephraim Steinberg
We demonstrate a scheme for enhanced probing of an interaction between two single fermions, probing the spin-dependent energy splitting of an excitonic system in semiconductor quantum dots
using weak-value amplification. Since both the spin and the energy of the anisotropic electron-hole exchange interaction in quantum dots can be mapped onto emitted photons, we can use the
polarization of these emitted photons to initialize and post-select the system. By preparing and post-selecting the emitted photons in two quasi-orthogonal polarization states |i> and |f>, which
satisfy |<f|i>| << 1, we obtain an enhanced outcome of the weak value <A>=<f|A|i>/<f|i>, which is proportional to the energy splitting of the excitonic system. Weak-value amplification provides an
effective technique for enhanced-precision measurement of fermionic systems when the limiting factor is slow noise.
Zhiyuan Tang University of Toronto
Experimental demonstration of polarization encoding measurement-device-independent quantum key distribution
Coauthors: Zhongfa Liao, Feihu Xu, Bing Qi, Li Qian, Hoi-Kwong Lo
Measurement-device-independent quantum key distribution (MDI-QKD) has been proposed to close all the potential security loopholes due to imperfections in the detectors, without compromising the
performance of a standard QKD system [1]. Various experimental attempts at MDI-QKD have been reported in both time-bin [2, 3] and polarization encoding [4]. We remark that in [2, 4] only Bell-state
measurements with different combinations of BB84 states and photon levels were conducted, and thus no real MDI-QKD (which requires that Alice and Bob randomly switch their qubits’ states and
intensity levels) was implemented. A complete time-bin-encoding MDI-QKD experiment has been reported in [3]. However, phase randomization, a crucial assumption in the security of QKD, is neglected
in their experiment, which leaves the system vulnerable to attacks on the imperfect weak coherent sources [5].
Here we report the first complete demonstration of polarization-encoding MDI-QKD, over 10 km of optical fiber. The decoy-state technique is employed to estimate the gain and error rate of
single-photon signals. Photon levels and probability distributions for the signal and decoy states are chosen numerically to optimize the key rate. Active phase randomization is implemented for the
first time in MDI-QKD to protect against attacks on the imperfect sources. A 1600-bit secure key is generated in our experiment. Our experiment verifies the feasibility of implementing MDI-QKD with
polarization encoding.
[1] H. –K. Lo, M. Curty , and B. Qi, “Measurement-Device-Independent Quantum Key Distribution,” Phys. Rev. Lett. 108, 130503 (2012).
[2] A. Rubenok, et al., “A Quantum Key Distribution Immune to Detector Attacks,” arXiv: 1204.0738.
[3] Y. Liu, et al., “Experimental Measurement-Device-Independent Quantum Key Distribution,” arXiv:1209.6178.
[4] T. Ferreira da Silva, et al., “Proof-of-Principle Demonstration of Measurement Device Independent QKD Using Polarization Qubits,” arXiv: 1207.6345.
[5] Y. Tang, et al., “Source Attack of Decoy State Quantum Key Distribution Using Phase Information,” arXiv: 1304.2541.
Johan F. Triana Instituto de Física, Universidad de Antioquia
The Quantum Limit at Thermal Equilibrium
Coauthors: Leonardo A. Pachón (Instituto de Física, Universidad de Antioquia)
The aim of constructing and designing machines that work at the nanometre scale, such as atomic motors, photocells, gyrators or heat engines, has boosted the development of a quantum version
of thermodynamics. One of the foundational conundrums in this emerging field is to what extent nanomachines can display quantum features, and how this quantum behaviour could be used to improve
their efficiency. Intuitively, one might suggest that if the energy of the thermal fluctuations is much smaller than the typical energy scale of the nanosystem, then there is room for the
nanosystem to reveal its quantum nature. However, as has been discussed recently in almost all fields related to quantum mechanics (e.g., quantum information science, quantum biophysics,
nanotechnology, quantum chemistry and condensed matter physics), the border between the quantum and classical operating regimes is far from trivial. We predict here, at thermodynamic
equilibrium, the existence of a regime where, e.g., nanoelectromechanical structures or optomechanical systems can be found in an entangled state at high temperature, assisted by non-Markovian
interactions. Complementarily, we report the existence of a second regime, characterized by Markovian interactions at low temperature, where quantum nanodevices do not thermalize into the
canonical Boltzmann distribution, and therefore all their thermodynamical properties are expected to deviate even from current quantum thermodynamics. Our findings not only provide solid
ground for understanding the presence of quantum features in many current investigations of biological and man-made systems, but also point out the direction to follow in protecting and
isolating quantum systems.
Timur Tscherbul Department of Chemistry and Centre for Quantum Information and Quantum Control, University of Toronto
Quantum coherent dynamics of Rydberg atoms driven by cold blackbody radiation
Coauthors: Paul Brumer
The interaction of incoherent blackbody radiation with atoms and molecules is usually considered in the framework of Markovian rate equations parametrized by the Einstein coefficients, leading to
a linear increase of excited-state populations with time. While the validity of the rate equations is justified by the extremely short coherence time of hot blackbody radiation (2 ps at a
temperature of 4100 K), deviations from the linear behavior are expected on shorter time scales. By solving perturbative equations of motion for the density matrix of an atom interacting with a
cold thermal reservoir of radiation field modes, we obtain the dynamics of eigenstate populations and coherences without invoking the Markovian approximation. The theory is applied to examine the
coherent effects in highly excited Rydberg atoms subject to the cosmic microwave background radiation.
X. Xing University of Toronto
Multidimensional quantum information based on temporal photon modulation
Coauthors: A. Hayat, A. Feizpour, and A. M. Steinberg.
Multidimensional quantum information processing has been shown to open a wide range of possibilities. The spatial degree of freedom has been recently employed to encode multidimensional quantum
information using photon orbital angular momentum. This approach, however, is not suitable for the single-mode fiber-optical communication infrastructure. We demonstrate experimentally a
multidimensional quantum information encoding approach based on temporal modulation of single photons, where the Hilbert space can be spanned by an in-principle infinite set of orthonormal
temporal profiles. We implement the temporal encoding using a scheme where the projection onto temporal modes is realized by an electro-optical modulator and a narrow-band optical filter. The
demonstrated temporal multidimensional quantum encoding allows quantum communication over existing fiber optical infrastructure, as well as probing multidimensional time entanglement approaching
the limit of continuous-time measurements.
Zhen Zhang Tsinghua University
Decoy-state quantum key distribution with biased basis choice
Coauthors: Zhengchao Wei,Weilong Wang, Xiongfeng Ma
Quantum key distribution (QKD) plays an important role in the field of quantum information. The most well-known QKD scheme is the BB84 protocol [1], in which a single-photon source is assumed. In
reality, a perfect single-photon source does not exist; instead, highly attenuated lasers are widely used for QKD. The multi-photon component of a laser source leads to a security threat (e.g., the
photon-number-splitting attack [2]). The decoy-state method was proposed to address this issue by using more than one intensity, and its security has been proven by Lo, Ma, and Chen [3].
Meanwhile, in the original BB84 protocol, Alice encodes the key information randomly into the X and Z bases with equal probability, and Bob measures the received qubits in the two bases randomly
with equal probabilities. We define the basis-sift factor as the ratio between the lengths of the sifted key and the raw key. The basis-sift factor of the original BB84 protocol is 1/2. The
efficient BB84 protocol proposed by Lo, Chau and Ardehali [4], in which Alice and Bob put a bias between the probabilities of choosing the Z and X bases, can improve the basis-sift factor up to 100%.
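The basis-sift factor is easy to compute for any common bias (our illustrative snippet; the bias values are arbitrary):
    # Probability that Alice's and Bob's independently chosen bases agree,
    # for a common probability p_z of picking the Z basis.
    def sift_factor(p_z):
        return p_z**2 + (1 - p_z)**2

    for p_z in (0.5, 0.9, 0.99):
        print(p_z, sift_factor(p_z))  # 0.5 when unbiased; -> 1 as bias grows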
In this work, we propose a QKD protocol that combines the decoy-state method with the efficient BB84 protocol. In this scheme, Alice sends all signal states in the Z basis. We optimize the
probabilities with which Alice sends decoy, signal and vacuum states, the probabilities of the X and Z bases in the decoy states, the probabilities with which Bob chooses the X and Z measurement
bases, and the intensity of the decoy state. Our simulation results show that, after taking statistical fluctuations into account, our protocol can improve the key rate by at least 45% compared to
the original decoy-state protocol.
[1] C. H. Bennett and G. Brassard, in Proceedings of the IEEE International Conference on Computers, Systems and Signal Processing (IEEE Press, New York, 1984) pp. 175-179.
[2] G. Brassard, N. Lutkenhaus, T. Mor, and B. C. Sanders, Phys. Rev. Lett. , 85, 1330 (2000).
[3] H.-K. Lo, X. Ma, and K. Chen, Phys. Rev. Lett. 94, 230504 (2005).
[4] H.-K. Lo, H. F. Chau , and M. Ardehali, Journal of Cryptology 18, 133 (2005).
Tian Wang Institute for Quantum Science and Technology, University of Calgary
Demonstrating macroscopic entanglement based on Kerr non-linearities requires extreme phase resolution
Coauthors: Roohollah Ghobadi, Sadegh Raeisi, Christoph Simon
Entangled coherent states, which can in principle be created using strong Kerr non-linearities, allow the violation of Bell inequalities even for very coarse-grained measurements. This seems to
contradict a recent conjecture that observing quantum effects in macroscopic systems generally requires very precise measurements. However, here we show that both the creation of the required
states and the required measurements rely on being able to control the phase of the necessary Kerr-nonlinearity-based unitary operations with extreme precision, and that the requirement on phase
control increases dramatically with the size of the cat state. This lends support to the idea that there is a general principle making macroscopic quantum effects difficult to observe,
even in the absence of decoherence.
Feihu Xu University of Toronto
Measurement Device Independent Quantum Key Distribution in a Practical Setting
Coauthors: Marcos Curty, Bing Qi, Wei Cui, Charles Ci Wen Lim, Kiyoshi Tamaki, and Hoi-Kwong Lo
A ground-breaking scheme – measurement device independent QKD (MDI-QKD) [Phys. Rev. Lett. 108, 130503, 2012] – was proposed to solve the “quantum hacking” problem. More precisely, MDI-QKD removes
all attacks in the detection system, the most important loophole of QKD implementations. It is highly practical and can be implemented with standard optical components. Very recently, MDI-QKD has
been demonstrated by a number of research groups, but before it is applicable in real life, it is important to resolve a number of practical issues.
In this paper, we solve the practical issues in the real-life implementations of MDI-QKD. Firstly, we study the physical origins of the quantum bit error rate in real-life MDI-QKD by proposing
general models for various practical errors. Secondly, we present a rigorous method to study both the finite-decoy protocol and the finite-key analysis. In the finite-key analysis, we use the
Chernoff bound to estimate the statistical fluctuations and consider the smooth min-entropy formalism to analyze the finite-key effect. Finally, we offer a general framework to evaluate the
optimal choice of intensities of signal and decoy states. Our result is of particular interest both to researchers hoping to demonstrate MDI-QKD and to others performing non-QKD experiments
involving quantum interference.
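To give a feel for the statistical-fluctuation step in such finite-key analyses, here is a minimal sketch (ours; for simplicity it uses the Hoeffding bound in place of the Chernoff bound employed in the paper):
    import math

    def hoeffding_deviation(n, eps):
        # With probability at least 1 - eps, the observed frequency of an
        # event over n trials exceeds its expectation by at most this amount
        # (one-sided Hoeffding inequality).
        return math.sqrt(math.log(1 / eps) / (2 * n))

    for n in (10**4, 10**6, 10**8):
        print(n, hoeffding_deviation(n, 1e-10))  # shrinks like 1/sqrt(n)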
Eric Zhu Department of Electrical & Computer Engineering, University of Toronto
Broadband Polarization Entanglement Generation in a Poled Fiber
Coauthors: Z. Tang, L. Qian, L.G. Helt, M. Liscidini, J.E. Sipe, C. Corbari, P.G. Kazansky
In this paper, we give an overview of our recent work on poled twin-hole fiber, a fiber that has a non-zero second-order nonlinearity. Together with the intrinsic form birefringence of the fiber,
this has allowed us to exploit the type-II phase-matched parametric down-conversion process to generate high-fidelity polarization-entangled photon pairs. We emphasize that this generation of
polarization entanglement is direct, without the need for interferometric means or walk-off compensation.
The quality of our source is examined through a number of characterization techniques, including two-photon interference, Hong-Ou-Mandel interference, and quantum state tomography. Furthermore,
the unique dispersion properties of the poled fiber allow for broadband polarization entanglement over 100 nm centered at 1550 nm, opening up our source to many potential applications, from
high-resolution quantum optical coherence tomography to wavelength-division-multiplexed schemes for distributing entangled photons to multiple pairs of parties for quantum cryptography and other
exciting quantum technologies. | {"url":"http://www.fields.utoronto.ca/programs/scientific/13-14/CQIQCV/poster-abstracts.html","timestamp":"2014-04-17T12:44:12Z","content_type":null,"content_length":"123338","record_id":"<urn:uuid:b33292ee-df61-4da5-9fc5-71cc695253d8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00421-ip-10-147-4-33.ec2.internal.warc.gz"} |
A free group of finite rank can contain free groups of infinite rank as subgroups
How does a free group of rank greater than or equal to 3 contain every free group of countable rank as a subgroup?
Actually a free group of rank 2 contains a free group of countable rank as a subgroup, namely the commutator subgroup. The easiest way to see this is to view the free group as the fundamental group
of a wedge of 2 circles. The covering space associated to the commutator subgroup is the Cayley graph of ZxZ, or if you like, the grid formed by the integer lattice in R^2. Clearly the fundamental
group of this covering space is free of countable rank since there are countably infinitely many edges off a spanning tree. You can take the x-axis and all vertical lines as the spanning tree.
In case an explicit description of such a subgroup would help, let $F_2 = \langle a,b|\rangle$ be the free group on 2 generators. Then the subgroup generated by the elements $\{a^nb^n\}_{n\in\mathbb
{N}}$ is free since it is a subgroup of a free group. Next, it is easy to check that the generator $a^kb^k$ is not an element of the free group generated by the $a^\ell b^\ell$ for $\ell\neq k$ so
one concludes that this subgroup has infinite rank.
Even better, no normal subgroup of infinite index of a group of cohomological dimension at most two is finitely presented (for free groups, it must be free, so countable rank).
EDIT: Bieri's theorem states that if $G$ is a group of cohomological dimension at most two, while $N$ is a normal subgroup of $G$ of infinite index, then either $N$ is free, or $N$ is not finitely
presentable. Sigh. I was thinking of $G$ a free or a surface group, in which case my statement is correct (it is a theorem of Jaco, easy modulo a not-so-easy theorem of Whitehead, that a subgroup of
a surface group is free if and only if it is of infinite index).
There is a very nice theorem which states:
Thm: If $F$ is a free group on $n$ generators and $H$ is a normal subgroup of index $j$, then: if both $n$ and $j$ are finite, $H$ is a free group on $j(n-1)+1$ generators; if $n$ is infinite and $j$ is finite, then $H$ is a free group on infinitely many generators. Finally, if $j$ is infinite, then $H$ may be finitely or infinitely generated; however, if $H$ contains a normal subgroup $N$ of $F$, $N\neq 1$, then $H$ is a free group on infinitely many generators.
This is theorem 2.10 in Magnus, Karrass and Solitar, "Combinatorial Group Theory" (and has a name... Schreier's formula?).
Once you have this theorem you can prove, for example, that the commutator subgroup is a free group on infinitely many generators (it is characteristic, hence normal, has infinite index, and contains a nontrivial normal subgroup of $F$, namely itself, so you can apply the last line of the theorem). But doing it the way Benjamin Steinberg did it is much neater. I just thought it would be useful to mention this formula!
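A tiny sanity check of the finite-index clause (my own illustration; the function name is invented):

    def schreier_rank(n, j):
        """Rank j*(n - 1) + 1 of a subgroup of index j in a free group of
        rank n (the Nielsen-Schreier / Schreier index formula); here both
        n and j are assumed finite."""
        return j * (n - 1) + 1

    # Known example: in F_2, the subgroup of even-length words (the kernel
    # of the map onto Z/2 sending both generators to 1) has index 2, with
    # Schreier generators a^2, ab and ba^{-1} for the transversal {1, a}.
    assert schreier_rank(2, 2) == 3

Note the formula says nothing about the infinite-index case, which is exactly where the commutator subgroup (and the question) lives.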
Algorithmic Problem Solving
Research Group, University of Nottingham
The algorithmic problem solving group conducts research into mathematical method, in particular the problem-solving skills involved in the formulation and solution of algorithmic problems. Our goal
is to articulate these skills primarily by way of concrete examples, but also by the development of appropriate mathematical theory.
Algorithmic problems are problems where the solution involves, possibly implicitly, the design of an algorithm. Algorithmic problem solving is about the formulation and solution of such problems.
The demands on the reliability of computer software have, we believe, led to massive improvements in our problem-solving skills and in mathematical method. The improvements are centred on goal-directed, calculational construction of algorithms, as opposed to the traditional guess-and-verify methodology.
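As a small outside illustration of this calculational style (my own example, not taken from the group's pages): rather than guessing a fast exponentiation loop and verifying it afterwards, one chooses an invariant that implies the goal and calculates the loop body from it.

    def power(x, n):
        """Compute x**n for integer n >= 0, derived from the invariant
        r * b**e == x**n rather than guessed and verified after the fact."""
        r, b, e = 1, x, n              # invariant holds: 1 * x**n == x**n
        while e > 0:
            if e % 2 == 1:
                r, e = r * b, e - 1    # make e even; invariant preserved
            b, e = b * b, e // 2       # b**e unchanged; e strictly decreases
        return r                       # e == 0, so invariant gives r == x**n

    assert power(3, 13) == 3 ** 13

Each step is justified by a small calculation (the invariant is preserved and the bound e decreases), so correctness is built in rather than checked afterwards.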
Of course, many algorithmic problems still pose massive challenges, and we have a very great deal to learn about good and bad technique in solving such problems. We believe, however, that the time is
now ripe for a greater focus on the methodology of problem solving, rather than on specific results. Our goal is to ensure that future generations are much better problem solvers than we are, not
because they know more facts but because their skills are more refined.
We aim to achieve our goals using a problem-driven approach. We intend to tackle challenging problems and document our successes and failures in solving these problems. The choice of problem is
crucial. We have no intention of trying to win the jackpot by tackling famous outstanding problems; instead, we will tackle problems that we suspect are within our grasp and from which we can learn
the most. (Of course, such problems may one day include some famous outstanding problems!) These may be new problems, including ones we invent ourselves in order to explore particular techniques, or
old problems, where we feel there is scope for improvement of the existing method.
The group holds an informal weekly club meeting (called the “Tuesday Morning Club”). The current interests cover topics like (algorithmic) number theory, calculational mathematics and program
construction. If you are interested, or would like to obtain more information, please do not hesitate to contact (one of) the current members; time and day of week vary throughout the year depending
on teaching commitments.
Reports and Publications M. Ayala-Rincón
See also: MathSciNet DBLP IEEE ACM Scopus
• D.M. Muñoz-Arboleda, C. H. Llanos, L. Coelho and M. Ayala-Rincón, Accelerating the Artificial Bee Colony Algorithm by Hardware Parallel Implementations. In 3rd IEEE LA Symp. on Circuits and Systems (LASCAS 2012). (doi)
• J. Arias-Garcia, C. H. Llanos, M. Ayala-Rincón and R. P. Jacobi, A fast and low cost architecture developed in FPGAs for solving systems of linear equations. In 3rd IEEE LA Symp. on Circuits and Systems (LASCAS 2012). (doi)
• J. Arias-Garcia, C. H. Llanos, M. Ayala-Rincón and R. P. Jacobi, FPGA implementation of large-scale Matrix Inversion using single, double and custom floating-point precision. In VIII Southern Conference on Programmable Logic (SPL 2012), IEEE Proc., p. 1-6, 2012. (doi)
• M. Ayala-Rincón Reusing Formal Proofs Through Isomorphisms. Invited Talk LACREST 2012 (PDFs: abstract, Proc. LACREST 2012, talk).
• D. Saad and M. Ayala-Rincón A Compressed suffix array based index with succinct longest common prefix information. Short version accepted BSB 2012.
• A.C. Rocha Oliveira and M. Ayala-Rincón Formalizing the Confluence of Orthogonal Rewriting Systems. Presented in LSFA 2012. EPTCS Vol. 113:145-152, 2013 (doi, Extended Version, PVS tgz file)
• D. Nantes Sobrinho, M. Fernández and M. Ayala-Rincón Elementary Deduction Problem for Locally Stable Theories with Normal Forms, PDF. Presented in LSFA 2012. EPTCS Vol. 113:45-60, 2013 (doi)
• J.L. Soncco-Álvarez and M. Ayala-Rincón Sorting Permutations by Reversals through a Hybrid Genetic Algorithm based on Breakpoint Elimination and Exact Solutions for Signed Permutations, PDF.
Special Issue best papers CLEI 2012, ENTCS Vol. 292:119-133, 2013 (doi) .
• T.A. de Lima and M. Ayala-Rincón Complexity of Reversal Distance and other General Metrics on Permutation Groups, PDF. 7CCC 2012. (doi)
• J.L. Soncco-Álvarez and M. Ayala-Rincón A Genetic Approach with a Simple Fitness Function for Sorting Unsigned Permutations by Reversals, Code, PDF. 7CCC 2012. (doi)
• A.L. Galdino and M. Ayala-Rincón A Formalization of the Knuth-Bendix(-Huet) Critical Pair Theorem (doi, PVS files: TRS.tgz). J. of Automated Reasoning 45:301-325, 2010.
• D.M. Muñoz-Arboleda, D. F. Sanchez, C. H. Llanos and M. Ayala-Rincón, FPGA Based Floating-point Library for CORDIC Algorithms. In Proc. IEEE VI Southern Programmable Logic Conference (SPL 2010) (doi), pp 55-60, 2010.
• D. Nantes Sobrinho and M. Ayala-Rincón, Reduction of the Intruder Deduction Problem into Equational Elementary Deduction for Electronic Purse Protocols with Blind Signatures, PDF, doi. Proc. 17th WoLLIC, LNCS vol. 6188, pag. 218-231, 2010.
• A.B. Avelar, F.L.C. de Moura, M. Ayala-Rincón, and A.L. Galdino Verification of the Completeness of Unification Algorithms à la Robinson, PDF, doi. Proc. 17th WoLLIC, LNCS vol. 6188, pag. 110-124, 2010.
• D.L. Ventura, M. Ayala-Rincón and F. Kamareddine, Intersection type systems and explicit substitutions calculi, doi. Proc. 17th WoLLIC, LNCS vol. 6188, pag. 232-246, 2010.
• R. B. Nogueira, A.C.A. Nascimento, F.L.C. de Moura and M. Ayala-Rincón, Formalization of Security Proofs Using PVS in the Dolev-Yao Model (PDF, PVS files: verifD_Y.tgz). Booklet Proc. Computability in Europe CiE, 2010.
• D.M. Muñoz-Arboleda, C. H. Llanos, L. Coelho and M. Ayala-Rincón, Accelerating the Shuffled Frog Leaping algorithm by parallel implementations in FPGAs. In Proc. IEEE Fifth International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA 2010) (doi), pp 1526-34, 2010.
• D.M. Muñoz-Arboleda, C. H. Llanos, L. Coelho and M. Ayala-Rincón, Comparison between two FPGA implementations of the Particle Swarm Optimization algorithm for high-performance embedded applications. In Proc. IEEE Fifth International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA 2010) (doi), pp 1637-45, 2010.
• D.M. Muñoz-Arboleda, D. F. Sanchez, C. H. Llanos and M. Ayala-Rincón, Tradeoff of FPGA Design of a Floating-point Library for Arithmetic Operators. JICS Journal of Integrated Circuits and Systems 5:42-52, 2010.
• E.F.O. Sandes, A.C.M.A. de Melo, M. Ayala-Rincón, Comparação Paralela Exata de Sequências Biológicas Longas em Clusters de Computadores. I2TS 2005.
• R. P. Jacobi, M. Ayala-Rincón, L. G. A. Carvalho, C. Llanos and R. Hartenstein, Reconfigurable Systems for Sequence Alignment and for General Dynamic Programming. Genetics and Molecular Research,
4(3):543-552, 2005.
• André Braga, C. Llanos, M. Ayala-Rincón and R. P. Jacobi, VANNGen: a Flexible CAD Tool for Hardware Implementation of Artificial Neural Networks, (Postscript, 370 KB). In IEEE Computer Society,
Int. Conf. on Reconfigurable Computing and FPGAs - ReConFig05, 2005.
• M. Ayala-Rincón and P. D. Conejo, A Linear Time Lower Bound on McCreight and General Updating Algorithms for Suffix Trees, in Algorithmica, 37(3):233-241, Springer-Verlag, (communicated by F. P.
Preparata), 2003.
• M. Ayala-Rincón, R. B. Nogueira, C. Llanos, R. P. Jacobi and R. Hartenstein, Efficient Computation of Algebraic Operations over Dynamically Reconfigurable Systems Specified by Rewriting-Logic Environments (PDF). In IEEE CS press Proc. 23rd SCCC, pp 60-69, 2003.
• M. Ayala-Rincón and F. Kamareddine, On Applying the lambda s[e]-Style of Unification for Simply-Typed Higher Order Unification in the Pure lambda Calculus. Special Issue of WoLLIC 2001 selected
papers, John T. Baldwin, Ruy J. G. B. de Queiroz and Edward H. Haeusler, Eds. Matemática Contemporânea, Vol. 24:1-22, 2003.
• M. Ayala-Rincón, R. B. Nogueira, R. P. Jacobi, C. Llanos and R. Hartenstein, Modeling a Reconfigurable System for Computing the FFT in Place via Rewriting-Logic (Postscript, X MB; PDF, X MB). In IEEE CS Press Proc. 16th Symposium on Integrated Circuits and System Design - SBCCI 03, pp 205-210, São Paulo, Brasil (Sep 8-11, 2003).
• R. Hartenstein, R. P. Jacobi, M. Ayala-Rincón and C. Llanos, Using Rewriting-Logic Notation for Functional Verification in Data-Stream Based Reconfigurable Computing (Postscript, X MB; PDF, X MB). In Forum on Specification and Design Languages - FDL 03, Frankfurt, Germany (Sep 23-26, 2003).
• M. Ayala-Rincón, R. Hartenstein, R. Maya Neto, R. P. Jacobi and C. Llanos, Architectural Specification, Exploration and Simulation Through Rewriting-Logic, (Postscript, 134 KB). Colombian Journal
of Computation, Vol 3(2):20-34, 2003.
• M. Ayala-Rincón, and C. Muñoz, Explicit Substitutions and All That, postscript version. Colombian Journal of Computation, 1(1):47-71, 2000. Also available as NASA ICASE Technical Report 2000-45.
• M. Ayala-Rincón, and F. Kamareddine, Strategies for Simply-Typed Higher Order Unification via lambda s[e]-Style of Explicit Substitution, in Proc. The Third International Workshop on Explicit
Substitutions Theory and Applications to Programs and Proofs (WESTAPP 2000), pages 3-17. Held in conjunction with RTA2000, Norwich, England, 10-13 July 2000.
• M. Ayala-Rincón, and F. Kamareddine, Unification via lambda s[e]-Style of Explicit Substitution, in Proc. 2nd International Conference on Principles and Practice of Declarative Programming (PPDP 2000), pages 163-174, ACM Press. Held as part of PLI 2000, Montreal, Canada, 17-22 September 2000. (Talk (Postscript), 355 KB)
• M. Ayala-Rincón, Church-Rosser Property for Conditional Rewriting Systems with Built-in Predicates as Premises, chapter in Frontiers of Combining Systems 2 (Studies in Logic and Computation, 7),
Dov M. Gabbay and Maarten de Rijke, editors, Research Studies Press/Wiley, 17-38, 1999.
• M. Ayala-Rincón and F. Kamareddine, Higher Order Unification via lambda s[e]-Style of Explicit Substitution, Technical Report, ULTRA Group (Useful Logics, Type Theory, Rewriting Systems and Their Applications), CEE, Heriot-Watt University, Edinburgh, Scotland (link to ULTRA publications with postscript file, 600 KB). 53 pages, December 1999.
• M. Ayala-Rincón and L. M. Gadelha: Some Applications of (Semi-)Decision Algorithms for Presburger Arithmetic in Automated Deduction based on Rewriting Techniques, in La Revista de la Sociedad Chilena de Ciencia de la Computación, 2(1):14-23, 1997 (Edited in June 1998).
• M. Ayala-Rincón and P. D. Conejo, A Linear Time Lower Bound on Updating Algorithms for Suffix Trees, in Proc. String Processing and Information Retrieval: A South American Symposium, IEEE
Computer Society, Santa Cruz de La Sierra, Bolivia. September 1998.
• M. Ayala-Rincón, Church-Rosser Property for Conditional Rewriting Systems with Built-in Predicates as Premises, in Proc. Frontiers of Combining Systems, Amsterdam, Holland. October 1998. See
chapter version in 1999.
• I. E. T. de Araújo and M. Ayala-Rincón, An Algorithm for General Unification Modulo Presburger Arithmetic, in First Brazilian Workshop on Formal Methods, Porto Alegre, Brasil. October 1998.
• M. Ayala-Rincón: A Deductive Calculus for Conditional Equational Systems with Built-in Predicates as Premises, in Revista Colombiana de Matemáticas 31(2):77-98, 1997 (Edited in December 1998).
See paper in the nearest EMIS mirror: Brasil, Germany or USA.
• M. Ayala-Rincón: A Deduction Procedure for Conditional Rewriting Systems with Built-in Predicates, in Proc. SEMISH'97, Brasília, Brasil, August 1997 (Gzipped Postscript, 74 KB).
• L. M. Gadelha and M. Ayala-Rincón: Métodos de Decisão para a Aritmética de Presburger na Dedução Automática com Técnicas de Reescrita, Abstract CNMAC'97 (XX Congresso Nacional de Matemática Aplicada e Computacional), Gramado, Brasil, September 1997 (Gzipped Postscript, 24 KB). In Portuguese.
• L. M. Gadelha and M. Ayala-Rincón: Applications of Decision Algorithms for Presburger Arithmetic in Rewrite Automated Deduction, in Proc. PANEL'97 (XXIII Latin American Conference on Informatics), Valparaíso, Chile, November 1997 (Gzipped Postscript, 56 KB).
• M. Ayala-Rincón: Confluence of Conditional Rewriting Systems with Built-in Predicates and Standard Premises as Conditions, in Proc. SEMISH'94, Caxambú, Brasil, July-August 1994 (Compressed
Postscript, 82 KB).
• M. Ayala-Rincón: Problemas actuales en el campo de la reescritura, in Revista Escuela Colombiana de Ingeniería, 4(10):27-31, January-March 1993. In Spanish.
• M. Ayala-Rincón: Expressiveness of Conditional Equational Systems with Built-in Predicates, PhD thesis under the supervision of Prof. Dr. K. Madlener, Fachbereich Informatik, Universität Kaiserslautern, Germany, December 1993. In English.