Shop Essentials Training
Tooling U classes are offered at the beginner, intermediate, and advanced levels. The typical class consists of 12 to 25 lessons and will take approximately one hour to complete.
Class Name: Basic Shop Math 110
Description: This class reviews basic fractions and decimals as well as basic triangle and circle geometry relevant to the shop.
Prerequisites: Arithmetic skills
Difficulty: Beginner
Number of Lessons: 18
Language: English
Class Outline
• Objectives
• Order of Operations
• Fractions
• Fraction Addition and Subtraction
• Fraction Multiplication and Division
• Decimals
• Reading Decimals
• Decimal Addition and Subtraction
• Decimal Multiplication and Division
• Percents
• Scientific Notation
• Triangle Geometry
• Pythagorean Theorem
• Right Triangle Trigonometry
• Triangle Measurement Applications
• Circle Geometry
• Circle Measurement Applications
• Summary

Class Objectives
• Determine the priority of operations.
• Explain the parts of a fraction.
• Add fractions.
• Subtract fractions.
• Multiply fractions.
• Divide fractions.
• Identify the relationship of decimals to fractions.
• Read a decimal.
• Add decimals.
• Subtract decimals.
• Multiply decimals.
• Divide decimals.
• Define percent.
• Convert between decimals and scientific notation.
• Identify the major parts of a triangle.
• Define the Pythagorean theorem.
• State the three trigonometric ratios for a right triangle.
• Find missing right triangle information using the Pythagorean theorem.
• Find missing right triangle information using the trigonometric ratios.
• Identify the major parts of a circle.
• Identify uses for circular dimensions.
Vocabulary (term and definition)
acute triangle A triangle containing three angles that each measure less than 90 degrees.
angle A figure formed at the intersection of two lines.
arc A portion of the edge of a circle.
chord A segment connecting two points on a circle.
circle The figure formed by the group of points that are an equal distance from a point, or center.
CNC programmer The person responsible for the creation of a part program. The part programmer translates the workpiece design into program instructions for the CNC machine.
cosine In a right triangle, the ratio of the side adjacent to an angle to the hypotenuse of the triangle.
decimal A representation of a fraction whose denominator is some power of 10.
denominator The expression in the bottom location of a fraction, below the fraction bar.
diameter The distance from one edge of a circle to the opposite edge, passing through the center.
division An operation that determines how many times a quantity is contained in another.
divisor The quantity by which a number is divided. The divisor resides in the denominator of a fraction.
equilateral triangle A triangle with three equal sides.
exponent A number or symbol that indicates the power to which an expression is raised.
hypotenuse In a right triangle, the side opposite the right angle.
inverse A fraction obtained by reversing, or inverting, the numerator and denominator. 4/5 is the inverse of 5/4.
inverse operation An operation that counteracts or undoes another.
isosceles triangle A triangle with two equal sides.
least common denominator The least common multiple of the denominators of two or more fractions, used to find equivalent fractions.
multiplication An operation that describes the repetitive addition of numbers.
numerator The expression in the top location of a fraction, above the fraction bar.
obtuse triangle A triangle with one angle that is greater than 90 degrees.
percent One part in a hundred; the amount per hundred.
product The result of a multiplication operation.
Pythagorean theorem A mathematical rule describing how the sides of a right triangle are related: the sum of the squares of the two legs equals the square of the hypotenuse.
quotient The result of a division operation.
radius The distance from the center to the edge of a circle.
ratio The relationship between two quantities expressed as a fraction.
right triangle A triangle containing an angle that measures exactly 90 degrees.
scalene triangle A triangle with no two equal sides.
scientific notation A system of condensing powers of ten.
sine In a right triangle, the ratio of the side opposite an angle to the hypotenuse of the triangle.
tangent In a right triangle, the ratio of the side opposite an angle to the side adjacent to it.
tolerance limit The boundaries of allowable variation from a specified dimension.
triangle A three-sided figure determined by connecting three points that are not part of the same line.
trigonometric ratio A ratio that describes a relationship between sides and angles of triangles.
vertex The point where the two sides of an angle or triangle intersect.
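The Pythagorean theorem and the three trigonometric ratios defined above are all one needs to find missing right-triangle information. A minimal worked example in Python (an editorial illustration, not part of the Tooling U material):

    import math

    # Right triangle: legs a and b, hypotenuse c, angle A opposite side a.
    a, b = 3.0, 4.0

    # Pythagorean theorem: a^2 + b^2 = c^2
    c = math.hypot(a, b)                      # 5.0

    # Trigonometric ratios for angle A
    sin_A = a / c                             # opposite / hypotenuse
    cos_A = b / c                             # adjacent / hypotenuse
    tan_A = a / b                             # opposite / adjacent

    A = math.degrees(math.asin(sin_A))
    print(f"c = {c}, A = {A:.2f} degrees")    # c = 5.0, A = 36.87 degrees

Here the hypotenuse and one acute angle are recovered from the two legs; in shop practice the same ratios run in the other direction to recover a leg from a known angle and hypotenuse.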
Source: http://www.toolingu.com/class_class_desc.aspx?class_ID=800110
Disquisitiones Arithmeticae
Gauss: Disquisitiones Arithmeticae
In 1801 Carl Friedrich Gauss published his classic work Disquisitiones Arithmeticae. He was 24 years old. A second edition of Gauss' masterpiece appeared in 1870, fifteen years after his death. This
second edition was produced for the Göttingen Academy of Sciences (Königliche Gesellschaft der Wissenschaften) by Schering who gave the following information:-
In the year 1801 seven sections of the Disquisitiones Arithmeticae were published in octavo. The first reprint was published under my direction in 1863 as the first volume of Gauss' Works. That
edition has been completely sold out, and a new edition is presented here. The eighth section, to which Gauss makes frequent reference and which he had intended to publish with the others, was
found among his manuscripts. Since he did not develop it in the same way as the first seven sections, it has been included with his other unpublished arithmetic essays in the second volume of
this edition of his Works. ... The form of this edition has been changed to allow for ease of order and summary. I believe that this was justified because Gauss had made such a point of
economising on space. Many formulas which were included in the running text have been displayed to better advantage.
An English translation of the second edition of 1870 was made by Arthur A Clarke and published by Yale University Press in 1966. A version of Gauss' Dedication and Preface is given below, essentially following Clarke's translation:
TO THE MOST SERENE
PRINCE AND LORD
CHARLES WILLIAM FERDINAND
DUKE OF BRUNSWICK AND LUNEBURG
I consider it my greatest good fortune that You allow me to adorn this work of mine with YOUR most honourable name. I offer it to YOU as a sacred token of my filial devotion. Were it not for YOUR
favour, Most Serene Prince, I would not have had my first introduction to the sciences. Were it not for YOUR unceasing benefits in support of my studies, I would not have been able to devote myself
totally to my passionate love, the study of mathematics. It has been YOUR generosity alone which freed me from other cares, allowed me to give myself to so many years of fruitful contemplation and
study, and finally provided me the opportunity to set down in this volume some partial results of my investigations. And when at length I was ready to present my work to the world, it was YOUR
munificence alone which removed all the obstacles that threatened to delay its publication. Now that I am constrained to acknowledge YOUR remarkable bounty toward me and my work I find myself unable
to pay a just and worthy tribute. I am capable only of a secret and ineffable admiration. Well do I recognize that I am not worthy of YOUR gift, and yet everyone knows YOUR extraordinary liberality
to all who devote themselves to the higher disciplines. And everyone knows that You have never excluded from YOUR patronage those sciences which are commonly regarded as being too recondite and too
removed from ordinary life. YOU YOURSELF in YOUR supreme wisdom are well aware of the intimate and necessary bond that unites all sciences among themselves and with whatever pertains to the
prosperity of the human society. Therefore I present this book as a witness to my profound regard for You and to my dedication to the noblest of sciences. Most Serene Prince, if YOU judge it worthy
of that extraordinary favour which You have always lavished on me, I will congratulate myself that my work was not in vain and that I have been graced with that honour which I prize above all others.
Your Highness' most dedicated servant
C F GAUSS
Brunswick, July 1801
AUTHOR'S PREFACE
The inquiries which this volume will investigate pertain to that part of Mathematics which concerns itself with integers. I will rarely refer to fractions and never to surds. The Analysis which is
called indeterminate or Diophantine and which discusses the manner of selecting from an infinite set of solutions for an indeterminate problem those that are integral or at least rational (and
especially with the added condition that they be positive) is not the discipline to which I refer but rather a special part of it, just as the art of reducing and solving equations (Algebra) is a
special part of universal Analysis. And as we include under the heading ANALYSIS all discussion that involves quantity, so integers (and fractions in so far as they are determined by integers)
constitute the proper object of ARITHMETIC. However what is commonly called Arithmetic hardly extends beyond the art of enumerating and calculating (i.e. expressing numbers by suitable symbols, for
example by a decimal representation, and carrying out arithmetic operations). It often includes some subjects which certainly do not pertain to Arithmetic (like the theory of logarithms) and others
which are common to all quantities. As a result it seems proper to call this subject Elementary Arithmetic and to distinguish from it Higher Arithmetic which properly includes more general inquiries
concerning integers. We consider only Higher Arithmetic in the present volume.
Included under the heading "Higher Arithmetic" are those topics which Euclid treated with elegance and rigor in Book VII ff., and they can be considered an introduction to this science. The
celebrated work of Diophantus, dedicated to the problem of indeterminateness, contains many results which excite a more than ordinary regard for the ingenuity and proficiency of the author because of
their difficulty and the subtle devices he uses, especially if we consider the few tools that he had at hand for his work. However, these problems demand a certain dexterity and skilful handling
rather than profound principles and, because the questions are too specialized and rarely lead to more general conclusions, Diophantus' book seems to fit into that epoch in the history of Mathematics
when scientists were more concerned with creating a characteristic art and a formal Algebraic structure than with attempts to enrich Higher Arithmetic with new discoveries. The really profound
discoveries are due to more recent authors like those men of immortal glory P de Fermat, L Euler, L Lagrange, A M Legendre (and a few others). They opened the door to what is penetrable in this
divine science and enriched it with enormous wealth. I will not recount here the individual discoveries of these geometers since they can be found in the Preface to the appendix which Lagrange added
to Euler's Algebra and in the recent volume of Legendre (which I shall soon cite). I shall give them their due praise in the proper places in these pages.
The purpose of this volume whose publication I promised five years ago is to present my investigations into the field of Higher Arithmetic. Lest anyone be surprised that the contents here go back
over many first principles and that many results had been given energetic attention by other authors, I must explain to the reader that when I first turned to this type of inquiry in the beginning of
1795 I was unaware of the more recent discoveries in the field and was without the means of discovering them. What happened was this. Engaged in other work I chanced on an extraordinary arithmetic
truth (if I am not mistaken, it was the theorem of art. 108 [-1 is a quadratic residue of all numbers of the form 4n + 1 and a non-residue of all numbers of the form 4n + 3]). Since I considered it
so beautiful in itself and since I suspected its connection with even more profound results, I concentrated on it all my efforts in order to understand the principles on which it depended and to
obtain a rigorous proof. When I succeeded in this I was so attracted by these questions that I could not let them be. Thus as one result led to another I had completed most of what is presented in
the first four sections of this work before I came into contact with similar works of other geometers. Only while studying the writings of these men of genius did I recognize that the greater part of
my meditations had been spent on subjects already well developed. But this only increased my interest, and walking in their footsteps I attempted to extend Arithmetic further. Some of these results
are embodied in Sections V, VI, and VII. After a while I began to consider publishing the fruits of my new awareness. And I allowed myself to be persuaded not to omit any of the early results,
because at that time there was no book that brought together the works of other geometers, scattered as they were among Commentaries of learned Academies. Besides, many of these results are so bound
up with one another and with subsequent investigations that new results could not be explained without repeating from the beginning.
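(As an editorial aside, not part of Gauss's text or Clarke's translation: the theorem of art. 108 quoted above is easy to verify numerically for small primes. A quick check in Python:)

    # -1 is a quadratic residue mod an odd prime p exactly when p = 4n + 1.
    def is_prime(p):
        return p > 1 and all(p % d for d in range(2, int(p**0.5) + 1))

    for p in (q for q in range(3, 100, 2) if is_prime(q)):
        has_root = any(x * x % p == p - 1 for x in range(1, p))  # x^2 = -1 (mod p)?
        assert has_root == (p % 4 == 1)
    print("Theorem of art. 108 checked for odd primes below 100.")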
Meanwhile there appeared the outstanding work of that man who was already an expert in Higher Arithmetic, Legendre's "Essai d'une théorie des nombres." Here he collected together and systematized not
only all that had been discovered up to that time but added many new results of his own. Since this book came to my attention after the greater part of my work was already in the hands of the
publishers, I was unable to refer to it in analogous sections of my book. I felt obliged, however, to add some observations in an Appendix and I trust that this understanding and illustrious man will
not be offended.
The publication of my work was hindered by many obstacles over a period of four years. During this time I continued investigations which I had already undertaken and deferred to a later date so that
the book would not be too large, but I also undertook new investigations. Similarly, many questions which I touched on only lightly because a more detailed treatment seemed less necessary (e.g. the
contents of art. 37, 82 ff., and others) have been further developed and have been replaced by more general considerations (cf. what is said in the Appendix about art. 306). Finally, since the book
came out much larger than I expected, owing to the size of Section V, I shortened much of what I first intended to do and, especially, I omitted the whole of Section Eight (even though I refer to it
at times in the present volume; it was to contain a general treatment of algebraic congruences of indeterminate rank). All the treatises which complement the present volume will be published at the
first opportunity.
In several difficult discussions I have used synthetic proofs and have suppressed the analysis which led to the results. This was necessitated by brevity, a consideration that must be consulted as
much as possible.
The theory of the division of a circle or of a regular polygon treated in Section VII of itself does not pertain to Arithmetic but the principles involved depend uniquely on Higher Arithmetic. This
will perhaps prove unexpected to geometers, but I hope they will be equally pleased with the new results that derive from this treatment.
These are the things I wanted to warn the reader about. It is not my place to judge the work itself. My greatest hope is that it pleases those who have at heart the development of science and that it
proposes solutions that they have been looking for or at least opens the way for new investigations.
JOC/EFR August 2007
Source: http://www-groups.dcs.st-and.ac.uk/~history/Extras/Gauss_Disquisitiones.html
Math Forum Discussions
Topic: Re: [mg5061] Solve results
Replies: 0
Re: [mg5061] Solve results
Posted: Oct 30, 1996 12:14 AM
>At 02:48 AM 10/26/96 +0000, you wrote:
>>I recently came upon this and was wondering if it's a Solve bug, some
>>notation I'm not familiar with, or just an unfortunate happenstance.
>>When I give Mathematica the equation x^3-8==0, this is what it returns:
>>But if I first factor the polynomial instead:
>>if I use reduce instead of solve
>>My question is: why the different behaviors, and how can I tell in advance
>>which to use?
>>| | I understand mine's a risky plan,
>>| Greg Anderson | and your system can't miss
>>| dwarf@wam.umd.edu | but is security, after all a cause
>>| timbwolf@eng.umd.edu | or symptom of happiness.
>>| |
>>+----------------------------+ Dream Theater -- A Matter of Time
>Somehow the Mma input and output did not make it into your message.
>When I input Solve[x^3==8,x], I get the following answer
>{{x -> 2}, {x -> -2 (-1)^(1/3)}, {x -> 2 (-1)^(2/3)}}, which is correct.
>Most people do not recognize the solution process when
>imaginary solutions are present. The 8 on the rhs of the
>equation is really 8*Exp[I*2*Pi]. The cube root of this
>expression is 2*Exp[I*0], 2*Exp[I*(2/3)*Pi], and 2*Exp[I*(4/3)*Pi].
>Note that the 3rd power of each is 8*Exp[I*2*Pi]. Applying Euler's
>Formula (Exp[I*theta]=Cos[theta]+I*Sin[theta]) will give
>you the more conventional solution, which I suspect is the
>second example. The third example, is I believe just the
>rectangular version of the Solve solution.
>Very few Algebra courses cover the use of Euler's formula,
>which is a bridge between polar and rectangular,
>which is commonly used in Electrical Engineering for AC circuit
>theory problems and I suspect many other places.
>I have not seen Solve make a mistake, so I would place my
>trust on the initial results. Incidentally, one can always
>check the results from all three cases, with ease.
>Hope this helps.
>Sherman C. Reed
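(Editorial note: Reed's polar-form argument above is easy to check numerically. A short sketch in Python's cmath, not part of the original thread, which used Mathematica:)

    import cmath

    # Write 8 = 8*exp(2*pi*i*k) and take cube roots for k = 0, 1, 2.
    roots = [2 * cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
    for r in roots:
        print(r, r**3)   # each r**3 comes back as 8, up to rounding error

    # Euler's formula exp(i*t) = cos(t) + i*sin(t) converts the polar roots
    # to rectangular form: 2, -1 + i*sqrt(3), and -1 - i*sqrt(3).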
Source: http://mathforum.org/kb/thread.jspa?threadID=224145
away3d.core.base Summary
Class Description
Geometry Geometry is a collection of SubGeometries, each of which contain the actual geometrical data such as vertices, normals, uvs, etc.
Object3D Object3D provides a base class for any 3D object that has a (local) transformation. Standard Transform: the standard order for transformation is [parent transform] * (Translate+Pivot) * (Rotate) * (-Pivot) * (Scale) * [child transform]. This is the order of matrix multiplications, left-to-right.
SkinnedSubGeometry SkinnedSubGeometry provides a SubGeometry extension that contains data needed to skin vertices.
SubGeometry The SubGeometry class is a collection of geometric data that describes a triangle mesh.
SubMesh SubMesh wraps a SubGeometry as a scene graph instantiation.
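(The transform order described for Object3D can be sketched with plain matrices. A hedged illustration in Python/NumPy; away3d itself is ActionScript, and the helper names below are mine, not the library's API:)

    import numpy as np

    def translate(v):
        m = np.eye(4)
        m[:3, 3] = v
        return m

    def rotate_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        m = np.eye(4)
        m[:2, :2] = [[c, -s], [s, c]]
        return m

    def scale(s):
        return np.diag([s, s, s, 1.0])

    pivot = np.array([0.0, 2.0, 0.0])
    parent = np.eye(4)

    # [parent] * (Translate+Pivot) * (Rotate) * (-Pivot) * (Scale), left to right
    world = (parent
             @ translate(np.array([1.0, 0.0, 0.0]) + pivot)
             @ rotate_z(np.pi / 4)
             @ translate(-pivot)
             @ scale(2.0))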
Source: http://www.away3d.com/livedocs/away3d/4.0/away3d/core/base/package-detail.html
Charles Bossut
Born: 11 August 1730 in Tartaras (near Rive de Gier), Rhône-et-Loire, France
Died: 14 January 1814 in Paris, France
Charles Bossut's father was Barthélemy Bossut and his mother was Jeanne Thonnerine. Charles never knew his father, who died when Charles was only six months old. The arrangement which allowed him to be
brought up in a reasonably affluent home was that his father's brother took over the role of his father and Charles was brought up in his home. He was educated at the Jesuit College of Lyon which he
entered at the age of fourteen. There he was taught mathematics by Père Béraud who had already taught Montucla and inspired him to study mathematical physics. Montucla was five years older than
Bossut and completed his studies at the Jesuit College of Lyon about the time that Bossut entered. Another famous mathematician also studied under Père Béraud at the College, namely Jérôme Lalande
who was two years younger than Bossut so the two overlapped at the College.
Although his main interests were in mathematics, and particularly in mathematical education, Bossut continued in the Church after his studies at the Jesuit college, and took holy orders becoming an
abbé. He was then properly addressed as the Abbé Charles Bossut. He was encouraged in his mathematical researches by d'Alembert, Clairaut and Camus. He became a student of d'Alembert and in that
capacity was admitted to the Academy of Sciences as a 'correspondant' on 12 May 1753. Before this in 1752, at the age of 21, Bossut had been appointed professor of mathematics at the École du Génie
at Mézières, the first school of military engineering which had been established at Mézières in 1748. (Lazare Carnot, himself a graduate of Mézières who studied there after Bossut had left, moved the
school to Metz in 1794 where it was renamed the École Polytechnique). While at Mézières, Bossut transformed the quality and content of the courses. Among his students at Mézières were Borda and
Coulomb. He also did fine research and won Academy prizes for his work on mechanics applied to ships and on resistance to planetary motion. It was a memoir of 1762 which won Bossut the Grand Prix of
the Academy of Sciences for work on the resistance of fluids to the motion of planets. He also won the Grand Prix (or shared it) in 1761 and 1765.
Then in 1765 Monge was appointed to the École du Génie as a draftsman. Of course, in this post Monge was undertaking tasks that were not entirely to his liking, for he aspired to a position in life
which made far more use of his mathematical talents. However the École du Génie brought Monge into contact with Bossut who encouraged him to develop his ideas on geometry. After Bossut left the chair
of mathematics at the École du Génie at Mézières in 1768, Monge was appointed to succeed him in January 1769. C B Boyer writes:-
On 22 January 1769 Monge wrote to his former mentor, the Abbé Charles Bossut, that he was composing a memoir on the evolutes of curves of double curvature, and he asked the abbé for an opinion on
the originality and usefulness of the work. Bossut's reply has not survived; but the judgment evidently was encouraging, for in June of the same year there appeared in the J Encyclopédique a
"Lettre de M Monge" containing a summary of his results. The theory of evolutes for plane curves had been presented by Huygens about a century earlier in connection with the study of pendulum
clocks. The "Lettre" of Monge not only generalized the conclusions of Huygens to space curves, but added further discoveries, including the significant fact that the surface containing the
centres of curvature of a gauche curve is developable.
The letter is published in [5].
Bossut is famed for his textbooks which were widely used throughout France. He wrote his first textbook Traité élémentaire de méchanique et de dinamique appliqué principalement aux mouvements des
machines (1763) while at the École du Génie. He also published the more famous Cours complet de mathematiques in 1765. Although, as we indicated above, he left the chair of mathematics at Mézières in
1768, he remained as an examiner at the School until it moved to Metz in 1794. He then essentially continued his role, becoming an examiner at the École Polytechnique.
The economist Turgot, Baron De L'Aulne, was appointed 'comptroller general' of France by Louis XVI on 24 August 1774. Among his first actions was the creation of a chair of hydrodynamics at the
Louvre, where he himself had studied. Turgot's friend the Marquis de Condorcet, whom he had appointed as Inspector General of the Mint, may well have influenced him to create the chair. Since
Condorcet and Bossut were close collaborators it may have essentially been created for Bossut who certainly was appointed and filled it until 1780. In 1775 Bossut participated with d'Alembert and
Condorcet in experiments on fluid resistance. Also during this period he was editing an edition of the works of Pascal which was published in five volumes in 1779.
He was later to collaborate with d'Alembert on the mathematical part of Diderot's Encyclopédie méthodique. Also later in his career he wrote Méchanique en général (1792) and his treatise on the
history of mathematics in two volumes Essai sur l'histoire générale des mathématique (1802).
Bossut became somewhat of a recluse during the last years of his life. He had never married and had no close family. Although he seems to have come to dislike the company of people, he had been
honoured for his work by many scientific academies. The academies of Lyons and Toulouse awarded him prizes, and he was elected to the St Petersburg Academy of Sciences as well as the academies at
Turin and Bologna.
Article by: J J O'Connor and E F Robertson
List of References (5 books/articles)
Honours awarded to Charles Bossut
Paris street names Rue Charles Bossut (12th Arrondissement)
JOC/EFR © August 2006 School of Mathematics and Statistics
Copyright information University of St Andrews, Scotland
Source: http://www-history.mcs.st-and.ac.uk/Biographies/Bossut.html
The Kline Directive: Technological Feasibility (2d)
To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. To extend the boundaries of
our knowledge, to advocate new methods, techniques and research, to sponsor change not status quo, on 5 fronts, Legal Standing, Safety Awareness, Economic Viability, Theoretical-Empirical
Relationships, and Technological Feasibility.
In this post on technological feasibility, I point to some more mistakes in physics, so that we are aware of the type of mistakes we are making. This I hope will facilitate the changes required of
our understanding of the physics of the Universe and thereby speed up the discovery of new physics required for interstellar travel.
The scientific community recognizes two alternative models for force. Note I use the term recognizes because that is how science progresses. This is necessarily different from the concept how Nature
operates or Nature’s method of operation. Nature has a method of operating that is consistent with all Nature’s phenomena, known and unknown.
If we are willing to admit, that we don’t know all of Nature’s phenomena — our knowledge is incomplete — then it is only logical that our recognition of Nature’s method of operation is always
incomplete. Therefore, scientists propose theories on Nature’s methods, and as science progresses we revise our theories. This leads to the inference that our theories can never be the exact
presentation of Nature’s methods, because our knowledge is incomplete. However, we can come close but we can never be sure ‘we got it’.
With this understanding that our knowledge is incomplete, we can now proceed. The scientific community recognizes two alternative models for force, Einstein’s spacetime continuum, and quantum
mechanics exchange of virtual particles. String theory borrows from quantum mechanics and therefore requires that force be carried by some form of particle.
Einstein’s spacetime continuum requires only 4 dimensions, though other physicists have added more to attempt a unification of forces. String theories have required up to 23 dimensions to solve the same unification problem.
However, the discovery of the empirically validated g = τc^2 proves once and for all that gravity and gravitational acceleration are a 4-dimensional problem. Therefore, any hypothesis or theory that requires more than 4 dimensions to explain gravitational force is wrong.
Further, I have been able to do a priori what no other theory has been able to do: unify gravity and electromagnetism. Again working only with 4 dimensions, the empirically verified, spacetime-continuum-like Non Inertia (Ni) Fields prove that non-nuclear forces are not carried by the exchange of virtual particles. And if non-nuclear forces are not carried by the exchange of virtual particles, why should Nature suddenly change her method of operation and be different for nuclear forces? Virtual particles are mathematical conjectures that were a convenient mathematical approach in the context of the Standard Model.
Sure, there is always that ‘smart’ theoretical physicist who will convert a continuum-like field into a particle-based field, but a particle-continuum duality does not answer the question: what is Nature’s method? So we come back to a previous question: is the particle-continuum duality a mathematical conjecture or a mathematical construction? Also note that, now that we know of g = τc^2, it does not count as a discovery by other hypotheses or theories if those hypotheses or theories merely claim to show or reconstruct g = τc^2 a posteriori; that is known as back fitting.
Our theoretical physicists have to ask themselves many questions. Are they trying to show how smart they are? Or are they trying to figure out Nature’s methods? How much back fitting can they keep
doing before they acknowledge that enough is enough? Could there be a different theoretical effort that could be more fruitful?
The other problem with string theories is that these theories don’t converge to a single set of descriptions about the Universe, they diverge. The more they are studied the more variation and
versions that are discovered. The reason for this is very clear. String theories are based on incorrect axioms. The primary incorrect axiom is that particles expand when their energy is increased.
The empirical Lorentz-FitzGerald transformations require that length contracts as velocity increases. However, the eminent Roger Penrose showed in the 1950s that macro objects elongate as they fall into a gravitational field: the portion of the macro body closer to the gravitational source falls at a slightly greater velocity than the portion further away, and therefore the macro body elongates. This effect is termed tidal gravity.
In reality, as particles contract in length per Lorentz-FitzGerald, the distance between these particles elongates due to tidal gravity. This macro-scale elongation has been carried into theoretical physics at the elementary level of string particles, as the assumption that particles themselves elongate when their energy increases, which is incorrect. That is, even theoretical physicists make mistakes.
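(For reference, the Lorentz-FitzGerald contraction invoked above is straightforward to evaluate. A small numerical sketch in Python, mine rather than the author's:)

    import math

    C = 299_792_458.0  # speed of light in m/s

    def contracted_length(rest_length, v):
        # L = L0 * sqrt(1 - v^2/c^2)
        return rest_length * math.sqrt(1.0 - (v / C) ** 2)

    for frac in (0.1, 0.5, 0.9, 0.99):
        print(f"v = {frac:.2f}c -> L/L0 = {contracted_length(1.0, frac * C):.4f}")

At v = 0.99c a unit rod measures about 0.141 of its rest length, which is the contraction the post contrasts with tidal elongation.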
Expect string theories to be dead by 2017.
Previous post in the Kline Directive series.
Next post in the Kline Directive series.
Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity
Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.
Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.
Source: http://lifeboat.com/blog/2012/11/the-kline-directive-technological-feasibility-2d
Use implicit differentiation on x=cot(y) to find dy/dx in terms of x
I have x' = -csc^2(y), or dy/dx = -csc^2(y). Am I doing this problem right?
You are differentiating with respect to x:

d(x)/dx = d[cot(y)]/dx
1 = -csc^2(y) dy/dx
dy/dx = -1/csc^2(y) = -sin^2(y)

Alternatively, y = arccot(x), so dy/dx = -1/(1 + x^2).

Since y = arccot(x), draw a right triangle with an acute angle y whose adjacent side is x and whose opposite side is 1, making the hypotenuse sqrt(1 + x^2). Then sin^2(y) = 1/(1 + x^2), so our previous result dy/dx = -sin^2(y) = -1/(1 + x^2) is consistent with the alternative approach.
Source: http://mathhelpforum.com/calculus/127840-use-implicit-differentiation-x-cot-y-find-dy-dx-terms-x.html
hi Still Learning
You have to show it obeys the four properties of a group: closure, identity, inverses, associativity.
I've made a group combination table (see below).
From that it is obvious that closure holds, it has an identity (0), and all members are self-inverse.
So what about associativity ? This is often the hardest to prove. You have to show that
a(bc) = (ab)c for all a b and c in the set.
As Stefy has pointed out, commutativity holds (ab = ba) so it is fairly easy to cover all cases by using that property.
I'll use * for a tilde as I cannot see that symbol above, and show one example:
0*(0*n) = 0* n = n
(0*0)*n = 0 * n = n
I'll leave the rest to you.
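(Editorial addition: since the combination table itself is not reproduced above, here is a hedged sketch of how all four axioms can be checked by brute force. It assumes, purely for illustration, that the thread's ~ operation behaves like XOR on {0, 1, 2, 3}, which matches the stated properties: identity 0, every element self-inverse, commutative.)

    from itertools import product

    S = [0, 1, 2, 3]
    op = lambda a, b: a ^ b   # stand-in for the thread's ~ operation (assumption)

    closure = all(op(a, b) in S for a, b in product(S, S))
    identity = all(op(0, a) == a == op(a, 0) for a in S)
    inverses = all(any(op(a, b) == 0 for b in S) for a in S)
    assoc = all(op(a, op(b, c)) == op(op(a, b), c)
                for a, b, c in product(S, S, S))
    print(closure, identity, inverses, assoc)   # True True True True

Brute-force checking associativity over all triples is exactly the "cover all cases" route the post mentions; commutativity just shrinks the number of cases one has to verify by hand.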
Source: http://www.mathisfunforum.com/post.php?tid=18937&qid=251994
Oil Change Center Queue
Cars arrive at an oil change center following a Poisson process at the rate of four per hour. There is a single available mechanic, and the time taken to perform an oil change is exponentially
distributed, with an average of 12 minutes.
Steady-state distribution for the system.
Distribution for the number of cars in the system.
Probability that there are more than three cars at the oil change center.
Mean and variance for the number of cars in the system.
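These quantities follow from the closed-form M/M/1 results with arrival rate λ = 4/hr and service rate μ = 5/hr (a 12-minute mean service time). The Wolfram page presumably computes them with Mathematica's built-in queueing functionality; the following Python lines are just the hand check:

    lam, mu = 4.0, 5.0            # arrivals/hr, services/hr (12 min per oil change)
    rho = lam / mu                # utilization = 0.8

    # Steady-state distribution: P(N = n) = (1 - rho) * rho**n
    p = lambda n: (1 - rho) * rho**n

    p_more_than_3 = rho**4                  # P(N > 3) = rho^4 = 0.4096
    mean_n = rho / (1 - rho)                # E[N] = 4 cars
    var_n = rho / (1 - rho)**2              # Var[N] = 20

    print(p(0), p_more_than_3, mean_n, var_n)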
Source: http://www.wolfram.com/mathematica/new-in-9/markov-chains-and-queues/oil-change-center-queue.html
North Decatur, GA ACT Tutor
Find a North Decatur, GA ACT Tutor
...Currently I am tutoring regularly via Skype which is allowing students to get last minute help with homework when needed - even in 15 minute increments - and allowing for longer scheduled
tutoring sessions to prepare for quizzes and tests all from the convenience of one's home. The key to your s...
10 Subjects: including ACT Math, geometry, algebra 1, algebra 2
...I passed the GACE content exams necessary to teach High School Mathematics in Georgia with a score of 577 out of 600. Altogether I have tutored math at the high school level for 5+ years. I
have been tutoring math since I was in high school myself, and did so throughout college as an America Counts math tutor.
25 Subjects: including ACT Math, English, reading, writing
...I believe that the best way to teach others algebra I is to allow them to do sample problems to see whether they are lacking any fundamentals or simply need refinement, and then proceed from
there, whether it's teaching the basics over again to build a strong foundation, reviewing concepts to mak...
65 Subjects: including ACT Math, English, reading, calculus
...As well as tutoring, I have volunteered in my local elementary school to help student with their homework for their homework club. Also I mentor students from middle school to high school on
behavior, studies, and other topics. I have attended many lectures on best study skills, tutored other c...
14 Subjects: including ACT Math, chemistry, geometry, biology
...Math and science has opened many doors for me and they can do the same for you!Differential Equations is an intimidating and potentially frustrating course. The course is usually taken by
engineering students and taught by mathematics professors. The pure mathematical approach can be discouraging to engineering students and make the course seem like a waste of time.
15 Subjects: including ACT Math, calculus, physics, algebra 2
Related North Decatur, GA Tutors
North Decatur, GA Accounting Tutors
North Decatur, GA ACT Tutors
North Decatur, GA Algebra Tutors
North Decatur, GA Algebra 2 Tutors
North Decatur, GA Calculus Tutors
North Decatur, GA Geometry Tutors
North Decatur, GA Math Tutors
North Decatur, GA Prealgebra Tutors
North Decatur, GA Precalculus Tutors
North Decatur, GA SAT Tutors
North Decatur, GA SAT Math Tutors
North Decatur, GA Science Tutors
North Decatur, GA Statistics Tutors
North Decatur, GA Trigonometry Tutors
Nearby Cities With ACT Tutor
Avondale Estates ACT Tutors
Belvedere, GA ACT Tutors
Briarcliff, GA ACT Tutors
Decatur, GA ACT Tutors
Dunaire, GA ACT Tutors
Embry Hls, GA ACT Tutors
North Atlanta, GA ACT Tutors
North Springs, GA ACT Tutors
Overlook Sru, GA ACT Tutors
Scottdale, GA ACT Tutors
Snapfinger, GA ACT Tutors
Tucker, GA ACT Tutors
Tuxedo, GA ACT Tutors
Vinnings, GA ACT Tutors
Vista Grove, GA ACT Tutors
Source: http://www.purplemath.com/north_decatur_ga_act_tutors.php
How to Trace a Graph on the TI-84 Plus
Tracing a graph on the TI-84 Plus graphing calculator shows you the coordinates of the points that make up the graph. After you have graphed your function, you can press [TRACE] and then use the left- and right-arrow keys to more closely investigate the function.
If you use only the arrow keys instead of [TRACE] to locate a point on a graph, all you will get is an approximation of the location of that point. You rarely get an actual point on the graph. So always use [TRACE] to identify points on a graph.
The following list describes what you see, or don’t see, as you trace a graph:
• The definition of the function: The function you’re tracing is displayed at the top of the screen, provided the calculator is in ExprOn format. If the Format menu is set to ExprOff and CoordOn,
then the Y= editor number of the function appears at the top right of the screen.
If the Format menu is set to ExprOff and CoordOff, then tracing the graph is useless because all you see is a cursor moving on the graph. The calculator won’t tell you which function you’re
tracing, nor will it tell you the coordinates of the cursor location.
If you’ve graphed more than one function and you would like to trace a different function, press the up- or down-arrow key. Each time you press this key, the cursor jumps to another function. Eventually it jumps back to the original function.
• The values of x and y: At the bottom of the screen, you see the values of the x- and y-coordinates that define the cursor location, provided the calculator is in CoordOn format. In the PolarGC
format, the coordinates of this point display in polar form.
When you press [TRACE], the cursor is placed on the graph at the point having an x-coordinate that is approximately midway between Xmin and Xmax. If the y-coordinate of the cursor location isn’t
between Ymin and Ymax, then the cursor does not appear on the screen.
Each time you press the right-arrow key, the cursor moves right to the next plotted point on the graph, and the coordinates of that point are displayed at the bottom of the screen. If you press the left-arrow key, the cursor moves left to the previously plotted point. And if you press the up- or down-arrow key to trace a different function, the tracing of that function starts at the point on the graph that has the x-coordinate displayed on-screen before you pressed this key.
Press [CLEAR] to terminate tracing the graph. This also removes the name of the function and the coordinates of the cursor from the screen.
When you’re using TRACE, if you want to start tracing your function at a specific value of the independent variable x, just key in that value and press [ENTER] when you’re finished. (The value you
assign to x must be between Xmin and Xmax; if it’s not, you get an error message.) After you press [ENTER], the trace cursor moves to the point on the graph having the x-coordinate you just entered.
If the name of the function and the values of x and y are interfering with your view of the graph when you use TRACE, increase the height of the screen by pressing [WINDOW], and then decrease the
value of Ymin and increase the value of Ymax.
Source: http://www.dummies.com/how-to/content/how-to-trace-a-graph-on-the-ti84-plus.navId-612812.html
Use Integration by Parts to prove the Reduction Formula
February 19th 2010, 03:03 PM #1
Junior Member
Sep 2009
Use Integration by Parts to prove the Reduction Formula
int[(x^2 + a^2)^n] dx =
[x(x^2 + a^2)/(2n + 1)] + [(2na^2)/(2n + 1)]*int[(x^2 + a^2)^(n-1)] dx
Where n =/= -(1/2)
I've done a few other examples of integration by parts just fine, but I can't seem to find selections for u and dv which allow me to solve this.
If anyone can grant me the FIRST step, I'm sure I can take it from there.
int[(x^2 + a^2)^n] dx =
[x(x^2 + a^2)^n/(2n + 1)] + [(2na^2)/(2n + 1)]*int[(x^2 + a^2)^(n-1)] dx
Where n =/= -(1/2)
I've done a few other examples of integration by parts just fine, but I can't seem to find selections for u and dv which allow me to solve this.
If anyone can grant me the FIRST step, I'm sure I can take it from there.
Hi Tulki,
You can let $u=(x^2+a^2)^n,\ dv=dx,\ v=x$
Or let dv=1, v=x.
Integration by parts gives
$\int{\left(x^2+a^2\right)^n}dx=x\left(x^2+a^2\right)^n-\int{nx\left(x^2+a^2\right)^{n-1}(2x)}dx$
(Adding $a^2-a^2$ to $x^2$ helps solve the integral easily).
$=x\left(x^2+a^2\right)^n-2n\int{\left(x^2+a^2\right)^n}dx+2na^2\int{\left(x^2+a^2\right)^{n-1}}dx$
$\Rightarrow\ I=x\left(x^2+a^2\right)^n-2nI+2na^2I_{n-1}$
$\Rightarrow\ I(1+2n)=x\left(x^2+a^2\right)^n+2a^2nI_{n-1}$
$\Rightarrow\ I=\frac{x\left(x^2+a^2\right)^n}{1+2n}+\frac{2a^2n}{1+2n}\int{\left(x^2+a^2\right)^{n-1}}dx$
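The corrected reduction formula is easy to verify by differentiating the right-hand side. A quick SymPy check (an editorial addition, not part of the thread):

    import sympy as sp

    x, a, n = sp.symbols('x a n')

    closed = x * (x**2 + a**2)**n / (2*n + 1)
    tail = 2 * a**2 * n / (2*n + 1) * (x**2 + a**2)**(n - 1)

    # d/dx(closed) + tail should reproduce the integrand (x^2 + a^2)^n
    check = sp.simplify(sp.diff(closed, x) + tail - (x**2 + a**2)**n)
    print(check)   # 0 (after simplification)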
Um... when you quoted the question, you added in a power of n that isn't in the actual question I'm looking at. May have been a mistake, but I'm not sure.
I think it's a misprint.
I wasn't able to get the answer quoted, Tulki.
Yes, it's definately a misprint, Tulki.
Yeah I've been trying to figure out a way through it as it is printed, but it seems it is a misprint. Thank you!
Source: http://mathhelpforum.com/calculus/129676-use-integration-parts-prove-reduction-formula.html
Combining different procedures for adaptive regression
Results 1 - 10 of 30
- Journal of the American Statistical Association
"... Adaptation over different procedures is of practical importance. Different procedures perform well under different conditions. In many practical situations, it is rather hard to assess which
conditions are (approximately) satisfied so as to identify the best procedure for the data at hand. Thus auto ..."
Cited by 39 (7 self)
Adaptation over different procedures is of practical importance. Different procedures perform well under different conditions. In many practical situations, it is rather hard to assess which
conditions are (approximately) satisfied so as to identify the best procedure for the data at hand. Thus automatic adaptation over various scenarios is desirable. A practically feasible method, named
Adaptive Regression by Mixing (ARM) is proposed to convexly combine general candidate regression procedures. Under mild conditions, the resulting estimator is theoretically shown to perform optimally
in rates of convergence without knowing which of the original procedures work the best. Simulations are conducted in several settings, including comparing a parametric model with nonparametric
alternatives, comparing a neural network with a projection pursuit in multi-dimensional regression, and combining bandwidths in kernel regression. The results clearly support the theoretical property
of ARM. The ARM ...
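The convex-combination idea behind ARM can be sketched in a few lines. A toy illustration in Python of exponential weighting over two candidate procedures; this is my own sketch of the general idea, not the ARM algorithm as specified in the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 200)
    y = np.sin(3 * x) + 0.1 * rng.standard_normal(200)

    # Fit two candidate regression procedures on the first half of the data
    train, test = slice(0, 100), slice(100, 200)
    linear = np.poly1d(np.polyfit(x[train], y[train], 1))
    cubic = np.poly1d(np.polyfit(x[train], y[train], 3))

    # Exponential weights from held-out squared error, then a convex combination
    losses = np.array([np.mean((m(x[test]) - y[test])**2) for m in (linear, cubic)])
    w = np.exp(-100 * losses)           # 100 is an arbitrary temperature choice
    w /= w.sum()
    combined = lambda t: w[0] * linear(t) + w[1] * cubic(t)
    print(losses.round(4), w.round(3))

The weights concentrate on whichever candidate predicts the held-out data better, which is the sense in which the mixture tracks the best procedure without knowing it in advance.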
- SUBMITTED TO THE ANNALS OF STATISTICS , 2008
"... We develop minimax optimal risk bounds for the general learning task consisting in predicting as well as the best function in a reference set G up to the smallest possible additive term, called
the convergence rate. When the reference set is finite and when n denotes the size of the training data, w ..."
Cited by 24 (3 self)
We develop minimax optimal risk bounds for the general learning task consisting in predicting as well as the best function in a reference set G up to the smallest possible additive term, called the
convergence rate. When the reference set is finite and when n denotes the size of the training data, we provide minimax convergence rates of the form C () log |G | v with tight evaluation of the
positive constant C and with n exact 0 < v ≤ 1, the latter value depending on the convexity of the loss function and on the level of noise in the output distribution. The risk upper bounds are based
on a sequential randomized algorithm, which at each step concentrates on functions having both low risk and low variance with respect to the previous step prediction function. Our analysis puts
forward the links between the probabilistic and worst-case viewpoints, and allows to obtain risk bounds unachievable with the standard statistical learning approach. One of the key idea of this work
is to use probabilistic inequalities with respect to appropriate (Gibbs) distributions on the prediction function space instead of using them with respect to the distribution generating the data. The
risk lower bounds are based on refinements of the Assouad lemma taking particularly into account the properties of the loss function. Our key example to illustrate the upper and lower bounds is to
consider the Lq-regression setting for which an exhaustive analysis of the convergence rates is given while q ranges in [1; +∞[.
"... Abstract. In the present paper, we study the problem of aggregation under the squared loss in the model of regression with deterministic design. We obtain sharp oracle inequalities for convex
aggregates defined via exponential weights, under general assumptions on the distribution of errors and on t ..."
Cited by 23 (2 self)
Abstract. In the present paper, we study the problem of aggregation under the squared loss in the model of regression with deterministic design. We obtain sharp oracle inequalities for convex
aggregates defined via exponential weights, under general assumptions on the distribution of errors and on the functions to aggregate. We show how these results can be applied to derive a sparsity
oracle inequality. 1
- Econometric Theory , 2004
"... We study some methods of combining procedures for forecasting a continuous random variable. Statistical risk bounds under the square error loss are obtained under mild distributional assumptions
on the future given the current outside information and the past observations. The risk bounds show that ..."
Cited by 14 (2 self)
We study some methods of combining procedures for forecasting a continuous random variable. Statistical risk bounds under the square error loss are obtained under mild distributional assumptions on
the future given the current outside information and the past observations. The risk bounds show that the combined forecast automatically achieves the best performance among the candidate procedures
up to a constant factor and an additive penalty term. In term of the rate of convergence, the combined forecast performs as well as if one knew which candidate forecasting procedure is the best in
advance. Empirical studies suggest combining procedures can sometimes improve forecasting accuracy compared to the original procedures. Risk bounds are derived to theoretically quantify the potential
gain and price for linearly combining forecasts for improvement. The result supports the empirical finding that it is not automatically a good idea to combine forecasts. A blind combining can degrade
performance dramatically due to the undesirable large variability in estimating the best combining weights. An automated combining method is shown in theory to achieve a balance between the potential
gain and the complexity penalty (the price for combining); to take advantage (if any) of sparse combining; and to maintain the best performance (in rate) among the candidate forecasting procedures if
linear or sparse combining does not help.
, 2005
"... We consider the problem of adaptation to the margin and to complexity in binary classification. We suggest a learning method with a numerically easy aggregation step. Adaptivity both to the
margin and complexity in classification, usually involves empirical risk minimization or Rademacher complexiti ..."
Cited by 14 (6 self)
We consider the problem of adaptation to the margin and to complexity in binary classification. We suggest a learning method with a numerically easy aggregation step. Adaptivity both to the margin
and complexity in classification, usually involves empirical risk minimization or Rademacher complexities which lead to numerical difficulties. On the other hand there exist classifiers that are easy
to compute and that converge with fast rates but are not adaptive. Combining these classifiers by our aggregation procedure we get numerically realizable adaptive classifiers that converge with fast
- Laboratoire de Probabilités, Université Paris VI (L. Birgé), 2004, http://www.proba.jussieu.fr/mathdoc/preprints/index.html
"... Abstract. This paper studies statistical aggregation procedures in regression setting. A motivating factor is the existence of many different methods of estimation, leading to possibly competing
estimators. We consider here three different types of aggregation: model selection (MS) aggregation, conv ..."
Cited by 12 (2 self)
Abstract. This paper studies statistical aggregation procedures in regression setting. A motivating factor is the existence of many different methods of estimation, leading to possibly competing
estimators. We consider here three different types of aggregation: model selection (MS) aggregation, convex (C) aggregation and linear (L) aggregation. The objective of (MS) is to select the optimal
single estimator from the list; that of (C) is to select the optimal convex combination of the given estimators; and that of (L) is to select the optimal linear combination of the given estimators.
We are interested in evaluating the rates of convergence of the excess risks of the estimators obtained by these procedures. Our approach is motivated by recent minimax results in Nemirovski (2000)
and Tsybakov (2003). There exist competing aggregation procedures achieving optimal convergence separately for each one of (MS), (C) and (L) cases. Since the bounds in these results are not directly
comparable with each other, we suggest an alternative solution. We prove that all the three optimal bounds can be nearly achieved via a single “universal ” aggregation procedure. We propose such a
procedure which consists in mixing of the initial estimators with the weights obtained by penalized least squares. Two different penalities are considered: one of them is related to hard thresholding
techniques, the second one is a data dependent L1-type penalty. 1.
- Advances in Neural Information Processing Systems
"... We consider the learning task consisting in predicting as well as the best function in a finite reference set G up to the smallest possible additive term. If R(g) denotes the generalization
error of a prediction function g, under reasonable assumptions on the loss function (typically satisfied by th ..."
Cited by 12 (1 self)
We consider the learning task consisting in predicting as well as the best function in a finite reference set G up to the smallest possible additive term. If R(g) denotes the generalization error of
a prediction function g, under reasonable assumptions on the loss function (typically satisfied by the least square loss when the output is bounded), it is known that the progressive mixture rule ˆg
satisfies ER(ˆg) ≤ ming∈G R(g) + Cst log |G| n, (1) where n denotes the size of the training set, and E denotes the expectation w.r.t. the training set distribution.This work shows that,
surprisingly, for appropriate reference sets G, the deviation convergence rate of the progressive mixture rule is no better than Cst / √ n: it fails to achieve the expected Cst/n. We also provide an
algorithm which does not suffer from this drawback, and which is optimal in both deviation and expectation convergence rates. 1
- Bernoulli , 1999
"... Methods have been proposed to linearly combine candidate regression procedures to improve estimation accuraccy. Applications of these methods in many examples are very succeesful, pointing to
the great potential of combining procedures. A fundamental question regarding combining procedure is: What i ..."
Cited by 11 (2 self)
Methods have been proposed to linearly combine candidate regression procedures to improve estimation accuracy. Applications of these methods in many examples are very successful, pointing to the great potential of combining procedures. A fundamental question regarding combining procedures is: What is the potential gain and how much does one need to pay for it? A partial answer to this question is obtained by Juditsky and Nemirovski (1996) for the case when a large number of procedures are to be combined. We attempt to give a more general solution. Under an l1 constraint on the linear coefficients, we show that for pursuing the best linear combination over n^τ procedures, in terms of rate of convergence under the squared L2 loss, one can pay a price of order O(log n / n^(1-τ)) when 0 < τ < 1/2 and a price of order O((log n / n)^(1/2)) when 1/2 ≤ τ ≤ 1. These rates cannot be improved or essentially improved in a uniform sense. This result suggests that one should be
- Statistica Sinica
"... : We study a problem of adaptive estimation of a conditional probability function in a pattern recognition setting. In many applications, for more flexibility, one may want to consider various
estimation procedures targeted at different scenarios and/or under different assumptions. For example, when ..."
Cited by 6 (3 self)
Add to MetaCart
We study a problem of adaptive estimation of a conditional probability function in a pattern recognition setting. In many applications, for more flexibility, one may want to consider various
estimation procedures targeted at different scenarios and/or under different assumptions. For example, when the feature dimension is high, to overcome the familiar curse of dimensionality, one may
seek a good parsimonious model among a number of candidates such as CART, neural nets, additive models, and others. For such a situation, one wishes to have an automated final procedure performing
always as well as the best candidate. In this work, we propose a method to combine a countable collection of procedures for estimating the conditional probability. We show that the combined procedure
has the property that its statistical risk is bounded above by that of any of the procedures being considered, plus a small penalty. Thus, in an asymptotic sense, the strengths of the different estimation
procedures i...
, 2009
"... We observe a random measure N and aim at estimating its intensity s. This statistical framework allows to deal simultaneously with the problems of estimating a density, the marginals of a
multivariate distribution, the mean of a random vector with nonnegative components and the intensity of a Poiss ..."
Cited by 3 (2 self)
Add to MetaCart
We observe a random measure N and aim at estimating its intensity s. This statistical framework allows to deal simultaneously with the problems of estimating a density, the marginals of a
multivariate distribution, the mean of a random vector with nonnegative components and the intensity of a Poisson process. Our estimation strategy is based on estimator selection. Given a family of
estimators of s based on the observation of N, we propose a selection rule, based on N as well, in view of selecting among these. Little assumption is made on the collection of estimators. The
procedure offers the possibility to perform model selection and also to select among estimators associated to different model selection strategies. Besides, it provides an alternative to the
T-estimators as studied recently in Birgé (2006). For illustration, we consider the problems of estimation and (complete) variable selection in various regression settings.
In standard cosmology, 'comoving distance' and 'proper distance' are two closely related distance measures used by cosmologists to define distances between objects.
Comoving coordinates
While general relativity allows one to formulate the laws of physics using arbitrary coordinates, some coordinate choices are natural choices with which it is easier to work. Comoving coordinates are
an example of such a natural coordinate choice. They assign constant spatial coordinate values to observers who perceive the universe as isotropic. Such observers are called "comoving" observers
because they move along with the Hubble flow.
A comoving observer is the only observer that will perceive the universe, including the cosmic microwave background radiation, to be isotropic. Non-comoving observers will see regions of the sky
systematically blue-shifted or red-shifted. Thus isotropy, particularly isotropy of the cosmic microwave background radiation, defines a special local frame of reference called the comoving frame.
The velocity of an observer relative to the local comoving frame is called the peculiar velocity of the observer. Most large lumps of matter, such as galaxies, are nearly comoving, i.e., their
peculiar velocities are low.
The comoving time coordinate is the elapsed time since the Big Bang according to a clock of a comoving observer and is a measure of cosmological time. The comoving spatial coordinates tell us where
an event occurs while cosmological time tells us when an event occurs. Together, they form a complete coordinate system, giving us both the location and time of an event.
Space in comoving coordinates is (on the average) static, as most bodies are comoving, and comoving bodies have static, unchanging comoving coordinates.
The expanding Universe has an increasing scale factor which explains how constant comoving coordinates are reconciled with distances that increase with time.
Comoving distance
Comoving distance is the distance between two points measured along a path defined at the present cosmological time. For objects moving with the Hubble flow, it is deemed to remain constant in time.
The comoving distance from an observer to a distant object (e.g. galaxy) can be computed by the following formula:
$$\chi = \int_{t_e}^{t} \frac{c\,\mathrm{d}t'}{a(t')}$$
where $a(t')$ is the scale factor,
$t_e$ is the time of emission of the photons detected by the observer, and
$t$ is the time "now".
Despite being an integral over time, this expression does give the distance that would be measured by a hypothetical tape measure at fixed time $t$. For a derivation, see the "standard relativistic definitions" in Davis and Lineweaver (2003).
Uses of the comoving distance
Cosmological time is identical to locally measured time for an observer at a fixed comoving spatial position, that is, in the local comoving frame. Comoving distance is also equal to the locally measured distance in the comoving frame for nearby objects. To measure the comoving distance between two distant objects, one imagines that one has many comoving observers in a straight line between the two objects, so that all of the observers are close to each other, and form a chain between the two distant
objects. All of these observers must have the same cosmological time. Each observer measures his distance to the nearest observer in the chain, and the length of the chain, the sum of distances
between nearby observers, is the total comoving distance. It is important to the definition of comoving distance that all observers have the same cosmological age. For instance, if one measured the
distance along a straight line or spacelike geodesic between the two points, one would not be correctly measuring comoving distance. Comoving distance is not quite the same concept of distance as the concept of distance in special relativity. This can
be seen by considering the hypothetical case of a nearly empty universe, where both sorts of distance can be measured. In this thought experiment the value of comoving distance is not equal to the
value of the distance as defined by special relativity.
If one divides a comoving distance by the present cosmological time (the age of the universe) and calls this a "velocity", then the resulting "velocities" of "galaxies" near the particle horizon or
further than the horizon can be above the speed of light. This apparent superluminal expansion is not in conflict with special or general relativity, and is a consequence of the particular
definitions used in cosmology. Note that the cosmological definitions used to define the velocities of distant objects are coordinate-dependent: there is no general coordinate-independent definition of velocity between distant objects in general relativity (Baez and Bunn, 2006). The issue of how to best describe and popularize the apparent superluminal expansion of the universe has caused a minor amount of controversy. One viewpoint is presented in Davis and Lineweaver (2003).
Proper distance vs. comoving distance from small galaxies to galaxy clusters
Within small distances and short trips, the expansion of the universe during the trip can be ignored. This is because the travel time between any two points for a non-relativistic moving particle
will just be the proper distance (i.e. the comoving distance measured using the scale factor of the universe at the time of the trip rather than the scale factor "now") between those points divided
by the velocity of the particle. If the particle is moving at a relativistic velocity, the usual relativistic corrections for time dilation must be made.
infinitely many linear equations in infinitely many variables
Let $(a_{mn})_{m,n\in\mathbb{N}}$ and $(b_m)$ be sequences of complex numbers.We say that $(a_{mn})$ and $(b_m)$ constitute an infinite system of linear equations in infinitely many variables if we
seek a sequence $(x_n)$ of complex numbers such that $\forall m\in\mathbb{N}:$ $\sum_{n=1}^{\infty}a_{mn}x_n=b_m$. Note that in general the order of summation matters.
I am sort of an undergraduate student with a focus on number theory and some background in functional analysis (2 semesters functional analysis, 1 semester non-linear functional analysis, 1 semester operator algebras, 2 semesters PDEs), so I am sort of a budding number theorist with a bias for functional analysis :-) That is also why I am fascinated by the above-defined object, as a sort of natural extension of a practical problem from linear algebra.
We have never dealt with this type of object, and I wasn't able to find much on Google to get started with, maybe partly because I searched in the wrong way. So I would like to ask whether you could recommend some introductory literature focused on such infinite systems of linear equations in infinitely many unknowns over $\mathbb{C}$.
Thanks in advance!
fa.functional-analysis reference-request
If you are allowing infinite linear combinations, what kind of convergence of the infinite sum are you demanding? My first impression is that your question is too broadly posed, but perhaps other people disagree. – Yemon Choi Aug 22 '10 at 2:07
@EFQ: but once again it seems like you are asking other people to do all the work in thinking up what might be meant by your question. In case I wasn't clear above: I strongly feel that MO should not be a place for questions of the form "tell me stuff" or "write me a Wikipedia/nLab entry" – Yemon Choi Aug 22 '10 at 2:26
Dan: Maybe he'd want it to be "square" (or whatever is the equivalent term for infinite matrices)? Wouldn't "infinitely many equations in finitely many variables" be underdetermined? – J. M. Aug 22 '10 at 2:49
3 "Tell me about infinite systems of linear or non-linear equations over $\mathbb{R}$ or $\mathbb{C}.$ I don't see a meaningful question here and am voting to close. – Victor Protsak Aug 22 '10 at
You may want to look at Dieudonne's "History of functional analysis", which describes the history of this field in terms of solving equations of the type your question asks about. Also, the texts
3 you used in functional analysis almost surely discuss Riesz's theory of compact operators, the Fredholm alternative, and related topics. These are modern outgrowths of people's attempts to come to
grips with the problem of solving $Ax = b$ in an infinite-dimensional context. If you don't see how or why, Dieudonne's book will help to explain it. – Emerton Aug 27 '10 at 3:14
3 Answers
Systems of this kind are fairly common in applications. For example, they naturally appear when solving boundary value problems for linear partial differential equations using the
method of separation of variables.
Predictably, the problem is not meaningful for arbitrary sequences {$a_{nm}$}, {$b_m$}, but only for sufficiently well-behaved ones. If, for example, you were to consider systems of the form $$x_n+\sum_{m=1}^{\infty}a_{nm}x_m=b_n,\quad\mbox{such that}\quad \sum_n\sum_m a_{nm}^2<\infty \quad\mbox{and}\quad \sum_nb_n^2<\infty,$$ then this system possesses a unique solution in the Hilbert space $l_2$ such that $\sum_n x_n^2<\infty$ (assuming that the problem is not singular, i.e. that $\det(I+A)\ne0$). These requirements are too restrictive for some applications, hence there is a body of literature concerned with various kinds of regularity conditions involving {$a_{nm}$} and {$b_m$}, weaker than the above, which ensure the well-posedness of the problem and enable numerical solution of such systems (which is usually done by truncation; see the appropriate accuracy estimates in F. Ursell (1996), "Infinite systems of equations: the effect of truncation", Quarterly Journal of Mechanics and Applied Mathematics, 49(2), 217--233).
One good old book that discusses these systems in some detail was written by L. V. Kantorovich and V. I. Krylov and is called "Approximate Methods of Higher Analysis" (New York: Interscience Publishers, 1958).
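To illustrate the truncation approach concretely, here is a minimal numerical sketch (my addition, not from the thread; the rapidly decaying kernel $a_{nm}=2^{-(n+m)}$ and the right-hand side are illustrative choices satisfying the square-summability conditions above):

import numpy as np

def solve_truncated(N):
    n = np.arange(1, N + 1)
    A = 1.0 / 2.0 ** (n[:, None] + n[None, :])  # a_nm = 2^{-(n+m)}, square-summable
    b = 1.0 / 2.0 ** n                          # b_n = 2^{-n}, in l_2
    return np.linalg.solve(np.eye(N) + A, b)    # solve the truncated (I + A) x = b

for N in (5, 10, 20):
    print(N, solve_truncated(N)[:3])  # the leading unknowns stabilize as N grows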
Take a look at Section 6 of Chapter III of Banach's book, which gives a result in the theory of $F$-spaces. The title of the section in the English translation is "Systems of linear equations in infinitely many unknowns".
(By coincidence I was reading this recently, and I admit that that is part of the reason I voted to reopen.)
I would recommend taking a look at Hardy's "Divergent Series"; it has quite a lot of nice ideas. In particular, I recall seeing exactly that example of a system of infinite equations in infinite unknowns related to Fourier series.
Topology of the Reals proofs
February 11th 2009, 10:55 PM
Topology of the Reals proofs
I need help proving the following statements. I've been doing this homework for 6 hours and I am finally burnt out. Any help is greatly appreciated.
Let R denote the set of all reals.
1) Let S be a subset of R and let x be an element of R. Prove that one and only one of the following three conditions holds:
a) x is an element of the set of all interior points S (int S)
b) x is an element of int (R\S)
c) x is an element of the set of all boundary points S (bd S) = bd (R\S)
2) Let S be a bounded infinite set and let x = sup S. Prove: If x is NOT an element of S, then x is an element of S' (set of all accumulation points of S).
3) Prove: bd S = (cl S) intersection [cl (R\S)].
Thank you. Any help would be awesome.
February 12th 2009, 03:12 AM
Problems 1 & 3 are really about using the definition of boundary points.
“p is a boundary point of S if and only if each open set containing p contains a point of S and a point not in S”.
From that statement it should be clear that $p \in \beta (S)\; \Rightarrow \;p \in \beta (S^c )$: every point in the boundary of S is in the boundary of the complement of S.
February 12th 2009, 02:30 PM
Another way to show (1) and (3) is that
"int S" is the largest open set contained in S and "int R\S" is the largest open set contained in R\S. Since a union of open sets is an open set, "int S" $\cup$ "int R\S" is an open set.
Now, R\("int S" $\cup$ "int R\S") is a closed set. You need to show that this closed set is exactly bd(S).
Thus, "int S", "int R\S", and "bd(S)" are disjoint sets whose union is R.
For (2), the Bolzano-Weierstrass theorem states that "every bounded infinite subset of R has a limit point". Here one can also argue directly: since x = sup S is not in S, for every $\varepsilon > 0$ the interval $(x-\varepsilon, x]$ must contain a point of S (otherwise $x-\varepsilon$ would be a smaller upper bound), and that point differs from x; hence x is an accumulation point of S.
|
Little Ferry Calculus Tutors
...I've been tutoring for 8+ years, with students between the ages of 6 and 66, with a focus on the high school student and the high school curriculum. I have also been an adjunct professor at the
College of New Rochelle, Rosa Parks Campus. As for teaching style, I feel that the concept drives the skill.
26 Subjects: including calculus, physics, geometry, statistics
...I have had great success tutoring GMAT both independently and for GMAT prep companies. I've found that for me, it takes about 6-9 weeks on average of working with a student to get to an 80-100
point improvement, and I can work with Quant, Verbal, or both. Background: I have a BS in Electrical Engineering from MIT and an MBA with Distinction from the University of Michigan.
11 Subjects: including calculus, geometry, algebra 1, algebra 2
...The question that must be asked is: How fast can this student thoroughly learn a new concept and build on previous knowledge without becoming confused in the rush to meet standards? I have
helped dozens of students complete their General Education Development tests including Math, Science and t...
15 Subjects: including calculus, reading, algebra 2, algebra 1
My experience in tutoring spans a wide variety of subjects and disciplines. I have degrees in Biology and Mathematics and am comfortable teaching any and all of the subjects in both science and
math. I have personally tutored everything from Algebra to Advanced Calculus and English to AP Biology and everything in between.
22 Subjects: including calculus, reading, English, chemistry
...I will also help students with homework other than Math at no extra cost during our math sessions. This is very easy and simple for me to impart, as I am very adept in Algebra 1. I have been helping students and adults ever since I was in high school.
7 Subjects: including calculus, algebra 1, algebra 2, trigonometry
$C^k$ topology of metrics
Is the space of Riemannian metrics over a compact manifold complete when endowed with the $C^k$-topology of metrics? Is there a good reference for this?
dg.differential-geometry riemannian-geometry
Completeness usually refers to a metric space. For example, on a compact manifold, you can put a $C^k$-metric on the space of Riemannian metrics. That's a complete metric space provided $k \geq 1$ is finite. Is that what you're asking about? – Ryan Budney Mar 14 '13 at 2:29
Ryan, I'm not sure what you mean by your comment and how to put a complete metric on the space of $C^k$ Riemannian metrics on a compact manifold. Could you elaborate? – Deane Yang Mar 14 '13
To get a complete metric on the space of Riemannian metrics, you first need to put a complete metric on the space of positive definite symmetric matrices. Once you do that, the rest is straightforward. Otherwise, the space is incomplete, because the metric can degenerate into a semi-Riemannian metric. – Deane Yang Mar 14 '13 at 2:50
I think we are referring to the same thing. But just in case, let me be more specific. Fix a background metric $g$ and define the distance on the space of Riemannian metrics by $$d_g(g_1,g_2)= \max_{1 \leq \ell \leq k}\| \nabla^\ell_g (g_1-g_2) \|_\infty.$$ This distance is independent of the background metric and defines the $C^k$ topology I was referring to. Is the space of Riemannian metrics complete with this topology (on a compact manifold)? Where can I find a reference? – Cecilia Mar 14 '13 at 2:53
Another comment: in some sense these kinds of convergence of metrics are often not the "best" ones from a geometric point of view. The problem is that they are not invariant under diffeomorphisms. You might be interested in what is called "Cheeger-Gromov convergence", which is "$C^k$ convergence of metrics up to diffeomorphisms". – Thomas Richard Mar 14 '13 at 6:16
1 Answer
No, it is not complete. It is an open convex cone in the Banach space (Fréchet space if $k=\infty$) $\Gamma_{C^k} S^2T^*M$ of $C^k$-sections of the vector bundle $S^2T^*M$. 0 is always in the closure of this cone, and many more things. The norm on this Banach space depends on many choices (charts, metric, etc.), but all these norms are equivalent.
You can also put several natural Riemannian metrics on the space of all Riemannian metrics, but none of them is geodesically complete. Natural means: invariant under the diffeomorphism group. See:
Martin Bauer, Philipp Harms, Peter W. Michor: Sobolev metrics on the manifold of all Riemannian metrics. 20 pages. To appear in: Journal of Differential Geometry. (pdf)
Check also the references there.
Thank you! Just to make sure: Are you saying that both the space of $C^\infty$ Riemannian metrics and $C^k$ Riemannian metrics over a compact smooth manifold are incomplete with
the $C^k$ topology? – Cecilia Mar 14 '13 at 17:33
Yes. Both are incomplete. Symmetric tensor fields which are $\ge 0$ but not $>0$ somewhere are always in the closure. $C^\infty$ metrics are incomplete also since their completion
contains $C^k$-metrics. – Peter Michor Mar 14 '13 at 18:25
Thank you so much! – Cecilia Mar 14 '13 at 20:58
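To make the incompleteness concrete (an added illustration, not part of the thread): for a fixed Riemannian metric $g$, the sequence $g_k = \tfrac{1}{k}\,g$ is Cauchy in the $C^k$-distance above, since $d_g(g_j,g_m)$ is proportional to $|\tfrac1j - \tfrac1m|$, yet its limit is the zero section of $S^2T^*M$, which is not positive definite, so no Riemannian metric is the limit.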
Using a string as a command?
Let's say I have x=linspace(0,10), and y='x.^2', as a string. How do I tell MATLAB to apply y to x? What I am asking is: in the command window, I can type yy=x.^2 to get the desired result. But I am writing a script, and I need to be able to take the string 'x.^2' and tell MATLAB yy=x.^2, so that I get a yy double that I can plot over the linspace I created. However, simply typing yy=y just creates another string, yy='x.^2'.
3 Answers
You can do that by using 'eval'. When y is the string, try the following command:
yy = eval(y);
Comments: "This is not recommended." / "Why not?" (Edited by Andrei Bobrov on 7 Mar 2013)
Another answer builds an anonymous function from the string with str2func:
str = 'x.^2';
yy = str2func(['@(',char(symvar(str)),')',str]);     % builds @(x) x.^2
x = 0:10;
out = yy(x);
For an expression in two variables:
str = 'x^2 - y^2';
k = strcat(symvar(str),',');                         % variable names, each with a trailing comma
k = [k{:}];
yy = str2func(['@(',k(1:end-1),')',vectorize(str)]); % builds @(x,y) x.^2 - y.^2
x = 1:10;
y = linspace(5,20,10);
out = yy(x,y);
A third answer gives the same idea in compact form:
f = str2func( ['@(x) ' TheString] );
yy = f(x);
The Five Hats
Three men are being held prisoner. Let's call them Abel, Baker and Charlie. One day, a guard walks into their cell carrying five hats. Three of the hats are red, and two of the hats are white. The
guard puts one hat on each prisoner's head. They cannot see their own hat, or the two hats the guard has left.
The guard says, "If any one of you can tell me what colour hat you have on your head, you can all go free."
Abel looks at the other two and says, "I don't know."
Baker looks at the other two and says, "I don't know."
Charlie, who is blind, tells the guard what colour hat he has on his head, and explains how he knows.
What colour hat did Charlie have on his head, and how did he know?
Intersection of field extensions of torsion points of non-isogenous elliptic curves
Let $E$ and $E'$ be non-isogenous elliptic curves over a field $k$ (characteristic 0) such that $Gal(k(E[p^{\infty}])/k)=Gal(k(E'[p^{\infty}])/k) = SL_2(\mathbb{Z}_p)$ with $p \geq 5$ (where $E[p^{\infty}]$ is the set of $p^n$-torsion points of $E$ for all $n$). Then is it true that $k(E[p^{\infty}])\cap k(E'[p^{\infty}]) = k$, or can someone provide a counterexample?
elliptic-curves galois-representations modular-forms gr.group-theory
What assumptions are you making on $k$? This question is vacuous in both of the cases that I would consider natural (global fields and local fields). – Erick Knight Sep 8 '11 at 17:07
Sorry - $k$ has characteristic 0. For example $k$ contains the roots of unity or is a countable algebraically closed field. – Adam Harris Sep 8 '11 at 17:16
You can't have $k$ algebraically closed because your condition on the Galois groups won't hold. Your statement has a chance to hold for function fields. – Felipe Voloch Sep 8 '11 at 18:48
Thanks Felipe - I meant $k$ is finitely generated over an algebraically closed field – Adam Harris Sep 8 '11 at 19:05
2 Answers
Since both fields $K(E_{l^\infty})$ and $K(E'_{l^\infty})$ contain the $l$-adic cyclotomic extension of $K$, your expectation cannot hold. However, this is almost the only obstruction.
In Propriétés galoisiennes des points d'ordre fini des courbes elliptiques, Invent. Math. 15, 259--331 (1972), J-P. Serre proved the following theorem (Theorem 6$''$, p. 325).
Let $K$ be a number field, and let $K^{\rm cycl}$ be the (cyclotomic) extension of $K$ generated by all roots of unity. Let $E$ and $E'$ be two elliptic curves such that, over $\bar K$,
(i) $E$ and $E'$ have no complex multiplication;
(ii) $E$ and $E'$ are not isogenous.
Then the extensions $K(E_\infty)$ and $K(E'_\infty)$ of $K^{\rm cycl}$ are almost disjoint: $K(E_\infty)\cap K(E'_\infty)$ is finite over $K^{\rm cycl}$.
(By Faltings, hypothesis (ii) is equivalent to the one given by Serre.)
Just a remark: the fact that the Galois group is $\text{SL}_2(\mathbb{Z}_p)$ implies that the $p$-adic cyclotomic character on $G_k$ is trivial, and thus $k$ must contain the $p$-adic cyclotomic extension. – Erick Knight Sep 8 '11 at 20:50
Apologies - I forgot to put non-CM curves as a hypothesis in my question (but it's probably not good to change this now?) and $p \geq 5$ was just a guess to keep the question more
concise, but the theorem of Serre which ACL quoted is exactly what I wanted so thank you all! – Adam Harris Sep 8 '11 at 22:17
I don't quite understand the first line, because there was no assumption on $K$, so $K$ could equal $K(\zeta_{l^{\infty}})$ (not if $K$ is a number field, of course). It also seems a little strange to phrase the statement of Serre's theorem in the way you do - by far the hardest part of that statement is Faltings' contribution (the Tate conjecture). However, you seem to have mastered the dark art of divining exactly what the OP wanted to know, rather than what they actually asked! – Michael Sep 8 '11 at 23:16
To Michael: I changed hypothesis (ii) of Serre to the one I gave here, just because it is more natural, and Serre explicitly mentioned that point. (He had been able to prove it for
non-integral $j$-invariants.) To Eric: You're absolutely right! I overlooked that point. – ACL Sep 9 '11 at 6:04
By the way, I think that under your hypotheses, your question is really about group theory, not about algebraic geometry. Namely: the action of Galois on E[p^infty] x E'[p^infty] gives you a homomorphism
G_K -> SL_2(Z_p) x SL_2(Z_p).
Call the image H. By your hypothesis, H projects surjectively onto both copies of SL_2(Z_p). You also know that H is not contained in any conjugate of the diagonal (if it were, E[p^infty] and E'[p^infty] would be isomorphic Galois representations, and I'm presuming you're in a situation where Faltings rules that out -- you'd better be, if you want an affirmative answer to your question.)
Now what you have to prove is that a subgroup of SL_2(Z_p) x SL_2(Z_p) which projects surjectively onto each direct summand and which is not conjugate to a subgroup of the diagonal must be finite-index in SL_2(Z_p) x SL_2(Z_p). This is true for SL_2(F_p) by Hall's lemma and I think you can induct from there (but didn't think about it carefully.)
Indeed: such arguments are at the heart of the proof of Serre's Theorem quoted above. – ACL Sep 9 '11 at 6:05
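As an added aside (not part of the original thread): the standard group-theoretic tool here is Goursat's lemma. If $H \le G_1 \times G_2$ projects surjectively onto both factors, and $N_i \trianglelefteq G_i$ denotes the intersection of $H$ with the $i$-th factor, then the image of $H$ in $G_1/N_1 \times G_2/N_2$ is the graph of an isomorphism $G_1/N_1 \cong G_2/N_2$. So if $H$ were of infinite index, these common quotients would have to be large, which is what one tries to rule out using the known quotients of $SL_2(\mathbb{Z}_p)$.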
Kakuro Cross Sums
Welcome to the Kakuro Combinations table. Kakuro is also known as Kakro and Cross Sums, and this table can also be used for Killer SuDoku.
In Kakuro Cross Sums puzzles it is very useful to know what combinations of numbers add up to the target value.
Use this table to find combinations that can be used to help solve a Kakuro Cross Sums game.
First pick a colour that represents the number of boxes you need to fill for your game, then go down the total column to find the value you need to add up to, the colour highlights the number
combinations that add up to that total. All you need to do is put them in the right order.
To print the table you will have to make sure that your browser will print background colours.
Good luck!
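If you would rather generate the combinations than read them off the table, here is a short sketch (my addition; it assumes the standard Kakuro rule of distinct digits 1-9):

from itertools import combinations

def kakuro_combos(total, cells):
    # All sets of `cells` distinct digits 1-9 summing to `total`.
    return [c for c in combinations(range(1, 10), cells) if sum(c) == total]

print(kakuro_combos(23, 3))  # [(6, 8, 9)] - the only 3-cell way to make 23

As with the table, the combinations come unordered; putting them in the right order is still up to you.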
integral binomial differential
December 13th 2011, 04:33 AM #1
Junior Member
Mar 2011
integral binomial differential
How to evaluate
$\int_0^1 \frac{d^{2l-1}}{dx^{2l-1}} \left[ (x^2-1)^{2l} \right] dx$
Using the binomial theorem
$\int_0^1 \frac{d^{2l-1}}{dx^{2l-1}} \left[ \sum_{k=0}^{2l} \frac{(2l)!}{k!(2l-k)!} x^{4l-2k} (-1)^k \right] dx$
$l$ is a non-negative integer.
I don't know how to proceed. Is there another way?
December 14th 2011, 02:50 AM #2
Junior Member
Oct 2011
Re: integral binomial differential
Move the differential operator into the sum. You'll need to evaluate
$\frac{d^{2l-1}}{dx^{2l-1}} x^{4l-2k}$
which you can easily do by induction. Some terms of the sum may vanish while doing this, I didn't check. Finally, swap the sum and integral. Evaluate the integral. And you should be done.
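Another route (an added note, checked against small $l$): the integrand is an exact derivative, so the fundamental theorem of calculus gives, for $l \ge 1$,
$\int_0^1 \frac{d^{2l-1}}{dx^{2l-1}} \left[ (x^2-1)^{2l} \right] dx = \left[ \frac{d^{2l-2}}{dx^{2l-2}} (x^2-1)^{2l} \right]_0^1 = (-1)^l (2l-2)! \binom{2l}{l+1}.$
The boundary term at $x=1$ vanishes because $x=1$ is a zero of $(x^2-1)^{2l}$ of order $2l$, so all derivatives of order up to $2l-1$ vanish there; at $x=0$ only the $x^{2l-2}$ term of the binomial expansion (the $k=l+1$ term) survives. For $l=1$: $\int_0^1 \frac{d}{dx}(x^2-1)^2\,dx = [(x^2-1)^2]_0^1 = -1$, matching the formula.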
Reference needed: Isomorphism on pi_1 and homology gives weak equivalence
Let $f : X \to Y$ be a map between a connected space $X$ and a space $Y$. If $\pi_1(f) : \pi_1(X) \to \pi_1(Y)$ is an isomorphism, and $H_n(f) : H_n(X, G) \to H_n(Y, G)$ is an isomorphism for all $n \ge 1$ and for every local system of coefficients $G$, then $X$ is weakly equivalent to $Y$. Does anyone have a reference (or proof) for this?
homology fundamental-group
3 Answers
You need to assume either that the spaces involved are simple (I believe Emmanuel Dror-Farjoun generalized that to nilpotent), or that the map $f$ induces an isomorphism in homology with local coefficients. There is an exercise in Hatcher's book that discusses this, in Section 4.2 (Ex. 12). You should also look at Peter May's beautiful paper The Dual Whitehead Theorems, in Peter Hilton's birthday conference proceedings.
yes, sorry, i forgot the local coefficients condition. thank you! – Joris Weimar Feb 13 '11 at 1:38
Switzer proves that statement for $1$-connected CW-complexes in his Algebraic Topology, as Theorem 10.28.
The proof in the simply connected case is well-known and is a consequence of the relative Hurewicz theorem. See Corollary 1, page 79 of Mosher and Tangora's book, Cohomology Operations and Applications in Homotopy Theory, for this case.
If $X$ and $Y$ aren't 1-connected, then $f$ lifts to a map of universal covers $\tilde f: \tilde X \to \tilde Y$, and your assumption about local coefficients implies that $\tilde f$ is a homology isomorphism. We can therefore apply the previous paragraph to show that $\tilde f$ is a weak equivalence. This implies that $f$ is one, since $f$ is a $\pi_1$-isomorphism.
So what's a simple counterexample to the problem as stated? – Paul Feb 13 '11 at 4:47
@Paul: A high-dimensional knot complement with $\pi_1 = \Bbb Z$ will give a counterexample. Let $S^n \subset S^{n+2}$ be an $n$-knotted sphere and $X$ its complement ($n > 2$). Assume $\pi_1(X)$ is $\Bbb Z$ (it is not hard to find such knots). Let $f:X\to S^1$ be a cohomology generator. Then $f$ is a $\pi_1$-isomorphism and a homology isomorphism. But $f$ isn't a weak equivalence, since that would imply the knot is trivial (by Levine). – John Klein Feb 13 '11 at 4:58
The Foundations of Cryptography - Volume 2
Oded Goldreich
Cryptography is concerned with the construction of schemes that should maintain a desired functionality, even under malicious attempts aimed at making them deviate from it. The design of
cryptographic systems has to be based on firm foundations; whereas ad-hoc approaches and heuristics are a very dangerous way to go. This work is aimed at presenting firm foundations for
cryptography. The foundations of cryptography are the paradigms, approaches and techniques used to conceptualize, define and provide solutions to natural ``security concerns''. The emphasis of the
work is on the clarification of fundamental concepts and on demonstrating the feasibility of solving several central cryptographic problems. The current book is the second volume of this work, and
it focuses on the main applications of Cryptography: encryption schemes, signature schemes and secure protocols. The first volume focused on the main tools of Modern Cryptography: computational
difficulty (one-way functions), pseudorandomness and zero-knowledge proofs.
ISBN 0-521-83084-2
Published in US in May 2004.
See purchase information.
Publisher: Cambridge University Press
(see the publisher's page for this volume).
This volume is part of the two-volume work Foundations of Cryptography (see Volume 1).
For a brief introduction to the Foundations of Cryptography, see surveys. Preliminary drafts are also available here.
For further suggestions regarding teaching, see notes.
This page includes the volume's preface, Teaching Tips, Table of Contents, answers to frequently asked questions, and List of Corrections.
See preface to the entire work Foundations of Cryptography. We highlight the following points:
• Cryptography is concerned with the construction of schemes that should maintain a desired functionality, even under malicious attempts aimed at making them deviate from it.
• It makes little sense to make assumptions regarding the specific strategy that the adversary may use. The only assumptions that can be justified refer to the computational abilities of the adversary.
• The design of cryptographic systems has to be based on firm foundations; whereas ad-hoc approaches and heuristics are a very dangerous way to go.
• This work is aimed at presenting firm foundations for cryptography. The foundations of cryptography are the paradigms, approaches and techniques used to conceptualize, define and provide
solutions to natural security concerns.
□ We will present some of these paradigms, approaches and techniques as well as some of the fundamental results obtained using them.
□ Our emphasis is on the clarification of fundamental concepts and on demonstrating the feasibility of solving several central cryptographic problems. This is done in a way independent of the
particularities of some popular number theoretic examples.
□ The most relevant background for this work is provided by basic knowledge of algorithms (including randomized ones), computability and elementary probability theory. Background on
(computational) number theory, which is required for specific implementations of certain constructs, is not really required here.
• Most of modern cryptography relies on the ability to generate instances of hard problems. Such ability is captured in the definition of one-way function (see Volume 1).
Preface to the current volume
The current volume is the second part (or volume) of the two-volume work Foundations of Cryptography. The first part (i.e., Volume 1) consists of an introductory chapter [Chapter 1], followed by
chapters on computational difficulty (one-way functions), pseudorandomness and zero-knowledge proofs [Chapters 2-4, respectively]. The current volume consists of three chapters, which treat
encryption schemes [Chapter 5], signature schemes [Chapter 6], and general cryptographic protocols [Chapter 7], respectively. Also included is an appendix that provides corrections and additions to
Volume 1. The high-level structure of the current volume is as follows:
• Chapter 5: Encryption Schemes
The Basic Setting and Definitions of Security (Sec. 5.1 and 5.2)
Constructions of Secure Encryption Schemes (Sec. 5.3)
Advanced material (Sec. 5.4 and 5.5.1--5.5.3)
• Chapter 6: Digital Signature and Message Authentication
Definitional Issues (Sec. 6.1)
Length-restricted signature scheme (Sec. 6.2)
Basic Constructions (Sec. 6.3 and 6.4)
Advanced material (Sec. 6.5 and 6.6.1--6.6.3)
• Chapter 7: General Cryptographic Protocols
A detailed overview (Sec. 7.1)
Advanced material (all the rest):
The two-party case (Sec. 7.2-7.4)
The multi-party case (Sec. 7.5 and 7.6)
• Appendix C: Corrections and Additions to Volume 1
• Bibliography and Index
Historical notes, suggestions for further reading, some open problems and some exercises are provided at the end of each chapter. The exercises are mostly designed to help and test the basic
understanding of the main text, not to test or inspire creativity. The open problems are fairly well-known, still we recommend to check their current status (e.g., in this website).
Using this work
The work is aimed to serve both as a textbook and a reference text. That is, it is aimed at serving both the beginner and the expert. In order to achieve this aim, the presentation of the basic
material is very detailed, so as to allow a typical CS undergraduate to follow it. An advanced student (and certainly an expert) will find the pace (in these parts) way too slow. However, an attempt was
made to allow the latter reader to easily skip details obvious to him/her. In particular, proofs are typically presented in a modular way. We start with a high-level sketch of the main ideas, and
only later pass to the technical details. Passage from high-level descriptions to lower level details is typically marked by phrases such as details follow. More advanced material is typically
presented at a faster pace and with less details. Thus, we hope that the attempt to satisfy a wide range of readers will not harm any of them.
(See also Teaching Notes.)
The material presented in this work is, on one hand, way beyond what one may want to cover in a course, and on the other hand falls very short of what one may want to know about Cryptography in
general. To assist these conflicting needs we make a distinction between basic and advanced material, and provide suggestions for further reading (in the last section of each chapter). In particular,
sections, subsections, and subsubsections marked by an asterisk (*) are intended for advanced reading.
This work are supposed to provide all material for a course on Foundations of Cryptography. For a one-semester course, the teacher will definitely need to skip all advanced material (marked by an
asterisk) and maybe even some basic material: see suggestion below. This should allow, depending on the class, to cover the basic material at a reasonable level (i.e., cover all material marked as
``main'' and some of the ``optional''). This work can also serve as textbook for a two-semester course. Either way, the current volume only covers the second half of the material mentioned above. The
first half is covered in Volume 1.
Following is our suggestion for a one-semester course on Foundations of Cryptography. Depending on the class, each lecture consists of 50-90 minutes. Lectures 1-15 are covered by Volume 1, whereas
Lectures 16-28 are covered by the current (second) volume.
• Lecture 1: Introduction, Background, etc (depending on class)
• Lecture 2-5: Computational Difficulty (One-Way Functions)
Main: Definition (Sec. 2.2), Hard-Core Predicates (Sec. 2.5)
Optional: Weak implies Strong (Sec. 2.3), and Sec. 2.4.2-2.4.4
• Lecture 6-10: Pseudorandom Generators
Main: Definitional issues and a construction (Sec. 3.2-3.4)
Optional: Pseudorandom Functions (Sec. 3.6)
• Lecture 11-15: Zero-Knowledge Proofs
Main: Some definitions and a construction (Sec. 4.2.1, 4.3.1, 4.4.1-4.4.3)
Optional: Sec. 4.2.2, 4.3.2, 4.3.3-4.3.4, 4.4.4
• Lecture 16-20: Encryption Schemes
Main: Definitions and constructions (Sec. 5.1, 5.2.1-5.2.4, 5.3.2-5.3.4)
Optional: Beyond passive notions of security (overview, Sec. 5.4.1)
• Lecture 21-24: Signature Schemes
Definitions and constructions (Sec. 6.1, 6.2.1-6.2.2, 6.3.1.1, 6.4.1-6.4.2)
• Lecture 25-28: General Cryptographic Protocols
The definitional approach and a general construction (overview, Sec. 7.1).
A comment regarding the current volume
Writing the first volume was fun. In comparison to the current volume, the definitions, constructions and proofs in the first volume are relatively simple and easy to write. Furthermore, in most
cases, the presentation could safely follow existing texts. Consequently, the writing effort was confined to re-organizing the material, revising existing texts, and augmenting them by additional
explanations and motivations.
Things were quite different with respect to the current volume. Even the simplest notions defined in the current volume are more complex than most notions treated in the first volume (e.g., contrast
secure encryption with one-way functions or secure protocols with zero-knowledge proofs). Consequently, the definitions are more complex, and many of the constructions and proofs are more complex.
Furthermore, in most cases, the presentation could not follow existing texts. Indeed, most effort had to be (and was) devoted to the actual design of constructions and proofs, which were only
inspired by existing texts.
It seems that the fact that writing this volume required so much effort implies that this volume may be very valuable: Even experts may be happy to be spared the hardship of trying to understand this
material based on the original research manuscripts.
Table of Contents
Chapter 5: Encryption Schemes
• 5.1 The Basic Setting
5.1.1 Private-Key versus Public-Key Schemes 5.1.2 The Syntax of Encryption Schemes
• 5.2 Definitions of Security
5.2.1 Semantic Security 5.2.2 Indistinguishability of Encryptions 5.2.3 Equivalence of the Security Definitions 5.2.4 Multiple Messages 5.2.5* A uniform-complexity treatment
• 5.3 Constructions of Secure Encryption Schemes
5.3.1* Stream--Ciphers 5.3.2 Preliminaries: Block--Ciphers 5.3.3 Private-key encryption schemes 5.3.4 Public-key encryption schemes
• 5.4* Beyond eavesdropping security
5.4.1 Overview 5.4.2 Key-dependent passive attacks 5.4.3 Chosen plaintext attack 5.4.4 Chosen ciphertext attack 5.4.5 Non-malleable encryption schemes
• 5.5 Miscellaneous
5.5.1 On Using Encryption Schemes 5.5.2 On Information Theoretic Security 5.5.3 On Some Popular Schemes 5.5.4 Historical Notes 5.5.5 Suggestion for Further Reading 5.5.6 Open Problems 5.5.7
Chapter 6: Digital Signatures and Message Authentication
• 6.1 The Setting and Definitional Issues
6.1.1 The two types of schemes: a brief overview 6.1.2 Introduction to the unified treatment 6.1.3 Basic mechanism 6.1.4 Attacks and security 6.1.5* Variants
• 6.2 Length-restricted signature scheme
6.2.1 Definition 6.2.2 The power of length-restricted signature schemes 6.2.3* Constructing collision-free hashing functions
• 6.3 Constructions of Message Authentication Schemes
6.3.1 Applying a pseudorandom function to the document 6.3.2* More on Hash-and-Hide and state-based MACs
• 6.4 Constructions of Signature Schemes
6.4.1 One-time signature schemes 6.4.2 From one-time signature schemes to general ones 6.4.3* Universal One-Way Hash Functions and using them
• 6.5* Some Additional Properties
6.5.1 Unique signatures 6.5.2 Super-secure signature schemes 6.5.3 Off-line/on-line signing 6.5.4 Incremental signatures 6.5.5 Fail-stop signatures
• 6.6 Miscellaneous
6.6.1 On Using Signature Schemes 6.6.2 On Information Theoretic Security 6.6.3 On Some Popular Schemes 6.6.4 Historical Notes 6.6.5 Suggestion for Further Reading 6.6.6 Open Problems 6.6.7
Chapter 7: General Cryptographic Protocols
• 7.1 Overview
7.1.1 The Definitional Approach and Some Models 7.1.2 Some Known Results 7.1.3 Construction Paradigms
• 7.2* The Two-Party Case: Definitions
7.2.1 The syntactic framework 7.2.2 The semi-honest model 7.2.3 The malicious model
• 7.3* Privately Computing (2-Party) Functionalities
7.3.1 Privacy reductions and a composition theorem 7.3.2 The 1-out-of-k OT protocol - definition and construction 7.3.3 Privately computing a multiplication-gate 7.3.4 The circuit evaluation
• 7.4* Forcing (2-party) Semi-Honest Behavior
7.4.1 The protocol compiler - motivation and overview 7.4.2 Security reductions and a composition theorem 7.4.3 The compiler - functionalities in use 7.4.4 The compiler itself
• 7.5* Extension to the Multi-Party Case
7.5.1 Definitions 7.5.2 Security in the Semi-Honest Model 7.5.3 The Malicious Models - Overview and Preliminaries 7.5.4 The first compiler - Forcing Semi-Honest Behavior 7.5.5 The second
compiler - Effectively Preventing Abort
• 7.6* Perfect Security in the Private Channel Model
7.6.1 Definitions 7.6.2 Security in the Semi-Honest Model 7.6.3 Security in the Malicious Model
• 7.7 Miscellaneous
7.7.1* Three deferred issues 7.7.2* Concurrent Executions 7.7.3 Concluding Remarks 7.7.4 Historical Notes 7.7.5 Suggestion for Further Reading 7.7.6 Open Problems 7.7.7 Exercises
Appendix C: Corrections and Additions to Volume 1
• C.1 Enhanced Trapdoor Permutations
• C.2 On Variants of Pseudorandom Functions
• C.3 On Strong Witness Indistinguishability
C.3.1 On parallel composition C.3.2 On Theorem 4.6.8 and an afterthought C.3.3 Consequences
• C.4 On Non-Interactive Zero-Knowledge
C.4.1 On NIZKs with efficient prover strategies C.4.2 On Unbounded NIZKs C.4.3 On Adaptive NIZKs
• C.5 Some developments regarding zero-knowledge
C.5.1 Composing zero-knowledge protocols C.5.2 Using the adversary's program in the proof of security
• C.6 Additional Corrections and Comments
• C.7 Additional Mottoes
Bibliography (approximately 200 entries)
Index (approximately 200 entries)
Frequently Asked Questions
Is there a solution manual of this book? Unfortunately, the answer is negative, but we believe that the guidelines for the exercises provide sufficient clues.
When will this volume 2 appear? It has already appeared (in Spring 2004).
See also answers to other Frequently Asked Questions.
List of Corrections
• On page 436, Item 1, the description fits the private-key case. In the public-key case, $A_1$ should be given input $(e,z)$, rather than $(1^n,z)$. The same holds for Item 1 on page 443. [Rupeng
• On page 443, Item 1: See page 436, Item 1.
• On page 481, Footnote 58, the Decisional Diffie Hellman problem is wrongly stated. One should require instead that $P=2P'+1$, with $P'$ being a prime, and that $g$ is a generator of the set of
quadratic residues mod $P$. [Luca Trevisan; for further details, see Luca's email.]
• Addendum to the guideline of Exer 5.2 (on page 482): To show that the two distributions are statistically far apart, consider the event $S$ such that $(r,s)\in S$ if and only if there exists an
$n$-bit string $K$ such that $D_K(s)=r$.
• On top of page 622, the simulator constructed to demonstrate the point should only output the view $s$, rather than the view coupled with the output $(s,F(s))$. [Carmit Hazay]
• On constructing 1-out-of-k OT (Const. 7.3.5 and Prop 7.3.6). The current description is fine if $k=2$, which actually suffices via additional reductions. For $k>2$ one should either modify Construction 7.3.5 or make stronger assumptions about the trapdoor permutation used. The simplest modification is to let the sender select $k$ indices of permutations (rather than a single one), and have the parties use the $i$th index for $x_i$ and $y_i$. [Ron Rothblum]
• On top of page 698, item 1 (in the discussion of the (first) malicious model) refers to a convention made in Sec. 7.2.3 (which can be found in Footnote 17 on page 628). As pointed out by Yuval
Ishai, in the multi-party context, it seems that the special abort-symbol convention (including a modified functionality) has to be used. In retrospect, we prefer this convention also in the
two-party case.
• Regarding the issue of fairness (cf. Sec. 7.2.3 and 7.7.1.1): See an important revisiting of this issue in Complete Fairness in Secure Two-Party Computation by D. Gordon, C. Hazay, J. Katz, and
Y. Lindell (STOC'08, pp. 413-422).
• Regarding Section 7.6: See a full proof of the BGW Protocol for perfectly-secure multiparty computation, provided by Gilad Asharov and Yehuda Lindell, 2011.
• Regarding NIZK and enhanced trapdoor permutations (see Appendices C.1 and C.4.1): Jonathan Katz has pointed out that although the notion of enhanced trapdoor permutations (Def. C.1.1) suffices
for constructing Oblivious Transfer (see Sec. 7.3.2), it does not seem to suffice for constructing a NIZK (as outlined in Remark 4.10.6 and patched in Apdx C.4.1). A seemingly stronger notion
that does suffice (for constructing a NIZK) may be called a doubly enhanced trapdoor permutation and is defined as an enhanced trapdoor permutation for which given an $n$-bit long description of
a permutation it is feasible to generate a random pair $(x,r)$ such that $x$ is the preimage under the permutation of the domain element generated by the domain-sampling algorithm on coins $r$.
For further details, see note [Nov. 2008 (rev'ed Oct. 2009)].
More about enhanced trapdoor permutations: See Ron Rothblum's Taxonomy of Enhanced Trapdoor Permutations, 2010.
Actually, it may be best to start with a later paper by Ron and myself, titled Enhancements of Trapdoor Permutations.
Material available on-line (see Copyright notice below):
Permission is granted for (non-commercial) use of the following drafts:
Also available (older related material, superseded by the above):
Copyright © 2003 by Oded Goldreich. Permission to make digital or hard copies of part or all of the material posted here is currently granted without fee provided that copies are made only
for personal or classroom use, are not distributed for profit or commercial advantage, and that new copies bear this notice and the full citation on the first page. Abstracting with credit is
permitted. This work will be published in 2004 by Cambridge University Press, and the relevant copyrights have been transferred to it. Once published, permission to make digital or hard copies of
part or all of the material posted here will be granted without fee provided that copies are made only for personal use and are not distributed in any way.
Equation of a Sphere
Date: 3 Jul 1995 23:14:03 -0400
From: Jason Crist
Subject: Equation for the graph of a sphere.
I was wondering what is the equation for a sphere on a graph with the
dimensions x, y, and z? Do you have to use calculus to solve a
system for the intersection of such spheres?
Thank you for your time.
Date: 13 Jul 1995 22:25:32 -0400
From: Dr. Ken
Subject: Re: Equation for the graph of a sphere.
Hey there!
The equation for a sphere of radius 1 with center at the origin is
x^2 + y^2 + z^2 = 1.
In general, the equation for a sphere of radius R with center at (a,b,c) is
(x-a)^2 + (y-b)^2 + (z-c)^2 = R^2.
If you're looking at the intersection of two spheres, the intersection is
always a circle, and once you know that it's usually not too hard to figure
out what the circle is. If, for instance, the two spheres are centered at
(a,b,c) and (A,B,C), then the circle of intersection will be in the plane
(a-A)x + (b-B)y + (c-C)z = k for some constant k. If you find just one
point that's common to both spheres, this will pinpoint k, and then you can
find the intersection of the plane and one of the spheres.
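Here is a small numerical sketch of that recipe (my addition; the two unit spheres are an illustrative choice). Subtracting the two sphere equations leaves the plane of intersection, and projecting one center onto that plane gives the circle's center and radius:

import numpy as np

# Two unit spheres: centers a and A, radii R and r.
a = np.array([0.0, 0.0, 0.0]); R = 1.0
A = np.array([1.0, 0.0, 0.0]); r = 1.0

# Subtracting the sphere equations gives the plane n . p = k.
n = 2 * (A - a)
k = R**2 - r**2 + A.dot(A) - a.dot(a)

# Project the first center onto the plane to get the circle's center,
# then use the Pythagorean theorem for the circle's radius.
c = a + (k - n.dot(a)) / n.dot(n) * n
rho = np.sqrt(R**2 - np.linalg.norm(c - a)**2)
print(c, rho)  # [0.5 0. 0.] and 0.866... = sqrt(3)/2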
|
Scatter Plot on the TI-89 Calculator: How to in Easy Steps
Scatter Plot on the TI-89: Overview
A scatter plot is a type of graph that uses Cartesian coordinates on an x-y plane to display variables for two sets of data. This type of graph can help you visualize the distribution of data and
make predictions about your data. Making a scatter plot is very easy on the TI-89 graphing calculator and involves three phases: Accessing the data matrix editor, inputting your X and Y values and
then graphing the data.
Scatter Plot on the TI-89: Steps:
Sample problem: make a scatter plot for the following data: (1,6), (2,8), (3,9), (4,11), and (5,14).
Accessing the Data Matrix Editor
Step 1: Press APPS, then scroll to the “Data/Matrix” editor, press ENTER and then select “new.”
Step 2: Scroll down to “Variable” and type in desired name. For example, type “scatterone.” Note: you don’t have to press the ALPHA key to access the alpha keypad. Just type!
Step 3: Press ENTER ENTER
Inputting X and Y Values
Step 1: Enter your X values under the “c1” column. Press ENTER after each entry.
For our list, you would need to press:
1 ENTER
2 ENTER
3 ENTER
4 ENTER
5 ENTER
Step 2: Enter your Y values under the “c2” column (use the arrow keys to scroll to the top of the column). Press ENTER after each entry.
For our list, you would need to press:
6 ENTER
8 ENTER
9 ENTER
11 ENTER
14 ENTER
Graphing the Data
Step 1: Press f2 for Plot Setup.
Step 2: Press f1
Step 3: Select “scatter” next to “plot type”
Step 4: Select “box” next to “mark type”
Step 5: Scroll to the “x” box and then press ALPHA ) 1 to enter “c1”.
Step 6: Scroll to the “y” box and then press ALPHA ) 2 to enter “c2”.
Step 7: Press ENTER ENTER
Step 8: Press ♦ f3 to view your scatter plot.
Step 9: Press f2 and then press 9 so that the scatter plot will be drawn in the correct window for the data.
That’s it!
5 thoughts on “Scatter Plot on the TI-89 Calculator: How to in Easy Steps”
1. jennifer
how do you plot the points, When I go to graph it says window variables domain
2. Andale
The Window variables domain error occurs when the minimum value for x or y is larger than the maximum value…check your inputs carefully.
|
Regex for field to contain 13 digits?
I need a regular expression to check a field is either empty or is exactly 13 digits?
Regards, Francis P.
Try this (see also on rubular.com):
^(\d{13})?$
• ^, $ are beginning and end of string anchors
• \d is the character class for digits
• {13} is exact finite repetition
• ? is "zero-or-one of", i.e. optional
On the definition of empty
The above pattern matches a string of 13 digits, or an empty string, i.e. the string whose length is zero. If by "empty" you mean "blank", i.e. possibly containing nothing but whitespace characters,
then you can use \s* as an alternation. Alternation is, simply speaking, how you match this|that. \s is the character class for whitespace characters, * is "zero-or-more of" repetition.
So perhaps something like this (see also on rubular.com):
^(\d{13}|\s*)$
+1 for the explanations.
Xavier Ho Jun 22 '10 at 11:45
Thanks that worked :)
Francis Jun 22 '10 at 12:22
|
Highbridge, NY Calculus Tutor
Find a Highbridge, NY Calculus Tutor
...Keeping a good attitude can be a key part of mastering physics. As an experienced teacher of high school and college level physics courses, I know what your teachers are looking for and I
bring all the tools you'll need to succeed! Of course, a big part of physics is math, and I am experienced ...
18 Subjects: including calculus, reading, GRE, physics
...I acquired my Bachelor's with high honors (GPA 3.72) in Mathematics and Economics as well as my Master's in Statistics (GPA 4.00) from Rutgers University. I think that anyone can learn and
love mathematics when the material is delivered in a fashion that is conducive to the person's understandi...
18 Subjects: including calculus, statistics, algebra 1, algebra 2
...I played varsity softball in high school as a first baseman and left fielder. I led the league in doubles my senior year and ranked among the top in batting average. I am now playing on the
NYU women's softball team.
19 Subjects: including calculus, geometry, biology, algebra 1
...The TAKS exam is very similar to the upper level ISEE exam. It too consists of verbal and mathematical sections. The math sections also include information about learning sequences.
15 Subjects: including calculus, chemistry, physics, geometry
...Math is a system that makes sense, and once you understand it, homework, quizzes, and tests are easy! Greetings! My name is Matt B., and I've been a tutor for the past 7 years, helping people
overcome their difficulties with math and physics.
12 Subjects: including calculus, physics, MCAT, trigonometry
|
the birthday problem [X'idated]
The birthday problem (i.e. looking at the distribution of the birthdates in a group of n persons, assuming [wrongly] a uniform distribution of the calendar dates of those birthdates) is always a
source of puzzlement [for me]! For instance, here is a recent post on Cross Validated:
I have 360 friends on facebook, and, as expected, the distribution of their birthdays is not uniform at all. I have one day that has 9 friends with the same birthday. So, given that some days are more likely for a birthday, I'm assuming the number of 23 is an upper bound.
The figure 9 sounded unlikely, so I ran the following computation:
test=function(N){
 m=rep(0,N)
 for (t in 1:N){ # loop body restored from the discussion in the comments below
  m[t]=max(diff((1:360)[!duplicated(sort(sample(1:365,360,rep=TRUE)))]))
 }
 table(m)
}
whose output is shown in the graph above. (Actually, I must confess I first forgot the sort in the code, which led me to believe that 9 was one of the most likely values and to post it on Cross
Validated! The error was eventually picked up by one administrator. I should know better than to trust my own R code!) According to this simulation, observing 9 or more people having the same birthdate has
an approximate probability of 0.00032… Indeed, fairly unlikely!
Incidentally, this question led me to uncover how to print the above on this webpage. And to learn from the X’idated moderator whuber the use of tabulate. Which avoids the above loop:
> system.time(test(10^5)) #my code above
user system elapsed
26.230 0.028 26.411
> system.time(table(replicate(10^5, max(tabulate(sample(1:365,360,rep=TRUE))))))
user system elapsed
5.708 0.044 5.762
12 Responses to “the birthday problem [X'idated]”
1. [...] problem generated by X’validated (on which I spent much too much time!): given an unbiased coin that produced M heads in the first M [...]
2. I am not 100% sure so there is a chance of saying something that is completely wrong but I think that the code has a bug.
Let’s say that:
> x<-sort(sample(1:365,360,rep=TRUE))
gives us something like
(1, 2, 3, ...,363,363,...,363)
where the last 10 values are 363.
That means that the !duplicated(x) in the last 10 positions will be
T, F, F, …,F (one T and nine F's) where T is True and F is False.
In that case the last value of diff((1:360)[!duplicated(x)]) will not be 10 (as it should) because the last difference it computes will be between the T of the people who were born on the 363rd day of the year and the previous T.
□ Ok, thanks! You are right, if the largest number of common birthdates is the last birthdate of the series, the diff will not pick it up. This creates a small downward bias, mostly negligible.
Now, I think the correction of the bug is to add 361 to the list of indices:
m[t]=max(diff(c((1:360)[!duplicated(sort(sample(1:365,360,rep=TRUE)))],361))) # line restored from the description above
3. About computing the probability of observing 9 or more people having the same birthdate using a Poisson approximation:
the number of your friends having their birthday on one particular day of the year follows approximately a P(lambda) distribution, where lambda = 360/365; the probability that this number is 9 is
approximately p9 = exp(-lambda) lambda^9/9!, so the probability that none of the 365 days is the birthday of exactly 9 friends is approximately (1-p9)^365 ≈ 1-365*p9 (and hardly different for 9 or more).
Hence, the probability you are looking for is approximately 365*p9 = 0.00033, quite close to your Monte-Carlo estimate (a quick numerical check appears after the comments below).
Note that all these approximations can be justified (and quantified) using Chen-Stein's bound for Poisson approximation (which does not require independence).
4. Actually, I think someone commented about the Galton-Watson process in relation to this post, but I [or wordpress] mistakenly removed the comment. Please comment again!
5. Actually one can assume that (while agreeing with the point Rick Wicklin is making) the distribution of birthdates in a group of n people can be modeled by the uniform distribution. Hence your
sample(1:365,360) code.
What is not uniformly distributed is the number of subjects in a group of n persons sharing the same month and day of birth.
The same happens with a series of coin tosses actually:
table(replicate(10^5, max(tabulate(sample(1:2,100,rep=TRUE)))))
6. I think that you put one parenthesis more at the end of the third line.
It should be rep=TRUE)))])) not rep=TRUE))))])).
That said, I am still trying to figure it out.
□ Thanks, Charalampos, corrected! The !duplicated(…) part is selecting the (ordered) dates that have not yet appeared, then (1:360)[...] translates this into indices, and diff(..) gives the
spread between two different and consecutive birthdates, hence the result…
□ Played around a bit in R and figured it out earlier, but thanks a lot for explaining it.
7. Nice work! Apropos the comment about trusting R code, I would like to share my experience with this problem. It began with a *theoretical calculation* (using independent Poisson distributions for
the counts of individual birthdays) which, although not perfectly accurate (due to lack of independence), quickly provided a good sense of what a correct answer should look like. To support that
calculation I ran (fast, easy) simulations on two different platforms (Mathematica and R). Remember, the software (usually!) works correctly; if it errs, the problem lies in what the programmer
told it to do. The combination of theoretical calculations supported by simulations (rather than either one alone) can provide a high level of confidence and trustworthiness in the results.
□ Well said, Bill! And total respect for your awesome contribution to Cross Validated!
8. To answer the question: If you assume uniformity then the estimates of probability are lower bounds for the (real) case of a nonuniform distribution of birthdays. A readable, elementary, proof is
Munford, 1977, TAS, 31(3), p. 119. For a discussion and presentation of real data for US births in 2002, see http://blogs.sas.com/content/iml/2011/09/09/the-most-likely-birthday-in-the-us/
For an interesting variation, imagine a room that contains k people. What is the probability that two people in the room share the same initials (first and last)? This is described, with a
heatmap graphic, at http://blogs.sas.com/content/iml/2011/01/14/two-letter-initials-which-are-the-most-common/
|
what is the solution to the given equation 2x + 4y = 12
Many solutions.
Diophantine equation?
what is the solution of the equation? Given the linear equation
2x + 4y =12
do you mean solve for y?
or x,y intercepts?
x and y intercepts
For the \(x\)-intercept, set \(y=0\) and solve for \(x\). For the \(y\)-intercept, set \(x=0\) and solve for \(y\).
hmm k then so x intercept is when y=0: 2x+4y=12 becomes 2x+0=12, so 2x=12 <======== just divided each side by 2: x=6 is the x intercept, (6,0). y intercept is when x=0: 2x+4y=12 becomes 0+4y=12 <===== just divided each side by 4: y=3, so the y intercept is at y=3, (0,3)
thank you:)
|
Advogato: Blog for Bram
There's a very difficult puzzle in the latest Scientific American; it goes as follows:
Three of the nine members of the president's cabinet are leaks. If the president gives a tip to some subset of his advisors and all three interlopers are in that subset, then that tip will get leaked
to the press. The president has decided to determine who the leaks are by selectively giving tips to advisors and seeing which ones leak. How can this be done by giving each tip to three or four
people, having no more than two leaked tips, and using no more than 25 tips total?
There's a solution on the web site, although the one I figured out is very different.
Here's my solution. First, note that if a tip to four people leaks, the leaks can be found using only three more tips, giving each of them to three of the four.
Arrange eight of the nine advisors on the vertices of a 2x2x2 cube. Test each of the 12 subsets which are coplanar, plus the 2 subsets which are all the same color if they're colored like a
checkerboard. If any of those leak, we're done. If not, we've determined that the odd one out must be one of the leaks.
Next, arrange the eight we previously had in a cube formation into a line, and test all 8 subsets of the form {the odd one out, x, x+1, x+3} (modulo 8).
If none of those hit, then the two other leaks must be of the form x and x + 4, which there are four possibilities for. Three of them can be tried directly and if none leak then the last one can be inferred.
This leads to a total of 14 + 8 + 3 = 25 tips. It's a very hard problem; it took me about 45 minutes to figure out a solution.
An additional question given is whether increasing the number of advisors included in a tip can reduce the number of trials necessary. The answer is yes. For example, to modify the technique I gave
the first test could be of six of the corners of the cube except for two adjacent ones. This effectively does three tests at once, so the total number of tests needed drops to 23. If the first test
turns up positive, all but one of the 20 subsets of 3 of 6 can be tried individually, and if none of them leak then the last one can be inferred, for a total of one initial tip to 6 plus 19
others, or 20 total, so leaking on the first tip isn't the limiting case.
|
MathGroup Archive: February 2011 [00526]
[Date Index] [Thread Index] [Author Index]
Re: Color grid with x and y args to visualize effects of 2D
• To: mathgroup at smc.vnet.net
• Subject: [mg116584] Re: Color grid with x and y args to visualize effects of 2D
• From: "Christopher O. Young" <cy56 at comcast.net>
• Date: Mon, 21 Feb 2011 04:19:14 -0500 (EST)
• References: <ijo58j$lkg$1@smc.vnet.net>
Finally got a fast grid plot going, thanks to Heike's suggestion to use ParametricPlot with two parameters and the right Hue ranges.
The following puts up a window with two sliders for visualizing the effects
of a twist added to a rotation, a kind of "pinwheel" effect.
Manipulate[
 ParametricPlot[
  ( {
     {Cos[(1 + \[ScriptK] Sqrt[u^2 + v^2]) \[Theta]],
      -Sin[(1 + \[ScriptK] Sqrt[u^2 + v^2]) \[Theta]]},
     {Sin[(1 + \[ScriptK] Sqrt[u^2 + v^2]) \[Theta]],
      Cos[(1 + \[ScriptK] Sqrt[u^2 + v^2]) \[Theta]]}
   } ).({u, v}),
  {v, 0, 1}, {u, 0, 1},
  Mesh -> {20, 20},
  MeshShading -> Table[
    Hue[h, s, 1],
    {h, Range[0, 0.85, 0.85/20]},
    {s, Range[0.1, 1, 0.9/20]}],
  MeshStyle -> None,
  Axes -> False,
  Frame -> False,
  BoundaryStyle -> None,
  PlotStyle -> {Opacity[0.1]}],
 {\[Theta], 0, 2 \[Pi]}, {\[ScriptK], 0, 4}]
(* The Manipulate/ParametricPlot head and the {u, v} vector did not
   survive the archive transfer and are restored here by inference. *)
A notebook and a picture are at:
http://home.comcast.net/~cy56/TwistingRotation.nb and
Wish I could put the rotation into a single function, but then it won't
perform the matrix multiplication correctly.
On 2/19/11 5:15 AM, in article ijo58j$lkg$1 at smc.vnet.net, "Christopher O.
Young" <cy56 at comcast.net> wrote:
> You can see my website at
> http://intumath.org/Math/Geometry/Projective%20geometry/projectivegeomet.html
> for an example of the kind of color grid I'm trying to plot. Again, I have
> to be able to transform this via standard matrices, in order to illustrate
> the basics of various transformations. So I need either the x and y
> arguments available, or else I need the t parameter available. Unless
> there's some way in Mathematica to apply transformations in matrix form
> directly to an image.
> Thank you again for any help.
> Chris Young
> cy56 at comcast.net
|
TTT Archives
The Chaos Game: Stimulating Math Curiosity with Interactive Software
by Richard O'Malley, Professor, Department of Mathematical Sciences,
University of Wisconsin-Milwaukee
In the early 1990s, Richard O'Malley was asked to design a math seminar for incoming freshmen at UW-Milwaukee. The seminar had to present math as an area of open inquiry, address the
anxieties of students with little math experience, and sharpen students' critical thinking and writing skills. O'Malley met this challenge in two ways: by clearly defining the course's goals
and his methods of achieving them, and by creating a software program that would engage students visually and intellectually. O'Malley has found the software an effective tool in helping
students understand concepts underlying chaos and fractals. (March 2001)
Mathematics on the Web
Professor Don Piele
Department of Mathematics, UW-Parkside
Don Piele has created an on-line calculus course that allows students to visualize calculus concepts and solve problems using Mathematica, a powerful tool used by both students and
professionals. Prof. Piele provides background on the development of his course, its structure, plus information on software and approximate costs.
Virtual Textbook of College Math
Professor M. Maheswaran
Department of Mathematics, UW-Marathon County
Professor Maheswaran has devised a mathematics course using hypermedia/web materials and math software, as well as a Virtual Textbook of College Algebra and Geometry, a project of his own
design. These materials serve two important functions. First, they provide valuable reference materials to supplement textbooks used in class and, second, they are helpful as presentation and
discussion tools in the classroom.
|
Packing 10 or 11 Unit Squares in a Square
Let $s(n)$ be the side of the smallest square into which it is possible pack $n$ unit squares. We show that $s(10)=3+\sqrt{1\over 2}\approx3.707$ and that $s(11)\geq2+2\sqrt{4\over 5}\approx3.789$.
We also show that an optimal packing of $11$ unit squares with orientations limited to $0$ degrees or $45$ degrees has side $2+2\sqrt{8\over 9}\approx3.886$. These results prove Martin Gardner's
conjecture that $n=11$ is the first case in which an optimal result requires a non-$45$ degree packing.
|
And the winners are...
Plus has opened its temporary head office in Hyderabad! We're here for the International Conference of Women Mathematicians, starting today, and the International Congress of Mathematicians (ICM)
starting on Thursday. The highlight (apart from Plus' presentation on public engagement with maths) will be the award of the Fields Medals for 2010.
The Fields Medal is the most prestigious prize in mathematics, akin to the Nobel Prize. It is awarded to up to four mathematicians at each ICM, which meets every four years. The prize is awarded to
mathematicians under the age of 40 in recognition of their existing work and for the promise of their future achievements. You can read more about the Fields Medal on Plus.
And the Fields medal isn't the only prestigious prize being awarded at the ICM. The Rolf Nevanlinna Prize recognises achievements in mathematical aspects of computer and information science. The Carl
Friedrich Gauss Prize, which was first awarded at the last congress in 2006, is for outstanding mathematical contributions that have found significant applications outside of mathematics. The first
recipient of this prize was the Japanese mathematician Kiyoshi Itô, then aged 90, for his development of stochastic analysis. His work has allowed mathematicians to describe Brownian motion — a
random motion similar to the one you see when you let a particle float in a liquid or gas. Itô's theory applies also to the size of a population of living organisms, to the frequency of a certain
allele within the gene pool of a population, or even more complex biological quantities. It is also now integral to financial trading as it forms the basis of the Black-Scholes formula underlying
almost all financial transactions that involve options or futures. (You can read more about the Black-Scholes formula in A risky business: how to price derivatives on Plus.)
This year's ICM also sees the inauguration of a new prize, the Chern Medal, for an individual whose accomplishments warrant the highest level of recognition for outstanding achievements in the field
of mathematics, regardless of their field or occupation. The medal is in memory of the outstanding Chinese mathematician Shiing-Shen Chern. Plus is looking forward to finding out the winners of all
of the prizes at this year's ICM, and more importantly, to learning about their mathematical achievements and how they have contributed to mathematics and society at large. Stay tuned to our news
section, our blog or follow us on Twitter to find out all the news first.
|
Math Help
Could you please help me solve these questions? Thanks
1. Use Venn diagrams or membership table to demonstrates that:
A U (B ∩ C) = (A U C) ∩ (A U C) is a true equation for any sets A, B, and C. If you use Venn diagrams, show the three circles overlapping and include a legend pointing out which set is represented by which shaded regions.
2. Find the best conclusion for this Lewis Carroll problem . The "best" conclusion is one which uses all of the given information.
A. No one who is going to a party ever fails to brush his hair.
B. No one looks fascinating , if he is untidy.
C. Opium -eaters have no self-command.
D.Everyone who has brushed his hair looks fascinating.
E.No one wears white kid gloves unless he is going to a party.
F. A person is always untidy if he/she has no self -command.
Assign letters to the statements , write the premises using logic symbols , write a valid argument which leads to the best conclusion , then write the conclusion in words.
3. Prove this statement : For all X ε { 2,4,6,8,12,19}, 3x +12 is an even number.
Please help !!! Thanks
I would use the membership table to solve the 1st problem. Set A, B and C to values 1 and 0 and you will be able to prove whether the 2 equations are logically equivalent or not. And I think they are not. Please correct me if I am wrong; learning in the process.
A B C | B ^ C | A U (B ^ C) | A U C | (A U C) ^ (A U C)
0 0 0 |   0   |      0      |   0   |         0
0 0 1 |   0   |      0      |   1   |         1
0 1 0 |   0   |      0      |   0   |         0
0 1 1 |   1   |      1      |   1   |         1
1 0 0 |   0   |      1      |   1   |         1
1 0 1 |   0   |      1      |   1   |         1
1 1 0 |   0   |      1      |   1   |         1
1 1 1 |   1   |      1      |   1   |         1
|
Dacula Geometry Tutor
Find a Dacula Geometry Tutor
...I will be graduating in 2015. I have been doing private math tutoring since I was a sophomore in high school. I believe in guiding students to the answers through prompt questions.
9 Subjects: including geometry, algebra 1, algebra 2, precalculus
...I also make things easier to learn by getting into the mind of the student to know how they relate to the subject and finding the right way to communicate the knowledge.I have a Master's
degree in Management Information Systems which relies heavily on math. I had an overall GPA of 3.75 throughou...
29 Subjects: including geometry, English, reading, writing
...Elementary math is just that -- elementary -- once you learn the connections/relationship between one concept and another. For example, to find an answer for a division problem, look for the
multiple of the numbers. This type of technique gives students the confidence they need to tackle any math problems by breaking it up into smaller components that they recognize.
26 Subjects: including geometry, English, reading, algebra 2
I am a Georgia Certified Math Teacher 6th-12th grade. I have an undergraduate degree in Math/Science Education (BS degree) from Clemson University and a master's degree in Business (MBA) in
Finance from Georgia State University. I taught high school math before going into the business world and sales and sales management.
6 Subjects: including geometry, algebra 1, GED, algebra 2
...I am an economics major with experience in Ph.D. level courses in macro-economics and econometrics. I am very familiar with all the subjects commonly taught in an undergraduate econometrics
course. I majored in mathematics in my undergraduate career.
30 Subjects: including geometry, English, reading, ESL/ESOL
|
Homework Help
Posted by Lindsay on Sunday, September 30, 2007 at 12:34pm.
A model rocket is launched straight upward with an initial speed of 50 m/s. It accelerates with a constant upward acceleration of 2.00 m/s^2 until its engines stop at an altitude of 150 m.
a) What is the max. height reached by the rocket?
b) When does the rocket reach max. height?
c) How long is the rocket in the air?
I really need help with which equation to use for this problem. I'm unsure of even where to begin...!
• Physics - drwls, Sunday, September 30, 2007 at 1:02pm
a) First, calculate the velocity Vmax attained during acceleration at rate a. (They already tell you the altitude there, H = 150 m). Then compute how much higher it "coasts" before reaching
maximum altitude.
Vmax = sqrt(2aH)= 24.5 m/s
To have the velocity decrease to zero, the additional time of flight t' is given by
g t' = 24.5 m/s, so
t' = 24.5/9.8 = 2.5 s
The average speed while coasting to zero velocity is Vmax/2 = 12.25 m/s.
Additionl altitude gained = (2.5)(12.25) = 30.63 m
Maximum altitude = Hmax = 150 + 30.63 = 180.63 m
b) It attains maximum height 2.5 seconds after acceleration stopped. We already calculated that. Add that to the time spent accelerating for the total time after launch. The time t spent
accelerating is given by
(1/2) a t^2 = 150 m
t = sqrt (2*150/a)= 12.25 s
Total time to reach maximum altitude = t + t' = ?
c) Add to the last answer the time that it takes to fall back down. Call this time t"
(1/2) g t"^2 = Hmax
Solve for t" and add it to the time in the previous answer
• Physics - Stephanie, Sunday, September 30, 2007 at 2:45pm
For part a, my book says the answer is 310 m. I'm still really confused on how they got this answer.
{"url":"http://www.jiskha.com/display.cgi?id=1191170060","timestamp":"2014-04-19T01:58:54Z","content_type":null,"content_length":"9901","record_id":"<urn:uuid:a9851ff4-fefb-45dd-a6a9-422788473dad>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by GREG
Total # Posts: 397
Math Accounting
Determine the proposal s appropriateness and economic viability. For all scenarios, assume spending occurs on the first day of each year and benefits or savings occurs on the last day. Assume the
discount rate or weighted average cost of capital is 10%. Ignore taxes and d...
In right triangle, the side opposite to right angle is called what? ANSWER FAST PLZ!!! THX
alg 2
TRIG help quickly please
Mark any solutions to the equation 2cos2x - 1 = 0. The answer can be more than one. Thank you π/8 -7π/4 15π/4 3π/4
Science check answers please and quickly
1. Jim is doing a physics experiment on the rate of cooling of solid objects in air. Which of the following is a variable that is not directly part of his experiment, but should be controlled in his
experiment? (Points : 1) initial temperature to which he heats the solid objec...
Check Answers Quick -Physics
What causes the centripetal acceleration of an electron in a hydrogen atom? electrical attraction gravity friction <-- tension Question 2. Which of the following terms might a jet pilot use to
describe a force that presses him against his seat during a high-speed turn? radi...
It is spelled physic
A/P Biology
The Echinopsis mamillosa is a flowering cactus native to Bolivia. A high-rise building is built near a particular Echinopsis, filtering the amount of direct sunlight it receives. Which of the
following processes is negatively affected first by the reduction in sunlight? Flower...
Math 8th grade
Type the missing number. -25 -23 -22 -21
math - calc
A conical water tank with vertex down has a radius of 12 feet at the top and is 26 feet high. If water flows into the tank at a rate of 30 {\rm ft}^3{\rm /min}, how fast is the depth of the water
increasing when the water is 12 feet deep?
chemistry science
substrate concentration= 3.5mM (c2) buffer=0.4 ml 10mM PNPP stock= ?ml (V1) water=?ml enzyme= 0.2ml total volume =1ml (V2) I need help with a dilution, I understand that you would be using c1v1=c2v2
but im confused as to what we need to find the amount of water and PNPP stock ...
Math- factoring
Show all work and factor completely. 6a^2+5a-4 (6a^2+8a)-(3a-4) 2a(3a+4)-1(3a+4) (3a+4)(2a+1) Is this correct?
The Empire Carpet Company orders merchandise for $17,700, including $550 in shipping charges, from Mohawk Carpet Mills on May 4. Carpets valued at $1,390 will be returned because they are damaged.
The terms of sale are 2/10, n30 ROG. The shipment arrives on May 26 and Empire ...
if a plane is traveling at an altitude of 11.9 miles and is 46 miles from the airport and is descending at a 12 degree angle, what is the mph to the airport?
Algebra 2
A pump removes 1000 gal of water from a pool at a constant rate of 50 gal/min. A.write an equation to find the amount of water y in the pool after t minutes B. graph the equation and interpret the t-
and y-intercepts.
Vector C has a magnitude 25.8 m and is in the direction of the negative y axis. Vectors A and B are at angles α = 41.9° and β = 26.7° up from the x axis respectively. If the vector sum A + B + C = 0, what are the magnitudes of A and B?
Finite Math
A sinking fund is the accumulated amount to be realized at some future date (the end of the term) when a fixed number of periodic payments are paid into an account earning interest at the rate of i
per period.
250 people on a plane; 120 speak english;150 speak spanish; 30 speak neither. what probability speak English and Spanish ?
Suppose the quantity demanded per week of a certain dress is related to price by the demand equation p = √(75 - 2x). In order to maximize revenue, how many dresses should be made and sold each week?
Physics...NEED HELP
Physics help. what are the steps to solving the following problems? Attack on the ice fortress! A comic book villain has taken shelter in an ice fortress in an attempt to hide from the Avengers.
Apparently, though, only the Hulk and some guy with a cannon are available to try ...
If the probability of an event happening is 2/5, then the probability of the event not happening is 3/5? This is False. The answer is 1/3. Explain why the answer is 1/3
is correct to say "the patient was brought to the operating room." ? "was brought" seems awkward. Instead, "the patient was taken to the OR."
computer science
I need help one how create a spreadsheet that will give an accurate launch profile for a vertical liftoff for the SpaceX Falcon 9 rocket. the information they give me is: Start with a rocket with a
mass, M1, moving with a speed, v1 The rocket expels some small mass...
volumes of prisms
How do the volume of prisms compare as the number of faces in the prisms increases? Does the volume remain the same? Explain Please!
It costs $35 for 5 friends to go it for lunch. How much is the total bill including 8% tax? How much will each person pay if everyone pays an equal amount?
For parts A-C: [ASCII sketch lost in transfer: top and rear views of the car on the turn] A car of mass, 2m, is going around a turn at a constant radius, R, with velocity, v0. The turn is not banked; it is a flat surface. Assume the car and the road are on Earth's surface...
How many kilojoules of energy are needed to convert 104 g of ice at -14.6°C to water at 25.8°C? (The specific heat of ice at -14.6°C is 2.01 J/g°C.)
Given that Ka for HClO is 4.0 × 10^-8 at 25 °C, what is the value of Kb for ClO- at 25 °C? Given that Kb for CH3CH2NH2 is 6.3 × 10^-4 at 25 °C, what is the value of Ka for CH3CH2NH3+ at 25 °C?
I know how to do the ice chart and set up i just dont know how to get the x value that you are saying that i know...
I dont understand what Y and X are supposed to be
Hi, I am an 11th grade student beginning to look into some college options. I am interested in pursuing a career in proctology and have been reading some preliminary books on the topic. Can anyone
recommend a college or pre-med program that would offer a strong proctology prog...
Critical Points, Etc.
find the values for a, b, and c such that the function f(x)= x^3 + ax^2 + bx+ c has a critical point at (1,5) and an inflection point at (2,3). a= -6 b= 3 c= 7 i got those, but they're wrong. i'm not
sure why. :/
Tangent Lines (2)-Calculus
find an equation of the line tangent to y= sqrt(25-x^2) at the point (3,4) i got y= x/8 + 29/8 this is wrong. but im not sure why. can you explain thank you
Tangent Lines-Calculus
a line is tangent to the curve y= (x^2 + 3)/ (x+3)^1/2 at the point where x=1. write the equation of this line i get y= 3x-1 yet it's wrong. please explain thank you
IM SORRY!!!
Quantum Physics
The March 9 comment is right.
Linear Function graphing. y=-x ???
Can someone Please Help
Can someone Please Help
You decide to establish a new temperature scale on which the melting point of ammonia (-77.75 C ) is 0 A, and the boiling point of ammonia (-33.35 C ) is 200 A What would be the temperature of
absolute zero in A ?
Chem Help!!!!
thank you
I used slope to find the answer to the first part which was correct... the second part is what I need help with
Where did you get 44.4 from
Can someone pls help me!! You decide to establish a new temperature scale on which the melting point of ammonia (-77.75 C ) is 0 A, and the boiling point of ammonia (-33.35 C ) is 200 A. What would
be the boiling point of water in A ? What would be the temperature of absolute ...
Chem Help!!
How many inches is the section of the bar
You decide to establish a new temperature scale on which the melting point of ammonia (-77.75 C ) is 0 A, and the boiling point of ammonia (-33.35 C ) is 200 A. What would be the boiling point of
water in A ? What would be the temperature of absolute zero in A ?
Suppose that Cot(theta) = c and 0<theta<pi/2. what is a formula for tan (theta) in terms of c? Cot = 1/tan I know that, but I don't understand what the question is really asking for.
Calculate Directly Log (base 4) 2 + log (base 16) 2
Write fraction in simplest form. 25x2/35x
i just need help to set them up
10. If the calcium oxide were obtained by the heating of calcium hydroxide, how much hydroxide would be needed to obtain the 15.0 g? 11. If the 15.0 g of calcium oxide in the above problems were
allowed to react with water, what amount of calcium hydroxide would be produced? 1...
Solve sin2x + sin4x = cos2x + cos 4x for x in the interval[0,2pi) Hint: the following substitution should come in handy: sin3x= cos3x . tan3x
Toby is making a scale model of the battlefield at Fredericksburg. The area he wants to model measures about 11 mi by 7.5mi. He plans to put the model on a 3.25 ft by 3.25ft square table. On each
side of the model he wants to leave at least 3 inches between the model and the t...
soc 305
In American society, those least likely to be living in poverty are ____________.
A thin, uniform rod is hinged at its midpoint. To begin with, one half of the rod is bent upward and is perpendicular to the other half. This bent object is rotating at an angular velocity of 6.8 rad
/s about an axis that is perpendicular to the left end of the rod and parallel...
Social Studies
I was wondering if I could check my answers with someone. I'm a bit confused with interest. I put stars next to the answers I chose. Thank you. An interest rate is a special type of (1 point) loan.
**price. bank. service. 2. How does a compound interest rate differ from a ...
what conjecture could be made and proven from the following? sin(3x)/sin(x) - cos(3x)/cos(x) I put it on my calculator and based on the graph I say that as it crosses the y axis the values get
Difficult Trig Word Problem
sorry y(t) =4e(then the exponent -3t) all times sin(2pi*t)
Difficult Trig Word Problem
The displacement of a spring vibrating in damped harmonic motion is given by y(t) = 4e^-3t sin(2pi*t) where y = displacement and t = time with t greater than/equal to zero. Find the time(s) when the
spring is at its equilibrium position (y=0). The number "e" is Euler...
Precalculus 2
Similiarly to a question I asked previously I took the same approach. the problem is: what value(s) of theta solve the following equation? cos(theta)sin(theta) - 2 cos(theta) = 0? I let Cos theta = X
and sin theta = X unfortunately I end up with something completely odd. Can a...
solve any triangles satisfying alpha = 20 degrees b = 10 c = 16 b and c are 2 side values and if I use the pythagorean theorem and I get 18.18? Did I do this right?
What value(s) of theta solve the following equation? cos^2(theta)-cos(theta)-6=0? I try plugging in different numbers but I am not sure exactly what I am looking for in order to solve?
Sorry but I meant to say LS = cos^2x-sin^2y
Prove or disprove cos(x+y)cos(x-y)=cos squared (x) - Sin squared (x) I dstributed the cosines and attempted to cancel out terms but I can't get the signs right. Any help on what I am missing?
A certain light truck can go around a flat curve having a radius of 150 m with a maximum speed of 29.0 m/s. With what maximum speed can it go around a curve having a radius of 88.0 m?
An air puck of mass 0.23 kg is tied to a string and allowed to revolve in a circle of radius 1.0 m on a frictionless horizontal table. The other end of the string passes through a hole in the center
of the table, and a mass of 0.9 kg is tied to it. The suspended mass remains i...
A race car starts from rest on a circular track of radius 445 m. The car's speed increases at the constant rate of 0.380 m/s2. At the point where the magnitudes of the centripetal and tangential
accelerations are equal, find the following. (a) the speed of the race car (b)...
A rotating wheel requires 8.00 s to rotate 29.0 revolutions. Its angular velocity at the end of the 8.00 s interval is 95.0 rad/s. What is the constant angular acceleration of the wheel? (Do not
assume that the wheel starts at rest.)
Draw and solve any triangles satisfying alpha = 29 degrees a = 7 c = 14 I realize I need to draw a non-right triangle and label the sides but I am completely confused one what to do after I draw the
A 7.45-g bullet is moving horizontally with a velocity of +348 m/s, where the sign + indicates that it is moving to the right (see part a of the drawing). The bullet is approaching two blocks resting
on a horizontal frictionless surface. Air resistance is negligible. The bulle...
If i have a string with a word 'today' in it. If a user inputs a char (lets say b). how do I right the array to make it see if the char "b" is in the string today. LOOP??? int i; for(i=0; i<SIZE;
i++) { guessedChar[i]; /*or*/ wordString[i]; }
int a = 10; int b = 5; int total = 0; printf("%d + %d = %d\n", a, b, total); printf("%d - %d = %d\n", a, b, total); printf("%d * %d = %d\n", a, b, total); printf("%d / %d = %d\n", a, b, total); /
*Without an array*/ if(var == 1) { return ...
The drawing shows two boxes resting on frictionless ramps. One box is relatively light and sits on a steep ramp. The other box is heavier and rests on a ramp that is less steep. The boxes are
released from rest at A and allowed to slide down the ramps. The two boxes have masse...
C programming
I'll help you but I'm not gonna right your code for you. What are you lost on?
carpel tunnel syndrome? is that what your asking?
phi 103
"If that wasn't illegal, then it wouldn't be against the law" may commit which fallacy?
The drawing shows two boxes resting on frictionless ramps. One box is relatively light and sits on a steep ramp. The other box is heavier and rests on a ramp that is less steep. The boxes are
released from rest at A and allowed to slide down the ramps. The two boxes have masse
Calculus L'Hopital Rule
lim x->0 (x)sin(x)/1-cos(x). They both go to 0 so 0/0. now take L'H and product rule of top? lim x->0 (1)sin(x)+ cos(x)(x)/1-cos(x) what next? how do i solve from here?
f(4)= 4^4 e^-4 = 4.6888
f(x)=x^4e^-x f(0)=0^4e^0 = 0 f(-4)=-4^4e^4 = 13977.13 I feel like i didnt do that right. It seems like a weird number for a critcal point.
Sub 0 and -4 into f(x). or f'(x)?
Would the derivative be e^-x (x^4-4x^3)?
Find the critical points of f(x)=x^4e^-x. You would use the product rule. Then what?
The side of a square box is increasing at a rate of 4mm/s and its height is increasing at a rate of 2mm/s. How fast is the volume increasing when the side is 40mm and the height is 100mm. Can someone
just tell me what formula to use?
y=sqrt(x)^1/x. Find dy/dx That is to the power of 1/x So to start this do I need to take Ln on both sides and then use a logarithm property? Ln(y)=lnx^1/2x?
The complete combustion of 1.283g of cinnamaldeyde (C9H8O, one of the compounds in cinnamon) in a bomb calorimeter (Ccalorimeter=3.841 kJ/C) produced an increase in temperature of 130.32 C. Calculate
the molar enthalpy of combustion of cinnamaldehydy (delta H comb) (in kilojou...
how would you separate a mixture of NaCl, Fe pellets and water
how would you separate a mixture of NaCl, Fe pellets and water
Complete the electron dot structure below to show how beryllium fluoride (BeF2) is formed. [electron-dot diagram garbled in transfer: F F Be -> Be F F]
What is the coordination number of the ions in a face-centered cubic crystal? In a simple cubic crystal?
Write the symbol and electron configuration for each ion and name the noble gas with the same configuration. a. nitride _________________________ b. oxide ___________________________ c. sulfide
_________________________ d. bromide _________________________
How many electrons will each element gain in forming an ion? a. Nitrogen ______ b. Oxygen ________ c. Sulfur _______ d. bromine _______
Why do beryllium and fluorine combine in a 1:2 ratio?
Pages: 1 | 2 | 3 | 4 | Next>>
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=GREG","timestamp":"2014-04-20T06:44:08Z","content_type":null,"content_length":"29714","record_id":"<urn:uuid:0dffb24e-853b-47d6-8c49-53e4b7e5d21e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
26.1 Descriptive Statistics
One principal goal of descriptive statistics is to represent the essence of a large data set concisely. Octave provides the mean, median, and mode functions which all summarize a data set with just a
single number corresponding to the central tendency of the data.
Compute the mean of the elements of the vector x.
mean (x) = SUM_i x(i) / N
If x is a matrix, compute the mean for each column and return them in a row vector.
The optional argument opt selects the type of mean to compute. The following options are recognized:
Compute the (ordinary) arithmetic mean. [default]
Compute the geometric mean.
Compute the harmonic mean.
If the optional argument dim is given, operate along this dimension.
Both dim and opt are optional. If both are supplied, either may appear first.
See also: median, mode.
Compute the median value of the elements of the vector x. If the elements of x are sorted, the median is defined as
median (x) = x(ceil(N/2)),              if N is odd
           = (x(N/2) + x((N/2)+1))/2,   if N is even
If x is a matrix, compute the median value for each column and return them in a row vector. If the optional dim argument is given, operate along this dimension.
See also: mean, mode.
Compute the most frequently occurring value in a dataset (mode). mode determines the frequency of values along the first non-singleton dimension and returns the value with the highest frequency.
If two, or more, values have the same frequency mode returns the smallest.
If the optional argument dim is given, operate along this dimension.
The return variable f is the number of occurrences of the mode in the dataset. The cell array c contains all of the elements with the maximum frequency.
See also: mean, median.
Using just one number, such as the mean, to represent an entire data set may not give an accurate picture of the data. One way to characterize a data set further is to measure its dispersion. Octave
provides several functions for measuring dispersion.
Return the range, i.e., the difference between the maximum and the minimum of the input data. If x is a vector, the range is calculated over the elements of x. If x is a matrix, the range is
calculated over each column of x.
If the optional argument dim is given, operate along this dimension.
The range is a quickly computed measure of the dispersion of a data set, but is less accurate than iqr if there are outlying data points.
See also: iqr, std.
Return the interquartile range, i.e., the difference between the upper and lower quartile of the input data. If x is a matrix, do the above for first non-singleton dimension of x.
If the optional argument dim is given, operate along this dimension.
As a measure of dispersion, the interquartile range is less affected by outliers than either range or std.
See also: range, std.
Compute the mean square of the elements of the vector x.
meansq (x) = 1/N SUM_i x(i)^2
For matrix arguments, return a row vector containing the mean square of each column.
If the optional argument dim is given, operate along this dimension.
See also: var, std, moment.
Compute the standard deviation of the elements of the vector x.
std (x) = sqrt ( 1/(N-1) SUM_i (x(i) - mean(x))^2 )
where N is the number of elements.
If x is a matrix, compute the standard deviation for each column and return them in a row vector.
The argument opt determines the type of normalization to use. Valid values are
normalize with N-1, provides the square root of the best unbiased estimator of the variance [default]
normalize with N, this provides the square root of the second moment around the mean
If the optional argument dim is given, operate along this dimension.
See also: var, range, iqr, mean, median.
In addition to knowing the size of a dispersion it is useful to know the shape of the data set. For example, are data points massed to the left or right of the mean? Octave provides several common
measures to describe the shape of the data set. Octave can also calculate moments allowing arbitrary shape measures to be developed.
Compute the variance of the elements of the vector x.
var (x) = 1/(N-1) SUM_i (x(i) - mean(x))^2
If x is a matrix, compute the variance for each column and return them in a row vector.
The argument opt determines the type of normalization to use. Valid values are
normalize with N-1, provides the best unbiased estimator of the variance [default]
normalizes with N, this provides the second moment around the mean
If N==1 the value of opt is ignored and normalization by N is used.
If the optional argument dim is given, operate along this dimension.
See also: cov, std, skewness, kurtosis, moment.
Compute the sample skewness of the elements of x:
skewness (X) = mean ((x - mean (x)).^3) / std (x).^3
The optional argument flag controls which normalization is used. If flag is equal to 1 (default value, used when flag is omitted or empty), return the sample skewness as defined above. If flag is
equal to 0, return the adjusted skewness coefficient instead:
skewness (X, 0) = sqrt (N*(N-1)) / (N - 2) * mean ((x - mean (x)).^3) / std (x).^3
The adjusted skewness coefficient is obtained by replacing the sample second and third central moments by their bias-corrected versions.
If x is a matrix, or more generally a multi-dimensional array, return the skewness along the first non-singleton dimension. If the optional dim argument is given, operate along this dimension.
See also: var, kurtosis, moment.
Compute the sample kurtosis of the elements of x:
k1 = mean ((x - mean (x)).^4) / std (x).^4
The optional argument flag controls which normalization is used. If flag is equal to 1 (default value, used when flag is omitted or empty), return the sample kurtosis as defined above. If flag is
equal to 0, return the "bias-corrected" kurtosis coefficient instead:
k0 = 3 + (N - 1) / ((N - 2) * (N - 3)) * ((N + 1) * k1 - 3 * (N - 1))
The bias-corrected kurtosis coefficient is obtained by replacing the sample second and fourth central moments by their unbiased versions. It is an unbiased estimate of the population kurtosis for
normal populations.
If x is a matrix, or more generally a multi-dimensional array, return the kurtosis along the first non-singleton dimension. If the optional dim argument is given, operate along this dimension.
See also: var, skewness, moment.
Compute the p-th central moment of the vector x.
1/N SUM_i (x(i) - mean(x))^p
If x is a matrix, return the row vector containing the p-th central moment of each column.
The optional string type specifies the type of moment to be computed. Valid options are:
"c": Central Moment (default).
"a", "ac": Absolute Central Moment, i.e., the moment about the mean ignoring sign, defined as
1/N SUM_i (abs (x(i) - mean(x)))^p
"r": Raw Moment, i.e., the moment about zero, defined as
1/N SUM_i x(i)^p
"ar": Absolute Raw Moment, i.e., the moment about zero ignoring sign, defined as
1/N SUM_i (abs (x(i)))^p
If the optional argument dim is given, operate along this dimension.
If both type and dim are given they may appear in any order.
See also: var, skewness, kurtosis.
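The four moment types are easy to state directly; here is a NumPy sketch (an illustration, not the Octave implementation):

import numpy as np

def moment(x, p, kind="c"):
    x = np.asarray(x, dtype=float)
    if kind == "c":            # central moment
        return np.mean((x - x.mean()) ** p)
    if kind in ("a", "ac"):    # absolute central moment
        return np.mean(np.abs(x - x.mean()) ** p)
    if kind == "r":            # raw moment
        return np.mean(x ** p)
    if kind == "ar":           # absolute raw moment
        return np.mean(np.abs(x) ** p)
    raise ValueError(kind)

x = [-2, -1, 0, 1, 2]
print(moment(x, 2, "c"), moment(x, 3, "c"), moment(x, 2, "r"))  # 2.0 0.0 2.0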
For a sample, x, calculate the quantiles, q, corresponding to the cumulative probability values in p. All non-numeric values (NaNs) of x are ignored.
If x is a matrix, compute the quantiles for each column and return them in a matrix, such that the i-th row of q contains the p(i)th quantiles of each column of x.
If p is unspecified, return the quantiles for [0.00 0.25 0.50 0.75 1.00]. The optional argument dim determines the dimension along which the quantiles are calculated. If dim is omitted, and x is
a vector or matrix, it defaults to 1 (column-wise quantiles). If x is an N-D array, dim defaults to the first non-singleton dimension.
The methods available to calculate sample quantiles are the nine methods used by R (http://www.r-project.org/). The default value is METHOD = 5.
Discontinuous sample quantile methods 1, 2, and 3:
Method 1: Inverse of empirical distribution function.
Method 2: Similar to method 1 but with averaging at discontinuities.
Method 3: SAS definition: nearest even order statistic.
Continuous sample quantile methods 4 through 9, where p(k) is the linear interpolation function respecting each method's representative cdf:
Method 4: p(k) = k / n. That is, linear interpolation of the empirical cdf.
Method 5: p(k) = (k - 0.5) / n. That is a piecewise linear function where the knots are the values midway through the steps of the empirical cdf.
Method 6: p(k) = k / (n + 1).
Method 7: p(k) = (k - 1) / (n - 1).
Method 8: p(k) = (k - 1/3) / (n + 1/3). The resulting quantile estimates are approximately median-unbiased regardless of the distribution of x.
Method 9: p(k) = (k - 3/8) / (n + 1/4). The resulting quantile estimates are approximately unbiased for the expected order statistics if x is normally distributed.
Hyndman and Fan (1996) recommend method 8. Maxima, S, and R (versions prior to 2.0.0) use 7 as their default. Minitab and SPSS use method 6. MATLAB uses method 5.
□ Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
□ Hyndman, R. J. and Fan, Y. (1996) Sample quantiles in statistical packages, American Statistician, 50, 361–365.
□ R: A Language and Environment for Statistical Computing; http://cran.r-project.org/doc/manuals/fullrefman.pdf.
x = randi (1000, [10, 1]); # Create empirical data in range 1-1000
q = quantile (x, [0, 1]); # Return minimum, maximum of distribution
q = quantile (x, [0.25 0.5 0.75]); # Return quartiles of distribution
See also: prctile.
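Method 5 above (the MATLAB default) is compact to state in code; here is a NumPy sketch (an illustration, not the Octave source):

import numpy as np

def quantile_method5(x, p):
    # Linear interpolation of the piecewise-linear cdf with knots at
    # p(k) = (k - 0.5) / n; values outside the knot range are clamped.
    xs = np.sort(np.asarray(x, dtype=float))
    pk = (np.arange(1, len(xs) + 1) - 0.5) / len(xs)
    return np.interp(p, pk, xs)

print(quantile_method5([1, 2, 3, 4], [0.25, 0.5, 0.75]))  # [1.5 2.5 3.5]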
For a sample x, compute the quantiles, q, corresponding to the cumulative probability values, p, in percent. All non-numeric values (NaNs) of x are ignored.
If x is a matrix, compute the percentiles for each column and return them in a matrix, such that the i-th row of q contains the p(i)th percentiles of each column of x.
If p is unspecified, return the quantiles for [0 25 50 75 100]. The optional argument dim determines the dimension along which the percentiles are calculated. If dim is omitted, and x is a vector
or matrix, it defaults to 1 (column-wise quantiles). When x is an N-D array, dim defaults to the first non-singleton dimension.
See also: quantile.
A summary view of a data set can be generated quickly with the statistics function.
Return a vector with the minimum, first quartile, median, third quartile, maximum, mean, standard deviation, skewness, and kurtosis of the elements of the vector x.
If x is a matrix, calculate statistics over the first non-singleton dimension. If the optional argument dim is given, operate along this dimension.
See also: min, max, median, mean, std, skewness, kurtosis.
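A rough NumPy/SciPy analogue of that summary vector (a sketch only; quantile conventions differ slightly between implementations):

import numpy as np
from scipy import stats

def summary(x):
    x = np.asarray(x, dtype=float)
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return np.array([x.min(), q1, med, q3, x.max(),
                     x.mean(), x.std(ddof=1),
                     stats.skew(x), stats.kurtosis(x, fisher=False)])

print(summary([1, 2, 3, 4, 5, 6, 7, 8, 9]))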
|
{"url":"http://www.gnu.org/software/octave/doc/interpreter/Descriptive-Statistics.html","timestamp":"2014-04-17T17:05:41Z","content_type":null,"content_length":"25463","record_id":"<urn:uuid:19e333c7-4338-4130-8c71-80ecfa0d30f4>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 811.05068
Autor: Erdös, Paul; Gyárfás, András; Luczak, Tomasz
Title: Independent transversals in sparse partite hypergraphs. (In English)
Source: Comb. Probab. Comput. 3, No.3, 293-296 (1994).
Review: An [n, k, r]-hypergraph is any hypergraph with a kn-element set of vertices and whose edges can be obtained in the following manner: (1) partition the set of vertices into n k-element sets V[1], ..., V[n]; (2) for any r-element subfamily {V[i[1]], ..., V[i[r]]} form an edge by picking from each V[i[j]] exactly one element. An independent transversal is a set of vertices which meets each V[i] in exactly one point and does not contain any edge. The paper provides estimates for the function f[r](k), the largest n for which any [n,k,r]-hypergraph has an independent transversal. In particular, in the case when r = 2, it is proved that (1+o(1))(2e)^-1 k^2 < f[2](k) < (1+o(1))k^2. For those values of k for which an affine plane of order k+1 exists, it is shown that f[2](k) < (k+1)^2. Asymptotics for f[r](k), in several cases when k is small compared to r, are also given.
Reviewer: M.Truszczynski (Lexington)
Classif.: * 05D15 Transversal (matching) theory
05C65 Hypergraphs
Keywords: sparse partite hypergraphs; hypergraphics; probabilistic method; independent transversal; affine plane
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
|
{"url":"http://www.emis.de/classics/Erdos/cit/81105068.htm","timestamp":"2014-04-20T01:04:30Z","content_type":null,"content_length":"4527","record_id":"<urn:uuid:1df15025-5e73-4fee-a6ee-aef679e18c4d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Θ-classes to arrange functions?
Use the rules and definition for ordering Θ-classes to arrange the following in order from lowest to highest:
lg n ; (lg n)^6 ; lg(n^n) ; n^4 ; n^(1/2) ; 5n^3+n^2 ; 1^n ; (1.001)^n ; 5^n
I used the rules as I understood them and ended very uncertainly with the following order:
1^n (1 to the power of anything is 1, a constant)
lg n
lg (n^n)
(lg n)^6
I'm really not sure about the order so if someone could please check and tell me where and why I went wrong I would really appreciate it.
Thanks in advance.
The general rule is that for every $a_1,a_2>1$ and $\beta_1,\beta_2,\beta_3>0$, for the functions $f_1(n)=(\log_{a_1} n)^{\beta_1}$, $f_2(n)=n^{\beta_2}$ and $f_3(n)={a_2}^{{\beta_3} n}$, we have
$\lim_{n\to\infty}f_1(n)/f_2(n)=0$ and $\lim_{n\to\infty}f_2(n)/f_3(n)=0$. So your answer is correct except for the place of $(\lg n)^6$.
By the way, did you realize that for $n=1000\,000$, $n^4=10^{24}$ (the order of the Earth's mass in kilograms), while $(\lg n)^6<50\,000$ (decimal logarithm)?
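For anyone who wants a numerical sanity check (my own addition, not from the thread), comparing logarithms of the functions at a huge n reproduces the corrected order:

import math

n = 2.0 ** 200                      # large enough for the asymptotics to kick in
lg = math.log2(n)
logs = {                            # natural log of each function at this n
    "1^n":       0.0,
    "lg n":      math.log(lg),
    "(lg n)^6":  6 * math.log(lg),
    "n^(1/2)":   0.5 * math.log(n),
    "lg(n^n)":   math.log(n * lg),  # lg(n^n) = n lg n
    "5n^3+n^2":  math.log(5) + 3 * math.log(n),  # dominant term only
    "n^4":       4 * math.log(n),
    "(1.001)^n": n * math.log(1.001),
    "5^n":       n * math.log(5),
}
print(sorted(logs, key=logs.get))   # lowest to highest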
|
{"url":"http://mathhelpforum.com/discrete-math/135021-classes-arrange-functions.html","timestamp":"2014-04-21T11:17:38Z","content_type":null,"content_length":"34282","record_id":"<urn:uuid:e4825a93-c675-4a18-9479-d5ea8656ef9c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Avoiding lift with Monad Transformers
I have a problem that fits very well using a stack of MTs (or even one MT) over IO. Everything is good except that using lift before every action is terribly annoying! I suspect there's really
nothing to do about this, but I thought I'd ask anyways.
I am aware of lifting entire blocks, but what if the code is really of mixed types? Would it not be nice if GHC throw in some syntactic sugar (for example, <-$ = <- lift)?
haskell monads monad-transformers
2 Answers
For all the standard mtl monads, you don't need lift at all. get, put, ask, tell — they all work in any monad with the right transformer somewhere in the stack. The missing piece
is IO, and even there liftIO lifts an arbitrary IO action down an arbitrary number of layers.
This is done with typeclasses for each "effect" on offer: for example, MonadState provides get and put. If you want to create your own newtype wrapper around a transformer stack,
you can do deriving (..., MonadState MyState, ...) with the GeneralizedNewtypeDeriving extension, or roll your own instance:
instance MonadState MyState MyMonad where
    get = MyMonad get
    put s = MyMonad (put s)
You can use this to selectively expose or hide components of your combined transformer, by defining some instances and not others.
(You can easily extend this approach to all-new monadic effects you define yourself, by defining your own typeclass and providing boilerplate instances for the standard
transformers, but all-new monads are rare; most of the time, you'll get by simply composing the standard set offered by mtl.)
Oh I think I feel stupid, you mentioned that in one of your previous answers, I could not understand it at the time. Now, I do thanks! – aelguindy Jan 29 '12 at 16:51
You can make your functions monad-agnostic by using typeclasses instead of concrete monad stacks.
Let's say that you have this function, for example:
bangMe :: State String ()
bangMe = do
str <- get
put $ str ++ "!"
-- or just modify (++"!")
Of course, you realize that it works as a transformer as well, so one could write:
bangMe :: Monad m => StateT String m ()
However, if you have a function that uses a different stack, let's say ReaderT [String] (StateT String IO) () or whatever, you'll have to use the dreaded lift function! So how is that avoided?
The trick is to make the function signature even more generic, so that it says that the State monad can appear anywhere in the monad stack. This is done like this:
bangMe :: MonadState String m => m ()
This forces m to be a monad that supports state (virtually) anywhere in the monad stack, and the function will thus work without lifting for any such stack.
There's one problem, though; since IO isn't part of the mtl, it doesn't have a transformer (e.g. IOT) nor a handy type class per default. So what should you do when you want to lift IO
actions arbitrarily?
To the rescue comes MonadIO! It behaves almost identically to MonadState, MonadReader etc, the only difference being that it has a slightly different lifting mechanism. It works like
this: you can take any IO action, and use liftIO to turn it into a monad agnostic version. So:
action :: IO ()
liftIO action :: MonadIO m => m ()
By transforming all of the monadic actions you wish to use in this way, you can intertwine monads as much as you want without any tedious lifting.
Thanks for the detailed answer! Beaten in timing by ehird though ;) – aelguindy Jan 29 '12 at 16:52
2 Me and ehird provide somewhat different solutions to this problem. It might be worth reading both of the responses to understand the alternatives you have :) – dflemstr Jan 29 '12
at 16:54
|
{"url":"http://stackoverflow.com/questions/9054731/avoiding-lift-with-monad-transformers","timestamp":"2014-04-18T15:39:28Z","content_type":null,"content_length":"72626","record_id":"<urn:uuid:92a17e6f-4378-45ac-8e0a-7381614a7944>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Human hearing beats FFT - Hydrogenaudio Forums
EST is a new transform that can explain the results of the article.
Fourier-related transforms, like FFT, are just one way to find frequencies, and clearly not the best possible.
EST derives frequencies from samples and is unrelated to Fourier/FFT.
The process of EST is deterministic, does not use non-linear equations, and can handle noise.
In the ideal case of a noiseless signal composed of n sinusoids, the frequencies, amplitudes and phases are precisely recovered from 3n
equally spaced real samples.
A noisy signal will require more samples, depending on noise level.
Other than the minimum for the ideal case, accuracy does not depend on the number of samples (time). The additional samples for a noisy signal
are needed to handle noise.
EST can also transform samples into increasing/decreasing sinusoids, which is a better way to model audio. In such a case, for a noiseless
signal, 4 samples are required per increasing/decreasing sinusoid, and more for a noisy signal.
EST can be evaluated using a demo program that implements it. There is also a paper that details the transform and its mathematical basis.
Those interested to see the paper and/or the demo program, can email me at gringya atsign gmail dot com.
QUOTE (Yaakov Gringeler @ Apr 1 2013, 17:03)
Fourier-related transforms, like FFT, are just one way to find frequencies, and clearly not the best possible.
Which, of course, depends entirely on your definition of "Frequency", something that itself is trickier than some seem to realize.
EST derives frequencies from samples and is unrelated to Fourier/FFT.
What does "EST" stand for, in the first place. Does it use a complex exponential or a representation of a complex exponential?
The process of EST is deterministic, does not use non-linear equations, and can handle noise.
Which is true of the Fourier Transform, as well.
In the ideal case of a noiseless signal composed of n sinusoids, the frequencies, amplitudes and phases are precisely recovered from 3n
equally spaced real samples.
Sounds pretty good. What's the basis set you're using? Sounds a lot like a * sin (b *t +c) where a,b,c are the 3 samples. Not sure what "equally spaced" means here, unless you're referring to the
fact you can characterize a sine wave with 3 non-degenerate points.
A noisy signal will require more samples, depending on noise level.
No surprise.
Other than the minimum for the ideal case, accuracy does not depend on the number of samples (time). The additional samples for a noisy signal
are needed to handle noise.
EST can also transform samples into increasing/decreasing sinusoids, which is a better way to model audio. In such a case, for a noiseless
signal, 4 samples are required per increasing/decreasing sinusoid, and more for a noisy signal.
So it's Laplace-based instead of Fourier based, then?
Instead of bombarding us with a bunch of not-very-specific qualities, why not just tell us what the basis set is, and how the analysis works?
I am aware of approximately infinite (well, literally infinite but obviously I haven't generated them all!) numbers of basis sets, many of which this could describe.
Yaakov, also check out the Reassigned spectrogram mode in iZotope RX. It “beats FFT” in terms of time and frequency resolution: it can precisely localize impulsive events in time and precisely
display frequencies of harmonics, assuming that they do not overlap in FFT spectrum.
EST stands for Exponential Sum Transform and it uses complex exponentials.
The basis is sigma(c*b^t) where b and c are non-zero complex numbers and the set of b is distinct. If all b are on the unit circle, then it is simply a spectrum.
When all b are on the unit circle and the samples are real, this becomes sigma(a*cos(b*t+c))
The samples must be equally spaced, not just non-degenerate.
It clearly looks more like Laplace than Fourier, but a specific relation, if exists, is not known to me.
As for describing the analysis, I offered to send the detailed paper. Do you prefer an informal description?
I think a lot of us here would be interested in a formal description, myself included. I think from what you've just said that we'll get it puzzled out though.
QUOTE (Canar @ Apr 3 2013, 05:27)
I think a lot of us here would be interested in a formal description, myself included. I think from what you've just said that we'll get it puzzled out though.
If I understand you correctly, you prefer a formal description of the process, and only that.
If I may guess, I think he means that this site has a significant number of users who would appreciate detailed descriptions. However, that is not to stop you from providing less technical
information (i.e. ‘layman’s terms’) if you want to; there are probably other users who would like that, too.
I think I could very well use a formula or two ... point seven eighteen twentyeight ...
QUOTE (Yaakov Gringeler @ Apr 3 2013, 02:42)
As for describing the analysis, I offered to send the detailed paper. Do you prefer an informal description?
I think I just got one that was a bit too rough
The following link:
is to a short document that describes the EST process for real noiseless samples.
Hm. Define "noiseless". Most instruments have a chaotic part of their performance that in fact is noiselike in that it does not repeat, is not entirely stationary, depends on technique, and so on.
So, I'm not quite sure I know what you mean by noiseless.
The paper described the mathematical basis of EST, which uses the ideal case of perfect increasing/decreasing sinusoids.
For realistic data, EST uses different processes, that expect noise.
For audio, the EST process is as follows.
1. Find linear prediction coefficients, preferably using the covariance method and not the auto-correlation method.
2. Create the linear prediction polynomial.
3. Find the roots of the linear prediction polynomial to establish the basis set of an exponential sum function, as described in the paper.
4. Use the samples and the basis set to find the coefficients of the function.
The key point is that linear prediction coefficients and an exponential sum function are equivalent, with the exponential sum function having the distinct advantage of being an analytic function
with a useful structure. The mathematical basis proves this equivalence.
Due to the equivalence, an exponential sum function models an audio signal with the same quality as linear prediction.
You may note that the best lossless audio compressors, like OptimFROG, use linear prediction. This is a strong indication of the power of linear prediction to model audio.
Since EST generates an analytic function, it is suitable for lossy audio compression, as well as other audio applications.
Once EST generated an exponential sum function, you can do the following:
Identify noise elements, using frequency and/or amplitude, and remove them.
Identify inaudible elements, and remove them.
Quantize the coefficients.
Resample the audio signal, both sample rate and sample depth.
And various other things.
Unlike Fourier related methods, which use a predefined basis, EST uses a basis derived from the data.
In short, EST for audio combines the flexibility and usefulness of an analytic function with the modeling power of linear prediction.
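To make the linear-prediction pipeline above concrete, here is a minimal NumPy sketch of the closely related classical Prony approach (my own illustration, not the author's EST code; the test signal and parameter values are made up):

import numpy as np

def prony(x, order, fs=1.0):
    # Steps 1-2: covariance-method least squares for the prediction coefficients.
    N = len(x)
    A = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(A, x[order:N], rcond=None)
    # Step 3: roots of the prediction polynomial give the basis exponentials.
    roots = np.roots(np.concatenate(([1.0], -a)))
    freqs = np.angle(roots) * fs / (2 * np.pi)   # Hz
    decay = np.log(np.abs(roots)) * fs           # growth/decay rates
    return freqs, decay

# Two noiseless decaying sinusoids: 4 exponentials, so order = 4 recovers them.
fs = 8000
t = np.arange(200) / fs
x = np.exp(-30 * t) * np.sin(2 * np.pi * 440 * t) \
  + 0.5 * np.exp(-10 * t) * np.sin(2 * np.pi * 1000 * t)
f, d = prony(x, order=4, fs=fs)
print(sorted(int(round(abs(v))) for v in f))     # approximately [440, 440, 1000, 1000]

Step 4 of the posted process, solving a second linear system for the coefficients of the exponential sum, is omitted here for brevity.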
QUOTE (Yaakov Gringeler @ Apr 11 2013, 11:33)
Unlike Fourier related methods, which use a predefined basis, EST uses a basis derived from the data.
In short, EST for audio combines the flexibility and usefulness of an analytic function with the modeling power of linear prediction.
Try applying EST to the first 30 seconds of the track "We Shall Be Happy" by Ry Cooder off the album titled "Jazz". Let me know how big your covariance matrix is, too, ok?
QUOTE (Woodinville @ Apr 11 2013, 20:36)
QUOTE (Yaakov Gringeler @ Apr 11 2013, 11:33)
Unlike Fourier related methods, which use a predefined basis, EST uses a basis derived from the data.
In short, EST for audio combines the flexibility and usefulness of an analytic function with the modeling power of linear prediction.
Try applying EST to the first 30 seconds of the track "We Shall Be Happy" by Ry Cooder off the album titled "Jazz". Let me know how big your covariance matrix is, too, ok?
In a practical implementation the samples will be broken into blocks and there will be a chosen matrix size for that block size.
The size of the matrix and the block size will determine accuracy and an accuracy-speed trade-off.
This is also the way it is done when using linear prediction for lossless audio compression or for speech compression. The difference is that EST returns an analytic function.
30 seconds of audio will therefore be broken into many smaller blocks, and not treated as a single block.
QUOTE (Yaakov Gringeler @ Apr 11 2013, 13:32)
QUOTE (Woodinville @ Apr 11 2013, 20:36)
QUOTE (Yaakov Gringeler @ Apr 11 2013, 11:33)
Unlike Fourier related methods, which use a predefined basis, EST uses a basis derived from the data.
In short, EST for audio combines the flexibility and usefulness of an analytic function with the modeling power of linear prediction.
Try applying EST to the first 30 seconds of the track "We Shall Be Happy" by Ry Cooder off the album titled "Jazz". Let me know how big your covariance matrix is, too, ok?
In a practical implementation the samples will be broken into blocks and there will be a chosen matrix size for that block size.
The size of the matrix and the block size will determine accuracy and an accuracy-speed trade-off.
This is also the way it is done when using linear prediction for lossless audio compression or for speech compression. The difference is that EST returns an analytic function.
30 senconds of audio will therefore be broken into many smaller blocks, and not treated as a single block.
I do know how coders work, so try your EST basis on We Shall Be Happy and get back to me, ok? And tell me how many basis functions you need for that one, too. And how many are orthogonal. And then
how many of those you have to code.
Over 10 years ago, for my master thesis, I wrote an algorithm that determines nearly exact frequency values from an FFT transform - it can find any frequency as long as they are far enough away from
each other and constant in tone and level.
The method is pretty simple:
1. Create an FFT using a window that's a lot bigger than the block of audio that you use
2. Find the highest peak in the FFT domain. This is an estimation of the loudest frequency present.
3. Write down the found frequency, phase and amplitude
4. Generate an FFT based on the found freq, phase, amp (this can be optimized for speed, since it's only a single tone).
5. Subtract a small percentage of this (I found that 5-10% works well) from the original FFT from step 1.
6. Go back to step 2.
This gives you a whole lot of values; next you need to combine all the values that have approximately the same frequency. This can be done as follows:
- If a frequency is new (no data within 0.5 FFT bin size), this is a new frequency that we haven't seen before.
- Otherwise combine this new measurement with the measurement closest to it.
Tones that are 1 bin apart will not be found perfectly (frequency and amplitude might be very slightly wrong), but they still clearly show up as separate signals. Tones that are 2 or more bins apart
show up nearly perfectly.
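For readers who want to experiment, here is a rough NumPy sketch of steps 1-6 (my own reading of the description above; parameter names and values are mine, and the merging step that follows is omitted):

import numpy as np

def iterative_peaks(x, fs, n_iter=400, pad=8, step=0.08):
    n = len(x)
    nfft = pad * n                    # step 1: FFT much bigger than the block
    window = np.hanning(n)
    X = np.fft.rfft(x * window, nfft)
    freqs = np.fft.rfftfreq(nfft, 1 / fs)
    t = np.arange(n) / fs
    found = []
    for _ in range(n_iter):
        k = np.argmax(np.abs(X))      # step 2: highest remaining peak
        amp = 2 * np.abs(X[k]) / window.sum()
        phase = np.angle(X[k])
        found.append((freqs[k], amp, phase))              # step 3
        tone = amp * np.cos(2 * np.pi * freqs[k] * t + phase)
        X -= step * np.fft.rfft(tone * window, nfft)      # steps 4-5: subtract a fraction
    return found                      # step 6 is the loop itself

fs = 8000
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 440.0 * t) + 0.3 * np.sin(2 * np.pi * 1234.5 * t + 1.0)
vals = iterative_peaks(x, fs)         # entries cluster near 440 and 1234.5 Hz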
Test tones:
Real signal (voice):
Signal and its peak data:
Several months ago, in posts in this topic, I provided some information about my transform, EST.
I now have a document with better explanations, actual results, and charts.
The link to the document is:
Please note that viewing the document online will only display the text, and not the charts. It has to be downloaded to be fully viewed.
As a reminder, this topic followed an article that showed that human hearing performance in finding frequencies exceeds the Fourier uncertainty limit.
EST finds frequencies using a deterministic algorithm unrelated to Fourier transforms and not bound by the Fourier uncertainty principle.
This shows that the results of the article are not surprising.
|
{"url":"http://www.hydrogenaudio.org/forums/index.php?s=0b24980537001e338f54fe25b57870c9&showtopic=99371&st=50&p=829874","timestamp":"2014-04-21T05:18:53Z","content_type":null,"content_length":"111231","record_id":"<urn:uuid:23fa15b8-9c54-487e-8020-1f2dd6a18ae8>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sensitivity of the inverse problem to size-class selection.
ASA 130th Meeting - St. Louis, MO - 1995 Nov 27 .. Dec 01
2pAO6. Sensitivity of the inverse problem to size-class selection.
J. Michael Jech
Great Lakes Ctr., Buffalo State College, 1300 Elmwood Ave., Buffalo, NY 14222-1095
John K. Horne
Buffalo State College, Buffalo, NY 14222-1095
Denise M. Schael
Univ. of Zululand, KwaZulu/Natal, South Africa
Fisheries sonar systems typically operate at discrete frequencies between 38 and 420 kHz. Can length-frequency distributions of aggregated fish be accurately estimated using available frequencies and
the inverse problem? The inverse problem requires measured fish lengths and realistic scattering models. The size distribution of threadfin shad (Dorosoma petenense) in Lake Norman was estimated
using 120-, 200-, and 420-kHz data, and a recently developed scattering model [Jech et al., J. Acoust. Soc. Am. (in press 1995)]. Size distribution estimates were compared to length frequency
measurements from purse seine catches. Fits of probability density functions (PDF's) using the inversion technique to those using length frequency measures were sensitive to the choice of fish size
classes. Preliminary results indicate that estimation of length frequencies using multi- frequency data and the inverse problem appears dependent on the shape of measured length-frequency
distributions. [Work supported by NOAA Coastal Ocean Program (NA16RGO 492-01) and NSF (OCE-9115740).]
|
{"url":"http://www.auditory.org/asamtgs/asa95stl/2pAO/2pAO6.html","timestamp":"2014-04-19T15:14:10Z","content_type":null,"content_length":"1915","record_id":"<urn:uuid:d8455791-3b7b-433c-9aea-d6dab4ad1e1e>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Timbuk: Tools for Reachability Analysis and Tree Automata Calculations
What is Timbuk?
Timbuk is a collection of tools for achieving proofs of reachability over Term Rewriting Systems and for manipulating Tree Automata (bottom-up non-deterministic finite tree automata)
Timbuk and reachability analysis can be used for program verification. For instance, Timbuk is currently used to verify Cryptographic Protocols (see some papers and the examples of the Timbuk distribution).
Timbuk 3 is a fully new version of the tree automata completion engine used for reachability analysis. Older Timbuk distributions (2.2) provide three standalone tools and a bunch of Objective Caml
functions for basic manipulation on Tree Automata, alphabets, terms, Term Rewriting Systems, etc. The available tools are:
• Timbuk: a tree automata completion engine for reachability analysis over Term Rewriting Systems. See distributions below.
• Tree Automata Completion Checker: this checker certifies that a tree automaton, obtained by tree automata completion, recognizes a superset of reachable terms. It consists of Ocaml functions
extracted from a Coq specification of tree automata completion. The principle is described in the IJCAR paper that can be found here; the code can be downloaded here.
• TimbukCEGAR: This is a version of Timbuk taking advantage of CounterExample Guided Abstraction Refinement (a.k.a. CEGAR) for finding counterexamples or automatically refining approximations.
TimbukCEGAR alpha is available in source form. Note that installing Buddy-2.4 BDD library is necessary before compiling TimbukCEGAR. The archive also contains several simple test examples as well
as a Term Rewriting System produced from a Java program using Copster.
• TimbukLTA: This is a version of Timbuk using Lattice Tree Automata to represent (efficiently) built-in values such as integers, strings, etc. This version is a first instance and deals with
integer arithmetic. TimbukLTA is available in source form. Details can be found in this paper.
• TimbukStrat (alpha) is available! This is an (alpha) version of Timbuk computing approximation of reachable terms for the innermost rewriting strategy. TimbukStrat is available in source form.
Note that installing Buddy-2.4 BDD library is necessary before compiling TimbukStrat.The archive also contains several simple test examples. Details can be found in this report.
• Taml (Version 2.2 only): an Ocaml toplevel with basic functions on tree automata (a small illustrative sketch of tree automaton membership appears after this list):
□ boolean operations: intersection, union, inversion
□ emptyness decision, inclusion decision
□ cleaning, renaming
□ determinisation
□ matching of terms over tree automata
□ producing the automaton recognizing terms irreducible w.r.t. a Term Rewriting System
□ normalization of transitions
□ parsing, pretty printing
□ reading and writing automata to disk
□ and some more
• Tabi (Version 2.2 only): a Tree Automata Browsing Interface for visual automata inspection. This tool lets you build interactively and graphically some representatives of the language
recognized by a tree automaton. Like this
Note that if you plan to use Tabi, you need a working Tcl/Tk library on your system (see README file for details)
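As background for readers new to the area, here is a minimal Python sketch of bottom-up tree automaton membership, the core notion behind the operations listed for Taml above (an illustration only, with a made-up example automaton; it is not Timbuk or Taml code):

from itertools import product

def runs(term, delta):
    # Bottom-up: compute the set of states reachable at the root of term.
    sym, kids = term
    child_sets = [runs(k, delta) for k in kids]
    states = set()
    for combo in product(*child_sets):    # nondeterministic choice per child
        states |= delta.get((sym, combo), set())
    return states

# Transitions for lists of zeros: nil() -> qL, zero() -> q0, cons(q0, qL) -> qL.
delta = {("nil", ()): {"qL"}, ("zero", ()): {"q0"},
         ("cons", ("q0", "qL")): {"qL"}}
term = ("cons", [("zero", []), ("cons", [("zero", []), ("nil", [])])])
print("qL" in runs(term, delta))          # True: the term is recognized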
Timbuk 1.0 to 2.2 : Copyright (C) 2000-2003 Thomas Genet and Valérie Viet Triem Tong
Timbuk 3.0 to 3.1 : Copyright (C) 2009 Thomas Genet and Yohan Boichut
TimbukCEGAR : Copyright (C) 2011 Thomas Genet, Yohan Boichut and Benoît Boyer
TimbukLTA : Copyright (C) 2012 Thomas Genet, Valérie Murat and Tristan Le Gall
TimbukStrat : Copyright (C) 2013 Thomas Genet and Yann Salmon
Taml: Copyright (C) 2003 Thomas Genet
Checker: Copyright (C) 2009 Benoît Boyer
Tabi: Copyright (C) 2003 Thomas Genet and [Boinet Matthieu, Brouard Robert, Cudennec Loic, Durieux David, Gandia Sebastien, Gillet David, Halna Frederic, Le Gall Gilles, Le Nay Judicael, Le Roux
Luka, Mallah Mohamad-Tarek, Marchais Sebastien, Martin Morgane, Minier François, Stute Mathieu] -- aavisu project team for french "maitrise" level (4th University year) 2002-2003 at IFSIC/Universite
de Rennes 1.
All these tools are freely available, under the terms of the GNU LIBRARY GENERAL PUBLIC LICENSE.
• Version 3.1 is now available in source form here. This version no longer includes the tree automata library but focuses on reachability analysis and equational approximations. If you plan to
perform basic tree automata manipulation, use Taml or Tabi, you need version 2.2 instead. Old version 3.0 is here.
• Version 2.2 is distributed in two different forms:
□ Ocaml source: Timbuk 2.2 for OCaml 3.11 (or higher), sent by Ondreij Lengal. There is also a source distribution for older versions of OCaml. Those distributions include all the Ocaml code,
Makefile for Unix, some (dirty) Makefile .bat scripts for Windows, documentation and examples.
□ A binary distribution (also with documentation and examples) for Windows in two forms:
• The Tree Automata Completion Checker is now available. This checker certifies that a tree automaton, obtained by tree automata completion, recognizes a superset of reachable terms. It consists of
Ocaml functions extracted from a Coq specification of tree automata completion. The principle is described in the IJCAR paper that can be found here; the code can be downloaded here.
• TimbukCEGAR alpha is a prototype version available in source form. Note that installing Buddy-2.4 BDD library is necessary before compiling TimbukCEGAR.
• TimbukLTA alpha is a prototype version available in source form. Details can be found in this report.
Papers about Timbuk
[GLLGM13] (pdf)
T. Genet, T. Le Gall, A. Legay and V. Murat. A Completion Algorithm for Lattice Tree Automata. In Proceedings of CIAA'13, volume 7982 of Lecture Notes in Computer Science, pages 134-145.
Springer-Verlag, 2013.
[BBGL12] (pdf)
Y. Boichut, B. Boyer, T. Genet and A. Legay. Equational Abstraction Refinement for Certified Tree Regular Model Checking. In Proceedings of ICFEM'12, volume 7635 of Lecture Notes in Computer
Science, pages 299-315. Springer-Verlag, 2012.
[GR10] (pdf)
T. Genet and V. Rusu. Equational Tree Automata Completion. In Journal of Symbolic Computation. Volume 45. Elsevier, 2010.
[Gen09] (pdf)
Thomas Genet. Reachability analysis of rewriting for software verification. Habilitation à diriger des recherches, Université de Rennes 1, 2009.
[BGJ08] (pdf)
B. Boyer, T. Genet and T. Jensen. Certifying a Tree Automata Completion Checker. In Proceedings of IJCAR'08, volume 5195 of Lecture Notes in Computer Science. Springer-Verlag, 2008.
[FGT03] (gzipped or draft in PDF format)
Guillaume Feuillade, Thomas Genet and Valérie Viet Triem Tong. Reachability Analysis of Term Rewriting Systems. Technical Report RR-4970, INRIA, 2003. (Or the corrected version, to be published in the Journal of Automated Reasoning, 2004.)
[GT01] (gzipped or not)
T. Genet and V. Viet Triem Tong. Reachability Analysis of Term Rewriting Systems with Timbuk. In Proceedings 8th International Conference on Logic for Programming, Artificial Intelligence, and
Reasoning, Havana (Cuba), volume 2250 of Lecture Notes in Artificial Intelligence. Springer-Verlag, 2001.
[Gen98a] ( zipped or not)
T. Genet Decidable approximations of sets of descendants and sets of normal forms. In T. Nipkow, editor, Proceedings 9th International Conference on Rewriting Techniques and Applications, Tsukuba
(Japan), volume 1379 of Lecture Notes in Computer Science, pages 151-165. Springer-Verlag, 1998.
Related papers
[BGJL07] (pdf)
Y. Boichut, T. Genet, T. Jensen and L. Le Roux. Rewriting Approximations for Fast Prototyping of Static Analyzers. In Proceedings of 18th International Conference on Rewriting Techniques and
Applications, volume 4533 of Lecture Notes in Computer Science. Springer-Verlag, 2007.
[GTTT03] (gzipped)
T. Genet, Y.-M. Tang-Talpin, and V. Viet Triem Tong. Verification of copy-protection cryptographic protocol using approximations of term rewriting systems. In Proc. of WITS'03, Workshop on Issues
in the Theory of Security, 2003.
[OCKS02] (gzipped)
F. Oehl, G. Cécé, O. Kouchnarenko, and D. Sinclair. Automatic Approximation for the Verification of Cryptographic Protocols. In Proceedings of Formal Aspects of Security (FASec'02), to appear in
Lecture Notes in Computer Science, Springer-Verlag.
[OS02] (gzipped)
F. Oehl and D. Sinclair. Combining ISABELLE and Timbuk for Cryptographic Protocol Verification. In Proceedings of Workshop Sécurité des Communications sur Internet (SECI 2002).
[GK00] (gzipped or not)
T. Genet and F. Klay. Rewriting for Cryptographic Protocol Verification. In Proceedings 17th International Conference on Automated Deduction, Pittsburgh (Pen., USA), volume 1831 of Lecture Notes
in Artificial Intelligence. Springer-Verlag, 2000.
[Gen98b] ( zipped, gzipped or not)
T. Genet. Contraintes d'ordre et automates d'arbres pour les preuves de terminaison. Thèse de Doctorat d'Université, Université Henri Poincaré - Nancy 1, 1998.
Related links
The Tree Automata Techniques and Application (TATA) home page.
Timbuk is used as a back-end prover in TA4SP -- a cryptographic protocol verification tool -- developped by Yohan Boichut.
Bug reports and comments
Do not hesitate to report any bug in the code. Please report comments, improvements and bugs to Thomas.Genet@irisa.fr
Last Modified: 2012/03/7
|
{"url":"http://www.irisa.fr/celtique/genet/timbuk/","timestamp":"2014-04-17T18:27:30Z","content_type":null,"content_length":"21884","record_id":"<urn:uuid:13e7d1b8-d19f-41a8-8b41-cae9f5601f9c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The distance across a circle
Name: Douglas
Who is asking: Other Level: All
If you know how far around a circle is (say earth) 25000 miles how do you calculate the distance across?
Hi Douglas,
This follows from a wonderful property of circles. No matter how large or small a circle you have, if you divide the distance around (the circumference) by the distance across (the diameter), the answer is always pi:
circumference / diameter = pi
So diameter = circumference / pi = 25000 / pi, which is approximately 7958 miles.
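For a quick check of the arithmetic (an illustration only, in Python, using the circumference from the question):

import math

circumference = 25000            # miles around the earth (given above)
diameter = circumference / math.pi
print(round(diameter))           # 7958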
Go to Math Central
|
{"url":"http://mathcentral.uregina.ca/QQ/database/QQ.09.01/douglas1.html","timestamp":"2014-04-18T23:15:57Z","content_type":null,"content_length":"2257","record_id":"<urn:uuid:cbad080d-1180-4047-a3a3-5de49d9cbbf7>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-user] Derivative() usage?
Robert Kern rkern at ucsd.edu
Thu Oct 21 12:15:03 CDT 2004
Francesc Alted wrote:
> Hi,
> I'm trying to figure out how to compute the derivatives of a function with
> scipy, but the documentation is a bit terse for me:
> def derivative(func,x0,dx=1.0,n=1,args=(),order=3):
> """Given a function, use an N-point central differenece
> formula with spacing dx to compute the nth derivative at
> x0, where N is the value of order and must be odd.
> Warning: Decreasing the step size too small can result in
> round-off error.
> """
> I would like to compute a derivative of an arbitrary order, but by reading
> the docs, I'm not sure what the n and order parameters exactly means.
> Someone smarter than me can help me?
'n' as in d^n/dx^n .
Somewhat confusingly, 'N' is being used in the docstring as a synonym
for 'order' and is the number of discrete points used to evaluate the
numerical derivative. I'm going to fix that.
For example, when n=2, and order=3, one is computing the second central
derivative using 3 points [x0-dx, x0, x0+dx].
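A quick usage illustration (my own example, not from the original thread; in later SciPy releases this function lived in scipy.misc and has since been deprecated):

from scipy.misc import derivative   # location in older SciPy releases
import numpy as np

# Second derivative of sin at x0 = 1.0 via the 3-point central formula;
# d^2/dx^2 sin(x) = -sin(x), so we expect about -0.8415.
print(derivative(np.sin, 1.0, dx=1e-3, n=2, order=3))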
Robert Kern
rkern at ucsd.edu
"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2004-October/003508.html","timestamp":"2014-04-18T23:35:42Z","content_type":null,"content_length":"3851","record_id":"<urn:uuid:026c1691-6005-40d8-a70e-a8fcdb377f17>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the definition of constant
a number, value, or object that has a fixed magnitude, physically or abstractly, as a part of a specific operation or discussion. In mathematics the term refers to a quantity (often represented by a
symbol, e.g., pi, the ratio of a circle's circumference to its diameter) that does not change in a certain discussion or operation, or to a variable that can assume only one value. In logic it is a
term with an invariant denotation (any symbol with a fixed designation, such as a connective or quantifier).
|
{"url":"http://dictionary.reference.com/browse/constant","timestamp":"2014-04-19T07:46:21Z","content_type":null,"content_length":"118276","record_id":"<urn:uuid:c75788ee-9b26-4e90-b2af-07a9907ca30d>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Roots of the term "gammatone" ("Richard F. Lyon" )
Subject: Re: Roots of the term "gammatone"
From: "Richard F. Lyon" <DickLyon@xxxxxxxx>
Date: Wed, 28 Mar 2012 12:22:41 -0700
And while we're on coincident roots ...
I looked a long time for uses of gammatone-like
filters earlier and outside the hearing field.
It seemed like such a simple structure (cascade
of N resonators) and simple parameterization was
sure to have been investigated before, and the
connection to gamma distributions or gamma
functions would likely have been noticed.
I eventually found some. Back in the day, before
the use of impulse response was standard, people
used to analyze filters by their step response
(like when Karl Kupfmuller first described the
ideal lowpass filter by its step response, the
"integralsinus", as opposed to its modern
description by its sinc function impulse response).
In 1945, Eaglesfield observed that the envelope
step response of the cascade of N identical
resonators is the "incomplete gamma function",
which is the integral of the gamma distribution.
So he shows the gamma distribution t^(N-1)
exp(-t/tau) as an integrand; this envelope step
response is implicitly multiplying a tone. So he
had a gammatone filter, just not the name.
Tamas, I'll send you a copy. Also a paper by
Tucker who investigated such filters, 1946, and
quoted Eaglesfield on the "incomplete gamma
function"; he also showed a fascinating
alternative way to describe the envelope step
response in an N-term closed form without the incomplete gamma function.
the refs:
author = {C. C. Eaglesfield},
title = {Carrier Frequency Amplifiers: The Unit
Step Response of Amplifiers with Single and
Double Circuits},
year = {1945},
journal = {Wireless Engineer},
volume = {22},
pages = {523--532}
author = {D. G. Tucker},
title = {Transient Response of Tuned-Circuit Cascades},
year = {1946},
journal = {Wireless Engineer},
volume = {23},
pages = {250--258}
At 8:36 AM -0700 3/28/12, Richard F. Lyon wrote:
>I recounted some of those mis-attributions in
>and concluded:
> Aertsen and Johannesma [AJ80] appear to have coined the catchy name;
> referring to the envelope, they said:
> The form m(t) appears both as the integrand in the definition
> of the Gamma function $\Gamma(g)$ and as the density function
> of the Gamma distribution, therefore we propose to use ... the
> term "Gamma-tone" or "$\gamma$-tone."
> ... The non-hyphenated "gammatone," as an adjective modifying
> "filter," appears to be due to Patterson et al. [P88].
>At 10:05 AM +0200 3/28/12, Tamas Harczos wrote:
>>Dear List,
>>I am looking for the first time use of the term "gammatone". Flanagan
>>('65), Johannesma and de Boer ('72,'75) did not use that term. Patterson
>>et al. write in their '88 APU report "An efficient auditory filterbank
>>based on the gammatone function" that "Johannesma (1972) used this
>>function to summarize revcor data, although he did not refer to it as
>>the gammatone function, and the function was not fitted to revcor data.
>>The name appears to have been adopted by de Boer and de Jongh (1978)".
>>However, I am not able to find the term "gammatone" in the de Boer and
>>de Jongh paper "On cochlear encoding: Potentialities and limitations of
>>the reverse-correlation technique" JASA 63(1), 1978.
>>Any ideas?
>>Dipl.-Ing. Tamás Harczos
>>PhD Student
>>Institute for Media Technology
>>Faculty of Electr. Eng. and Inf. Techn.
>>Ilmenau University of Technology
>>Tel.: +49 3677 467 225
>>Fax.: +49 3677 467 4225
>>E-Mail: tamas.harczos@xxxxxxxx
|
{"url":"http://www.auditory.org/postings/2012/271.html","timestamp":"2014-04-18T10:36:15Z","content_type":null,"content_length":"5124","record_id":"<urn:uuid:2c8ac27d-7012-4ce9-ac7e-cc617a85a024>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: September 2007 [00867]
Re: Interval, Range of a function
• To: mathgroup at smc.vnet.net
• Subject: [mg81624] Re: [mg81612] Interval, Range of a function
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Sat, 29 Sep 2007 06:26:22 -0400 (EDT)
• References: <200709290629.CAA07794@smc.vnet.net>
On 29 Sep 2007, at 15:29, janos wrote:
> If you ask Mathematica what is the range of Sin restricted to e.g.
> [0,Pi/4]
> you will get a correct answer.
> This is not always the case, however. Let
> f[x_]:=5Exp[-2x]-3Exp[-x]
> f[Interval[{-Infinity,+Infinity}]]
> Then the answer Interval[{-Infinity,+Infinity}] is different from what
> you see on the figure of the function.
> Is there a bug here?
> Thanks, Janos
This is not a bug but just the way "Interval arithmetic" works.
Applying a function to an interval does not give you the range of the
function: it gives you an interval which contains the range of
the function. The purpose of interval arithmetic is error estimation,
and it tends to "overestimate". This is, indeed, its weakest point,
but that is how it works. If you ever get an answer that does not
contain the range of the function, then you will certainly have found
a bug.
There are other methods of error estimation that do not suffer from
this problem to such an extent (so-called "affine arithmetic" is one)
but they all overestimate (the principle that "it is better to be
safe than sorry")
To see how this overestimation comes about note that if:
g[x_] := 5/E^(2*x); h[x_] := 3/E^x;
Through[{g, h}[Interval[{-Infinity, +Infinity}]]]
{Interval[{0, Infinity}], Interval[{0, Infinity}]}
which indeed give the correct ranges. But to compute the image of the
difference Mathematica simply computes:
Subtract @@ %
Interval[{-Infinity, Infinity}]
which is, of course, much larger than the true range.
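The same loss of correlation can be demonstrated with a few lines of interval arithmetic in any language; here is a minimal Python sketch (mine, not Mathematica):

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __sub__(self, other):
        # Interval subtraction ignores any correlation between the operands.
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __repr__(self):
        return "[%s, %s]" % (self.lo, self.hi)

inf = float("inf")
g = Interval(0.0, inf)   # range of 5 Exp[-2x] over all reals
h = Interval(0.0, inf)   # range of 3 Exp[-x] over all reals
print(g - h)             # [-inf, inf], although the true range of f is [-9/20, inf)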
Andrzej Kozlowski
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2007/Sep/msg00867.html","timestamp":"2014-04-19T02:13:39Z","content_type":null,"content_length":"35637","record_id":"<urn:uuid:d5a3e1bb-5adf-41f8-9d99-76765d6d647b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Shannon Capacity of a Graph, 1
Posted by Tom Leinster
You’ve probably had the experience where you spell your name or address down the phone, and the person on the other end mistakes one letter for another that sounds similar: P sounds like B, which is
easily mistaken for V, and so on. Under those circumstances, how can one most efficiently communicate without risk of error?
Shannon found the answer, but this isn’t part of his most famous work on communication down a noisy channel. There, the alphabet has no structure: any substitution of letters is as likely as any
other. Here, the alphabet has the structure of a graph.
Today, I’ll explain Shannon’s answer. Next time, I’ll explain how it’s connected to the concept of maximum diversity, and thereby to magnitude.
First, let’s work through the example shown. We have some message that we want to communicate down the phone, and for some reason, we have to encode it using just the letters P, B, V, D and T. But P
could be confused with B, B with V, V with D, D with T, and T with P. (I admit, T probably wouldn’t be confused with P, but I wanted to get a 5-cycle. So let’s go with it.) Our aim is to eliminate
all possibility of confusion.
One way to do this would be to only communicate using P and V — or any other pair of non-adjacent vertices on the graph. This would certainly work, but it seems a shame: we have five letters at our
disposal, and we’re not managing to do any better than if we had just two. In other words, we’re only communicating one bit of information per letter transmitted. That’s not very efficient.
There’s something smarter we can do. Let’s think not about single letters, but pairs of letters. How many non-confusable ordered pairs are there? Certainly, none of the pairs
(P, P), (P, V), (V, P), (V, V)
could be mistaken for any of the others. That’s still no improvement on a two-letter alphabet, though. But more sneakily, we could use
(P, P), (B, V), (V, T), (D, B), (T, D).
These are mutually distinguishable. To explain why, it will probably make life easier if I switch from letters to numbers, so that the graph becomes
and the five pairs listed are
(0, 0), (1, 2), (2, 4), (3, 1), (4, 3).
For instance, if I transmit (1, 2) then you might hear (1, 1), or (0, 3), or in fact any of nine possibilities; but since the only one of them that’s on the list is (1, 2), you’d know you’d misheard.
(In other words, you’d detect the error. You wouldn’t be able to correct it: for instance, (0, 3) could equally be a mishearing of (4, 3).)
What this means is that using two letters, we can transmit one of five possibilities. And that’s an improvement on binary. It’s as if we were using an alphabet with $\sqrt{5} = 2.23\ldots$ reliably
distinguishable letters.
Obviously, we could take this further and use blocks of $n$ letters rather than $2$, hoping for further savings in efficiency. In this particular example, it turns out that there’s nothing more to be
gained: the five-pair coding is the best there is. In the jargon, the ‘Shannon capacity’ of our graph is $\sqrt{5}$.
All I’ll do in the rest of the post is state the general definition of Shannon capacity. I’ll need to take a bit of a run-up.
For the purposes of this post, a graph is a finite, undirected, reflexive graph without multiple edges. ‘Reflexive’ means that every vertex has a loop on it (that is, an edge from it to itself).
Typically, graph theorists deal with irreflexive graphs, in which no vertex has a loop on it.
In a slightly comical way, reflexive and irreflexive graphs are in canonical one-to-one correspondence. (Simply adjoin or delete loops everywhere.) But there are two reasons why I want to buck the
convention and use reflexive graphs:
1. As in the example, the vertices of the graph are supposed to be thought of as letters, and an edge between two letters means that one could be heard as the other. On that basis, every vertex
should have a loop on it: if I say P, you might very well hear P!
2. The homomorphisms are different. A graph homomorphism is a map of the underlying vertex-set that preserves the edge-relation: if $x$ is adjacent to $y$ then $f(x)$ is adjacent to $f(y)$. If $G$
and $H$ are two irreflexive graphs, and $G_R$ and $H_R$ the corresponding reflexive graphs, then $Hom(G, H) \subseteq Hom(G_R, H_R)$, and in general it’s a proper subset. In other words, there is
a proper inclusion of subcategories
$(irreflexive graphs) \subset (reflexive graphs)$
which is bijective on objects. As we’ll see, products in the two categories are different, and that will matter.
Coming back to communication down a noisy phone line, a set of letters is ‘independent’ if none of them can be mistaken for any of the others. Formally, given a graph $G$, an independent set in $G$
is a subset $S$ of the vertices of $G$ such that if $s_1, s_2 \in S$ are adjacent then $s_1 = s_2$.
The independence number $\alpha(G)$ of a graph $G$ is the largest cardinality of an independent set in $G$. For example, when $G$ is the 5-cycle above, $\{P, V\}$ is an independent set and $\alpha(G)
= 2$. It’s the “effective size of the alphabet” if you’re taking things one letter at a time.
To upgrade to the superior, multi-letter approach, we need to think about tuples of vertices — in other words, products of graphs.
In graph theory, there are various products of interest, and there’s a sort of hilarious convention for the symbol used. It goes as follows. Let $I$ denote the graph o—o consisting of two vertices
joined by an edge, and suppose we have some kind of product of graphs. Then the convention is that the symbol used to denote that product will be a picture of the product of $I$ with itself. For
example, this is the case for the products $\Box$, $\times$, and $\boxtimes$.
The last one, $\boxtimes$, is called the strong product, and it’s what we want here. It’s simply the product in the category of reflexive graphs. Explicitly, the vertex-set of $G \boxtimes H$ is the
product of the vertex-sets of $G$ and $H$, and $(g_1, h_1)$ is adjacent to $(g_2, h_2)$ iff $g_1$ is adjacent to $g_2$ and $h_1$ is adjacent to $h_2$. You can check that $I \boxtimes I$ is the graph
that looks like $\boxtimes$.
(If we’d worked in the category of irreflexive graphs, we’d have got a different product: it’s the one denoted by $\times$.)
Let’s check that the strong product is really what we want for our application. Two vertices are supposed to be adjacent if one could be confused with the other. Put another way, two vertices are non
-adjacent if they can be reliably distinguished from each other. We can distinguish $(g_1, h_1)$ from $(g_2, h_2)$ if either we can distinguish $g_1$ from $g_2$or we can distinguish $h_1$ from $h_2$.
So, $(g_1, h_1)$ and $(g_2, h_2)$ should be non-adjacent if either $g_1$ and $g_2$ are non-adjacent or $h_1$ and $h_2$ are non-adjacent. And that’s just what happens in the strong product.
For instance, let $C_5$ be the 5-cycle shown above. Then the vertices
(0, 0), (1, 2), (2, 4), (3, 1), (4, 3)
of $C_5 \boxtimes C_5$ form an independent set of cardinality 5. It’s not hard to see that there’s no larger independent set in $C_5 \boxtimes C_5$, so
$\alpha(C_5 \boxtimes C_5) = 5$
—that is, $C_5 \boxtimes C_5$ has independence number $5$.
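That last claim is small enough to check by brute force. A self-contained sketch (mine; it runs in a few seconds):

    from itertools import combinations, product

    C5 = {i: {(i - 1) % 5, i, (i + 1) % 5} for i in range(5)}   # reflexive 5-cycle
    verts = list(product(range(5), repeat=2))                   # vertices of the strong square

    def adjacent(u, v):
        # Strong-product adjacency: adjacent in both coordinates.
        return v[0] in C5[u[0]] and v[1] in C5[u[1]]

    def independent(S):
        return all(not adjacent(u, v) for u, v in combinations(S, 2))

    assert independent([(0, 0), (1, 2), (2, 4), (3, 1), (4, 3)])    # a size-5 set exists
    assert not any(independent(S) for S in combinations(verts, 6))  # no size-6 set does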
For trivial reasons, if $S$ is an independent set in $G$ and $T$ is an independent set in $H$ then $S \times T$ is an independent set in $G \boxtimes H$. We saw this in the case $G = H = C_5$ and $S = T = \{P, V\}$. Hence
$\alpha(G \boxtimes H) \geq \alpha(G) \cdot \alpha(H).$
In particular,
$\alpha\bigl(G^{\boxtimes 2}\bigr) \geq \alpha(G)^2$
or equivalently,
$\alpha\bigl(G^{\boxtimes 2}\bigr)^{1/2} \geq \alpha(G).$
In our example, this says that $\alpha\bigl(C_5^{\boxtimes 2}\bigr)^{1/2} \geq 2$. In fact, we know that $\alpha\bigl(C_5^{\boxtimes 2}\bigr)^{1/2} = \sqrt{5} = 2.23\ldots$, which represents an
improvement: more economical communication.
The Shannon capacity measures how economically one can communicate without ambiguity, allowing the use of letter blocks of arbitrary size. Formally, the Shannon capacity of a graph $G$ is
$\Theta(G) := \sup_{k \geq 1} \alpha\bigl(G^{\boxtimes k}\bigr)^{1/k}.$
For instance, we saw that $\Theta(C_5) = \sqrt{5}$. This means that if you want to communicate using vertices of $C_5$, avoiding ambiguity, then you can do it as fast as if you were communicating
with an alphabet containing $\sqrt{5}$ completely distinguishable letters. You might call it the “effective number of vertices” in the graph… a phrase I’ll come back to next time.
Shannon also showed that
$\Theta(G) = \lim_{k \to \infty} \alpha\bigl(G^{\boxtimes k}\bigr)^{1/k}.$
I don’t know the proof. Also, I don’t know whether $\alpha\bigl(G^{\boxtimes k}\bigr)^{1/k}$ is actually increasing in $k$, which would of course imply this. (Update: Tobias Fritz answers both
questions in a comment below.)
Update A few minutes after posting this, I discovered a coincidence: the Shannon capacity of a graph was also the subject of a blog post by Kenneth Regan at Buffalo, just a month ago. He also
discusses the Lovász number of a graph, a quantity related to the Shannon capacity which I’d planned to mention next time.
The Shannon capacity is very hard to compute. Regan mentions that even the Shannon capacity of the 7-cycle $C_7$ is unknown!
Next time: How this connects to the concepts of maximum diversity and magnitude.
Posted at August 10, 2013 3:10 PM UTC
Re: The Shannon Capacity of a Graph, 1
Here is some more information; I’ve recently been looking at the Shannon capacity of graphs in the context of quantum contextuality, so I can say a little bit more.
The sequence $a_k:=\alpha(G^{\boxtimes k})^{1/k}$ is not monotonically increasing or non-decreasing in general. I think that $G=C_5$ itself is an example of this, where $a_3=\alpha(G^{\boxtimes 3})^{1/3}=10^{1/3}\lt\sqrt{5}$. Other surprising behaviors of such "independence sequences" have been found in a paper of Alon and Lubetzky.
That the independence sequence $a_k$ nevertheless converges is due to $\alpha(G^{\boxtimes(j+k)})\geq\alpha(G^{\boxtimes j})\alpha(G^{\boxtimes k})$, which holds since the cartesian product of an independent set in $G^{\boxtimes j}$ with an independent set in $G^{\boxtimes k}$ is an independent set in $G^{\boxtimes (j+k)}$. This inequality implies convergence of the sequence $\frac{\log \alpha(G^{\boxtimes k})}{k}=\log a_k$ by virtue of Fekete's lemma for superadditive sequences.
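For reference, the standard statement being invoked: if $b_{j+k} \geq b_j + b_k$ for all $j, k \geq 1$, then $\lim_{k \to \infty} b_k/k$ exists and equals $\sup_{k \geq 1} b_k/k$. Taking $b_k = \log \alpha(G^{\boxtimes k})$, the inequality above makes $(b_k)$ superadditive, so $\log a_k = b_k/k$ converges to $\sup_k \log a_k = \log \Theta(G)$; everything is finite because $\alpha(G^{\boxtimes k}) \leq |V(G)|^k$ bounds $b_k/k$ by $\log |V(G)|$.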
Looking forward to seeing how Shannon capacity relates to diversity and magnitude!
Posted by: Tobias Fritz on August 10, 2013 4:29 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Great — thanks for setting me straight. You may have caught some last-minute edits of that paragraph: I saw the claim in Kenneth Regan’s post that the sequence was increasing, did a quick calculation
that I thought confirmed it, realized a few minutes later that my calculation was wrong, and then started to doubt whether the claim was true at all. Your comment settles it.
(To be clear: when I say “increasing”, I mean it in the weak sense. I know lots of people like to say “non-decreasing”, but that’s long struck me as a bit bonkers.)
As your post made me realize, it can’t possibly be the case that both (i) $\Theta(C_5) = \sqrt{5}$, and (ii) the sequence $(a_k) = \Bigl(\alpha(C_5^{\boxtimes k})^{1/k}\Bigr)$ is increasing. For
since $a_2 = \sqrt{5}$, (i) and (ii) would together imply that $a_3 = \sqrt{5}$. But $a_3$ is some positive integer to the power of $1/3$, so $\sqrt{5}^3$ would have to be an integer, which it isn’t.
Posted by: Tom Leinster on August 10, 2013 4:51 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Oh, now I understand your email and that it actually was intended for me. Thanks! And I see that you’ve already commented on Kenneth Regan’s blog as well.
Noticing that $\sqrt{5}^3$ is not an integer is a very neat argument for showing that the independence sequence is not increasing! I didn't know that one could put it that way without actually computing any further independence numbers.
Posted by: Tobias Fritz on August 10, 2013 5:09 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
I know lots of people like to say “non-decreasing”, but that’s long struck me as a bit bonkers.
I don’t like to say “nondecreasing”, but if I always say “nondecreasing”/”strictly increasing” then I never have to go back and clarify what I meant by “increasing”. I similarly try always to say
“nonnegative”/”strictly positive”. It can sound silly, but at least it’s unambiguous, regardless of the audience’s background.
Posted by: Mark Meckes on August 11, 2013 6:11 AM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Posted by: Tom Ellis on August 11, 2013 10:33 AM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Posted by: Tom Ellis on August 11, 2013 10:34 AM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
That solution works well in a written medium, but probably isn’t any less awkward when speaking out loud.
Posted by: Mark Meckes on August 11, 2013 12:09 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
I agree, Mark, that certainly has its advantages. Something I do myself which I think is a bit bonkers is to use “nonnegative” to mean “$\geq 0$”. It’s pretty odd to describe a thing by saying what
it’s not. (Though at least “nonnegative” really does mean “not negative”, while “non-decreasing” doesn’t mean “not decreasing”.) I think the French have it right in using positif to mean $\geq 0$ and
strictement positif for $\gt 0$.
Posted by: Tom Leinster on August 11, 2013 12:14 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
It’s pretty odd to describe a thing by saying what it’s not.
Odd perhaps, but not so uncommon that it isn’t a mainstay of not a few examples of writing which can’t be parsed without difficulty.
“non-decreasing” doesn’t mean “not decreasing”.
You can almost make that work by interpreting it as "not decreasing on any subset". But since "decreasing" alone means "decreasing on every subset" you end up applying "not" to only part of the meaning of the word.
I think the French have it right in using positif to mean $\geq 0$ and strictement positif for $\gt 0$.
Probably. But anglophones are strangely inconsistent on this point, with usage correlating with subfield, and in any case part of the differences in audience background we have to deal with is
different linguistic backgrounds. I’m reminded of my favorite quote about mathematicians, by Goethe:
Die Mathematiker sind eine Art Franzosen; redet man mit ihnen, so übersetzen sie es in ihre Sprache, und dann ist es alsobald ganz etwas anderes.
Mathematicians are a species of Frenchman; when one speaks with them, they translate it into their language, and from then on it is something completely different.
Posted by: Mark Meckes on August 11, 2013 1:39 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
There are English-speaking mathematicians who say “positive” to mean “nonnegative”??? Do they also say “greater” to mean “greater than or equal to”?
Posted by: Mike Shulman on August 11, 2013 5:39 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
There are English-speaking mathematicians who say “positive” to mean “nonnegative”???
In certain parts of analysis (French-influenced subfields?) that’s quite common, but it’s not a very self-consistent practice. I don’t think I’ve ever heard anyone call $0$ a “positive integer”
(though we could waste a lot of time and energy on the question of whether it’s a “natural number”), but $0$ is sometimes a “positive number”, and a “positive function” is usually allowed to take on
the value $0$. With more complicated objects it becomes even more common to use "positive" in a weak sense. For example, "positive operator" typically means positive semidefinite, not positive definite.
It all gets to be quite a mess, of course. I think the root of it is that people favor the simplest term for the idea they work with most often. But since different people work more often with
different things…
Do they also say “greater” to mean “greater than or equal to”?
Colloquially, yes. In writing, I’m not sure.
Posted by: Mark Meckes on August 11, 2013 6:13 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
In operator (algebra) theory, positive operators generalize *non-negative* numbers: a multiple of the identity operator is positive if and only if it is a non-negative multiple. One can understand
this use of “positive” as shorthand for “positive semidefinite”.
This seems very sensible terminology and might be an example of what Mark refers to. I think there is also a notion of “strictly positive operator”, but I don’t know what exactly it means.
Posted by: Tobias Fritz on August 11, 2013 9:35 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Analysts, of course, have to distinguish between $\forall x : X, f(x) \gt 0$ and $\forall x : \beta X, \lim_x f \gt 0$, which is more-or-less the same as $\exists \varepsilon \gt 0\, \forall x : X, f(x) \gt \varepsilon$.
Posted by: Jesse McKeown on August 11, 2013 11:02 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Hmm, that seems to me like poorly chosen terminology. If there are two related-but-distinct concepts X and Y, and you have a more general concept which generalizes X, then you shouldn’t call it a Y.
(I see historically how it might have happened, from “positive” to “positive definite” to “positive semidefinite” to “positive”, but that doesn’t make the end result sensible.)
Posted by: Mike Shulman on August 12, 2013 6:27 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
I’m speaking beyond my knowledge here, but I imagine it’s the case that positive operators (in the non-strict sense that Tobias mentioned) are much more interesting and important than strictly
positive operators. And if they’re the central object of study, it’s intolerable to keep calling them by some name such as “nonnegative” or “non-strictly positive” or “positive semidefinite”. You
want something short and snappy, like “positive”.
In Practical Foundations of Mathematics, Paul Taylor explains why (unusually for someone with a background in category theory) he uses $\subset$ to mean what I'd call $\subseteq$. His argument is that being a possibly-improper subset is a more important and common condition than being a proper subset — and as the primary concept, it should get a simpler symbol. I have some sympathy with that.
I’m pretty sure he doesn’t go as far as saying that because “less than or equal to” is a more important and common condition than “strictly less than”, he’s going to use $\lt$ for what everyone else
calls $\leq$. That would be… taxing. But again, I’d have some sympathy for such a position.
Posted by: Tom Leinster on August 12, 2013 8:17 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
I agree that important concepts need short and snappy names, but I don’t think that that justifies appropriating a word or symbol that already has a different meaning. But I guess I’m familiar with
the fact that some people feel differently.
Posted by: Mike Shulman on August 13, 2013 1:18 AM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
If you want to see some really unfortunate terminology along these lines, consider the following: a function $f:G \to \mathbb{C}$ on an abelian group $G$ is called “positive definite” if, for each
finite subset $A \subseteq G$, the matrix $[f(a-b)]_{a,b \in A}$ is positive semidefinite; if for each $A$ the matrix is positive definite, then $f$ is called strictly positive definite. “Positive
semidefinite function” is not in common use.
Then there’s “nonnegative definite” (matrix, operator, bilinear form, etc.), which is sometimes used as a synonym for “positive semidefinite”, even though such a thing is not in general “definite”.
Posted by: Mark Meckes on August 12, 2013 11:20 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
What exactly is “definite” about something “positive definite”? I’ve never understood that terminology (and “nonnegative definite” seems to me like a much more obvious generalization of it than
“positive semidefinite”).
Posted by: Mike Shulman on August 13, 2013 6:40 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
If you use the definitions at [[inner product space#definite]], then ‘positive’, ‘semidefinite’, and ‘definite’ all have independent meanings, and ‘positive definite’ literally means both positive
and definite. (Also, ‘positive semidefinite’ is redundant, but I advocate its use to avoid confusion.)
Posted by: Toby Bartels on September 15, 2013 6:43 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Well, what I’ve heard said before is that the “definite” part of “positive definite” is supposed to refer to the nondegeneracy condition. But looking around a bit now, I find that it’s more standard
to say that it refers to the fact that (say, for a bilinear form) $\langle x, x \rangle$ takes on only positive values for nonzero $x$. So “nonnegative definite” certainly is a more logical
generalization, and I retract the second paragraph of my last comment.
Posted by: Mark Meckes on August 13, 2013 8:08 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Come to think of it, that's also my favorite quote about Frenchmen.
And by the way: sorry, Tom, for hijacking this comment thread. I am interested in the post itself, and I’m looking forward to the sequel.
Posted by: Mark Meckes on August 11, 2013 2:25 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
No apology necessary — after all, I was the one who used the provocative phrase "a bit bonkers", and who doesn't like to air their opinions on terminology?
(I like the Goethe quote too.)
Posted by: Tom Leinster on August 11, 2013 10:55 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
I know lots of people like to say “non-decreasing”, but that’s long struck me as a bit bonkers.
You can also do ‘nowhere decreasing’ or ‘never decreasing’ (depending on whether you prefer a spatial or temporal metaphor for your variables, and with a hyphen if desired). This even works for
partial orders if you add ‘nohow’ and interpret $x \leq y$ as that $x$ has a smaller (or equal) value than $y$ has in each of various (possibly interacting) ways. (I don't really advocate this, but
it can be done.)
Posted by: Toby Bartels on September 15, 2013 7:08 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Jamie Vicary explained Shannon capacity to me last summer, using the same example of the 5-cycle $C_5$ and its ‘strong square’ $C_5^{\boxtimes 2}$. But he presented it in the form of a puzzle: find
as many pairwise nonadjacent vertices as you can in $C_5^{\boxtimes 2}$.
The puzzle was fun and not too hard because he drew $C_5^{\boxtimes 2}$—which you, alas, but quite forgivably, did not. You can draw it as a little grid of squares with the convention that opposite
edges are identified. But then you can replace the vertices of this grid by the squares in this grid, and the puzzle becomes:
How many kings can you put on a $5 \times 5$ wraparound chessboard so that no king can capture another?
And then the answer involves knight moves, and the relation to
$\sqrt{5} = \sqrt{2^2 + 1^2}$
becomes quite beautifully visible!
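A throwaway Python check of the kings picture (my own sketch; the kings sit at $(i, 2i \bmod 5)$, i.e. at successive knight moves):

    from itertools import combinations

    kings = [(i, (2 * i) % 5) for i in range(5)]   # (0,0), (1,2), (2,4), (3,1), (4,3)

    def attack(p, q):
        # On a 5x5 wraparound board, two kings attack iff both coordinate
        # differences are 0 or +-1 mod 5.
        return (p[0] - q[0]) % 5 in (0, 1, 4) and (p[1] - q[1]) % 5 in (0, 1, 4)

    assert not any(attack(p, q) for p, q in combinations(kings, 2))
    print(kings)   # five mutually non-attacking kings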
I eagerly await future posts, since we never got too terribly far doing anything with the idea of Shannon capacity.
Posted by: John Baez on August 11, 2013 3:36 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
he drew $C_5^{\boxtimes 2}$—which you, alas, but quite forgivably, did not.
Heh. Yes, I spent a good half-minute contemplating the horror of drawing a kind of uncomfortable-to-sit-on torus representing $C_5 \boxtimes C_5$. The grid solution is much better!
But I don’t fully understand your comment. I can see that the five kings are all a knight’s move apart, and that the Euclidean length of a knight’s move is $\sqrt{5} = \sqrt{2^2 + 1^2}$. However, I
don’t see why that’s the same as the “other” $\sqrt{5}$ — that is, the square root of the independence number of $C_5^{\boxtimes 2}$. Can you give me a clue?
Posted by: Tom Leinster on August 11, 2013 11:02 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Posted by: John Baez on August 12, 2013 3:49 AM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Posted by: Tom Leinster on August 12, 2013 11:47 AM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Is that what you meant?
(Don’t know why one vertical and one horizontal line have come out heavier than the others; that’s just an accident.)
Posted by: Tom Leinster on August 13, 2013 12:14 AM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Yes, that’s exactly the picture I was hinting at!
You’re taking a square with sides of length 5 and neatly chopping into 5 squares with length $\sqrt{5}$, using knight moves… so you get to code for 5 different expressions using two letters without
ambiguity, getting a Shannon capacity of $\sqrt{5}$.
The two $\sqrt{5}$’s here have opposite, or reciprocal, meanings: one says how far apart the code words need to be, while the other says how much capacity your code has… but they’re related.
Posted by: John Baez on August 13, 2013 10:38 AM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
In the first comment, Tobias linked to a paper of his:
Tobias Fritz, Anthony Leverrier, Ana Belén Sainz, A combinatorial approach to nonlocality and contextuality, arXiv:1212.4084.
I’ve just been skimming through it, and I’d like to highlight their Conjecture A.2.1. It’s actually several related conjectures.
They say that a graph $G$ is single-shot if $\alpha(G) = \Theta(G)$. Remember that $\alpha(G)$ is the independence number of $G$ (the largest cardinality of a set of unrelated vertices), and that the
Shannon capacity $\Theta(G)$ is given by
$\Theta(G) = \sup_k \alpha(G^{\boxtimes k})^{1/k} = \lim_{k \to \infty} \alpha(G^{\boxtimes k})^{1/k}.$
In particular, $\alpha(G) \leq \Theta(G)$. “Single-shot” means that taking higher powers of your graph gives you no efficiency savings at all: for instance, the 5-cycle $C_5$ is not single-shot,
because we were able to make a clever saving by considering pairs of vertices.
Their conjecture says that if $G_1$ and $G_2$ are single-shot, then:
1. $\alpha(G_1 \boxtimes G_2) = \alpha(G_1) \alpha(G_2)$;
2. $\Theta(G_1 \boxtimes G_2) = \Theta(G_1) \Theta(G_2)$;
3. $\Theta(G_1 + G_2) = \Theta(G_1) + \Theta(G_2)$,
where in (3), $+$ means disjoint union.
These seem very simple statements not to be known!
Posted by: Tom Leinster on August 12, 2013 2:00 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
By now, all of these conjectures have been disproven! There are counterexamples to Conjecture 7.5.1(c) found by our collaborator Antonio Acín. By virtue of the implications displayed in Figure 12,
these translate into counterexamples to the conjectures mentioned by Tom. We are currently preparing a follow-up paper containing these results.
The counterexamples rely crucially on graphs whose Lovász number is strictly greater than their Shannon capacity; such graphs have been found in Haemers’ paper “An upper bound for the Shannon
capacity of a graph”.
Haemers’ bound is well worth knowing due to its sheer simplicity: let $B$ be a matrix over some field with rows and columns indexed by the vertices of the graph $G$, such that $B_{vw}=0$ if two
distinct vertices $v$ and $w$ are adjacent, while $B_{vv}eq 0$ on the diagonal. Then every independent set of vertices induces a submatrix which is diagonal of full rank, and hence $\mathrm{rank}(B)\
geq\alpha(G)$. Since $B^{\otimes n}$ satisfies the analogous properties for $G^{\boxtimes n}$, we have $\alpha(G^{\boxtimes n})\leq \mathrm{rank}(B^{\otimes n})=\mathrm{rank}(B)^n$. This implies $\
Theta(G)\leq \mathrm{rank}(B)$.
(The conditions on $B$ are curiously contrary to Tom’s philosophy of working with reflexive graphs…)
Posted by: Tobias Fritz on August 13, 2013 2:01 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Oh well! Good that you’ve uncovered the truth.
Something seems to be wrong in the conditions on $B$ that you list, otherwise it won’t be true that
every independent set of vertices induces a submatrix which is diagonal.
The paper of Haemers that you link to agrees with what you wrote, but the Wikipedia article on Shannon capacity changes
$B_{v w} = 0$ if two distinct vertices $v$ and $w$ are adjacent
to
$B_{v w} = 0$ if two distinct vertices $v$ and $w$ are not adjacent
(my paraphrasing), and cites that same paper of Haemers with the note that they're correcting a typo in it. I think that's got to be right.
So if we view $G$ as reflexive, the conditions on $B$ are that for vertices $v$ and $w$,
$v = w \quad \implies \quad B_{v w} \neq 0 \quad \implies \quad \{v,w\} \in E(G)$
where $E(G)$ is the set of edges. That seems to suggest that it’s more convenient here to view $G$ as reflexive than irreflexive (not that I have a global preference for one over the other).
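For what it's worth, here is a small numpy sketch (mine) of the bound with the corrected condition. The crude choice $B = I + A$ is only meant to show the mechanics; for $C_5$ it yields just the trivial bound $\Theta(C_5) \leq 5$, and a useful bound would need a cleverer $B$, possibly over a finite field:

    import numpy as np

    n = 5
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1    # adjacency of the 5-cycle

    B = np.eye(n, dtype=int) + A
    for v in range(n):
        assert B[v, v] != 0                          # nonzero diagonal
        for w in range(n):
            if v != w and A[v, w] == 0:
                assert B[v, w] == 0                  # zero on non-adjacent pairs

    print(np.linalg.matrix_rank(B))                  # 5, an upper bound on Theta(C5)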
Posted by: Tom Leinster on August 13, 2013 3:53 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
By the way, it should probably be mentioned that counterexamples to the analogous conjectures for general graphs (as opposed to single-shot) have been known for quite some time.
If I recall correctly, both $\Theta(G_1\boxtimes G_2)=\Theta(G_1)\Theta(G_2)$ and $\Theta(G_1+G_2)=\Theta(G_1)+\Theta(G_2)$ were disproved in Haemers' 1979 paper "On some problems of Lovász concerning the Shannon capacity of a graph". (I currently can't find any full-text version.)
Posted by: Tobias Fritz on August 13, 2013 2:12 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
You wrote “we saw that $\Theta(C_5) = 2$”, which must be a typo since above you explain that $\Theta(C_5) \ge \sqrt{5} \gt 2$. In fact, it is true that $\Theta(C_5) = \sqrt{5}$ but it takes quite a
bit of cleverness to show. Shannon managed to compute the capacity of all graphs with 5 vertices or fewer except for $C_5$, for which he showed that $\sqrt{5} \le \Theta(C_5) \le \frac{5}{2}$. More
than 20 years after Shannon's work, László Lovász proved that $\Theta(C_5) = \sqrt{5}$ by what seems to me to be a very clever strategy.
Lovász's paper is On the Shannon capacity of a graph, IEEE Trans. Information Theory 25 (1979), 1-7. I read his proof in the charming Proofs from THE BOOK by Martin Aigner and Günter M. Ziegler.
Posted by: Omar Antolín-Camarena on August 16, 2013 7:14 PM | Permalink | Reply to this
Re: The Shannon Capacity of a Graph, 1
Oops, thanks. Typo fixed now.
I’m actually in the middle of writing Part 2, which mentions Lovász’s proof that $\Theta(C_5) = \sqrt{5}$. I’d guessed, but didn’t know for sure, that Lovász was the first person to prove this.
Thanks for confirming it and for explaining the history.
Posted by: Tom Leinster on August 17, 2013 6:31 PM | Permalink | Reply to this
Articles Tagged ‘quantum’ - Less Wrong Discussion
Let's play around with the quantum measure some more. Specifically, let's posit a theory T that claims that the quantum measure of our universe is increasing - say by 50% each day. Why could this be happening? Well, here's a quasi-justification for it: imagine there are lots and lots of universes, most of them in chaotic random states, jumping around to other chaotic random states, in accordance with the usual laws of quantum mechanics. Occasionally, one of them will partially tunnel, by chance, into the same state our universe is in - and then will evolve forwards in time exactly as our universe does. Over time, we'll accumulate an ever-growing measure.
That theory sounds pretty unlikely, no matter what feeble attempts are made to justify it. But T is observationally indistinguishable from our own universe, and has a non-zero probability of being
true. It's the reverse of the (more likely) theory presented here, in which the quantum measure was being constantly diminished. And it's very bad news for theories that treat the quantum measure
(squared) as akin to a probability, without ever renormalising. It implies that one must continually sacrifice for the long-term: any pleasure today is wasted, as that pleasure will be weighted so
much more tomorrow, next week, next year, next century... A slight fleeting smile on the face of the last human is worth more than all the ecstasy of the previous trillions.
One solution to the "quantum measure is continually diminishing" problem was to note that as the measure of the universe diminished, it would eventually get so low that any alternative, non-measure-diminishing theory, no matter how initially unlikely, would predominate. But that solution is not available here - indeed, that argument runs in reverse, and makes the situation worse.
No matter how initially unlikely the "quantum measure is continually increasing" theory is, eventually, the measure will become so high that it completely dominates all other theories.
Child, I'm sorry to tell you that the world is about to end. Most likely. You see, this madwoman has designed a doomsday machine that will end all life as we know it - painlessly and immediately. It
is attached to a supercomputer that will calculate the 10^100th digit of pi - if that digit is zero, we're safe. If not, we're doomed and dead.
However, there is one thing you are allowed to do - switch out the logical trigger and replace it with a quantum trigger that instead generates a quantum event that will prevent the bomb from triggering with 1/10th measure squared (in the other cases, the bomb goes off). Are you ok paying €5 to replace the triggers like this?
If you treat quantum measure squared exactly as probability, then you shouldn't see any reason to replace the trigger. But if you believed in many worlds quantum mechanics (or think that MWI is
possibly correct with non-zero probability), you might be tempted to accept the deal - after all, everyone will survive in one branch. But strict total utilitarians may still reject the deal. Unless
they refuse to treat quantum measure as akin to probability in the first place (meaning they would accept all quantum suicide arguments), they tend to see a universe with a tenth of measure-squared
as exactly equally valued to a 10% chance of a universe with full measure. And they'd even do the reverse, replace a quantum trigger with a logical one, if you paid them €5 to do so.
Still, most people, in practice, would choose to change the logical bomb for a quantum bomb, if only because they were slightly uncertain about their total utilitarian values. It would seem self-evident that risking the total destruction of humanity is much worse than reducing its measure by a factor of 10 - a process that would be undetectable to everyone.
Of course, once you agree with that, we can start squeezing. What if the quantum trigger only has a 1/20 measure-squared "chance" of saving us? 1/1000? 1/10000? If you don't want to fully accept the quantum immortality arguments, you need to stop - but at what point?
Hello rationality friends! I have a question that I bet some of you have thought about...
I hear lots of people saying that classical coin flips are not "quantum random events", because the outcome is very nearly determined by thumb movement when I flip the coin. More precisely, one can say that the state of my thumb and the state of the landed coin are strongly entangled, such that, say, 99% of the quantum measure of the coin flip outcomes my post-flip thumb observes all land the same way.
First of all, I've never actually seen an order of magnitude estimate to support this claim, and would love it if someone here can provide or link to one!
Second, I'm not sure how strongly entangled my thumb movement is with my subjective experience, i.e., with the parts of my brain that consciously process the decision to flip and the outcome. So
even if the coin outcome is almost perfectly determined by my thumb, it might not be almost perfectly determined by my decision to flip the coin.
For example, while the thumb movement happens, a lot of calibration goes on between my thumb, my motor cortex, and my cerebellum (which certainly affects but does not seem to directly process
conscious experience), precisely because my motor cortex is unable to send, on its own, a precise and accurate enough signal to my thumb that achieves the flicking motion that we eventually learn to
do in order to flip coins. Some of this inability is due to small differences in environmental factors during each flip that the motor cortex does not itself process directly, but is processed by
the cerebellum instead. Perhaps some of this inability also comes directly from quantum variation in neuron action potentials being reached, or perhaps some of the aforementioned environmental
factors arise from quantum variation.
Anyway, I'm altogether not *that* convinced that the outcome of a coin flip is sufficiently dependent on my decision to flip as to be considered "not a quantum random event" by my conscious brain.
Can anyone provide me with some order of magnitude estimates to convince me either way about this? I'd really appreciate it!
ETA: I am not asking if coin flips are "random enough" in some strange, undefined sense. I am actually asking about quantum entanglement here. In particular, when your PFC decides for planning reasons to flip a coin, does the evolution of the wave function produce a world that is in a superposition of states (coin landed heads)⊗(you observed heads) + (coin landed tails)⊗(you observed tails)? Or does a monomial state result, either (coin landed heads)⊗(you observed heads) or (coin landed tails)⊗(you observed tails), depending on the instance?
At present, despite having been told many times that coin flips are not "in superpositions" relative to "us", I'm not convinced that there is enough mutual information connecting my frontal lobe and
the coin for the state of the coin to be entangled with me (i.e. not "in a superposed state") before I observe it. I realize this is somewhat testable, e.g., if the state amplitudes of the coin can
be forced to have complex arguments differing in a predictable way so as to produce expected and measurable interference patterns. This is what we have failed to produce at a macroscopic level in
attempts to produce visible superpositions. But I don't know if we fail to produce messier, less-visibly-self-interfering superpositions, which is why I am still wondering about this...
Any help / links / fermi estimates on this will be greatly appreciated!
Imagine that the universe is approximately as it appears to be (I know, this is a controversial proposition, but bear with me!). Further imagine that the many worlds interpretation of quantum mechanics is true (I'm really moving out of Less Wrong's comfort zone here, aren't I?).
Now assume that our universe is in a situation of false vacuum - the universe is not in its lowest energy configuration. Somewhere, at some point, our universe may tunnel into true vacuum, resulting in an expanding bubble of destruction that will eat the entire universe at high speed, destroying all matter and life. In many worlds, such a collapse need not be terminal: life could go on in a branch of lower measure. In fact, anthropically, life will go on somewhere, no matter how unstable the false vacuum is.
So now assume that the false vacuum we're in is highly unstable - the measure of the branch in which our universe survives goes down by a factor of a trillion every second. We only exist because
we're in the branch of measure a trillionth of a trillionth of a trillionth of... all the way back to the Big Bang.
None of these assumptions make any difference to what we'd expect to see observationally: only a good enough theory can say that they're right or wrong. You may notice that this setup transforms the
whole universe into a quantum suicide situation.
The question is, how do you go about maximising expected utility in this situation? I can think of a few different approaches:
1. Gnaw on the bullet: take the quantum measure as a probability. This means that you now have a discount factor of a trillion every second. You have to rush out and get/do all the good stuff as
fast as possible: a delay of a second costs you a reduction in utility of a trillion. If you are a negative utilitarian, you also have to rush to minimise the bad stuff, but you can also take
comfort in the fact that the potential for negative utility across the universe is going down fast.
2. Use relative measures: care about the relative proportion of good worlds versus bad worlds, while assigning zero to those worlds where the vacuum has collapsed. This requires a natural zero to
make sense, and can be seen as quite arbitrary: what would you do about entangled worlds, or about the non-zero probability that the vacuum-collapsed worlds may have worthwhile life in them?
Would the relative measure user also put zero value to worlds that were empty of life for other reasons than vacuum collapse? For instance, would they be in favour of programming an AI's
friendliness using random quantum bits, if it could be reassured that if friendliness fails, the AI would kill everyone immediately?
3. Deny the measure: construct a meta ethical theory where only classical probabilities (or classical uncertainties) count as probabilities. Quantum measures do not: you care about the sum total of
all branches of the universe. Universes in which the photon went through the top slit, went through the bottom slit, or was in an entangled state that went through both slits... to you, there are
three completely separate universes, and you can assign totally unrelated utilities to each one. This seems quite arbitrary, though: how are you going to construct these preferences across the
whole of the quantum universe, when you forged your current preferences on a single branch?
4. Cheat: note that nothing in life is certain. Even if we have the strongest evidence imaginable about vacuum collapse, there's always a tiny chance that the evidence is wrong. After a few seconds,
that probability will be dwarfed by the discount factor of the collapsing universe. So go about your business as usual, knowing that most of the measure/probability mass remains in the
non-collapsing universe. This can get tricky if, for instance, the vacuum collapsed more slowly than a factor of a trillion a second. Would you be in a situation where you should behave as if you
believed vacuum collapse for another decade, say, and then switch to a behaviour that assumed non-collapse afterwards? Also, would you take seemingly stupid bets, like bets at a trillion trillion
trillion to one that the next piece of evidence will show no collapse (if you lose, you're likely in the low measure universe anyway, so the loss is minute)?
Quantum field theory (QFT) is the basic framework of particle physics. Particles arise from the quantized energy levels of field oscillations; Feynman diagrams are the simple tool for approximating
their interactions. The "standard model", the success of which is capped by the recent observation of a Higgs boson lookalike, is a quantum field theory.
But just like everything mathematical, quantum field theory has hidden depths. For the past decade, new pictures of the quantum scattering process (in which particles come together, interact, and
then fly apart) have incrementally been developed, and they presage a transformation in the understanding of what a QFT describes.
At the center of this evolution is "N=4 super-Yang-Mills theory", the maximally supersymmetric QFT in four dimensions. I want to emphasize that from a standard QFT perspective, this theory contains
nothing but scalar particles (like the Higgs), spin-1/2 fermions (like electrons or quarks), and spin-1 "gauge fields" (like photons and gluons). The ingredients aren't something alien to real
physics. What distinguishes an N=4 theory is that the particle spectrum and the interactions are arranged so as to produce a highly extended form of supersymmetry, in which particles have multiple
partners (so many LWers should be comfortable with the notion).
In 1997, Juan Maldacena discovered that the N=4 theory is equivalent to a type of string theory in a particular higher-dimensional space. In 2003, Edward Witten discovered that it is also equivalent
to a different type of string theory in a supersymmetric version of Roger Penrose's twistor space. Those insights didn't come from nowhere, they explained algebraic facts that had been known for many
years; and they have led to a still-accumulating stockpile of discoveries about the properties of N=4 field theory.
What we can say is that the physical processes appearing in the theory can be understood as taking place in either of two dual space-time descriptions. Each space-time has its own version of a
particular large symmetry, "superconformal symmetry", and the superconformal symmetry of one space-time is invisible in the other. And now it is becoming apparent that there is a third description,
which does not involve space-time at all, in which both superconformal symmetries are manifest, but in which space-time locality and quantum unitarity are not "visible" - that is, they are not
manifest in the equations that define the theory in this third picture.
I cannot provide an authoritative account of how the new picture works. But here is my impression. In the third picture, the scattering processes of the space-time picture become a complex of
polytopes - higher-dimensional polyhedra, joined at their faces - and the quantum measure becomes the volume of these polyhedra. Where you previously had particles, you now just have the dimensions
of the polytopes; and the fact that in general, an n-dimensional space doesn't have n special directions suggests to me that multi-particle entanglements can be something more fundamental than the
separate particles that we resolve them into.
It will be especially interesting to see whether this polytope combinatorics, that can give back the scattering probabilities calculated with Feynman diagrams in the usual picture, can work solely
with ordinary probabilities. That was Penrose's objective, almost fifty years ago, when he developed the theory of "spin networks" as a new language for the angular momentum calculations of quantum
theory, and which was a step towards the twistor variables now playing an essential role in these new developments. If the probability calculus of quantum mechanics can be obtained from conventional
probability theory applied to these "structures" that may underlie familiar space-time, then that would mean that superposition does not need to be regarded as ontological.
I'm talking about this now because a group of researchers around Nima Arkani-Hamed, who are among the leaders in this area, released their first paper in a year this week. It's very new, and so
arcane that, among physics bloggers, only Lubos Motl has talked about it.
This is still just one step in a journey. Not only does the paper focus on the N=4 theory - which is not the theory of the real world - but the results only apply to part of the N=4 theory, the
so-called "planar" part, described by Feynman diagrams with a planar topology. (For an impressionistic glimpse of what might lie ahead, you could try this paper, whose author has been shouting from
the wilderness for years that categorical knot theory is the missing piece of the puzzle.)
The N=4 theory is not reality, but the new perspective should generalize. Present-day calculations in QCD already employ truncated versions of the N=4 theory; and Arkani-Hamed et al specifically
mention another supersymmetric field theory (known as ABJM after the initials of its authors), a deformation of which is holographically dual to a theory-of-everything candidate from 1983.
When it comes to seeing reality in this new way, we still only have, at best, a fruitful chaos of ideas and possibilities. But the solid results - the mathematical equivalences - will continue to
pile up, and the end product really ought to be nothing less than a new conception of how physics works.
If the many worlds of the Many Worlds Interpretation of quantum mechanics are real, there's at least a good chance that Quantum Immortality is real as well: All conscious beings should expect to
experience the next moment in at least one Everett branch even if they stop existing in all other branches, and the moment after that in at least one other branch, and so on forever.
However, the transition from life to death isn't usually a binary change. For most people it happens slowly as your brain and the rest of your body deteriorates, often painfully.
Doesn't it follow that each of us should expect to keep living in this state of constant degradation and suffering for a very, very long time, perhaps forever?
I don't know much about quantum mechanics, so I don't have anything to contribute to this discussion. I'm just terrified, and I'd like, not to be reassured by well-meaning lies, but to know the
truth. How likely is it that Quantum Torment is real?
Doing some insomniac reading of the Quantum Sequence, I think that I've gotten a reasonable grasp of the principles of decoherence, non-interacting bundles of amplitude, etc. I then tried to put that
knowledge to work by comparing it with my understanding of virtual particles (whose rate of creation in any area is essentially equivalent to the electromagnetic field), and I had a thought I can't
seem to find mentioned elsewhere.
If I understand decoherence right, then quantum events which can't be differentiated from each other get summed together into the same blob of amplitude. Most virtual particles which appear and
rapidly disappear do so in ways that can't be detected, let alone distinguished. This seems as if it could potentially imply that the extreme evenness of a vacuum might have to do more with the
overall blob of amplitude of the vacuum being smeared out among all the equally-likely vacuum fluctuations, than it does directly with the evenness of the rate of vacuum fluctuations themselves. It
also seems possible that there could be some clever way to test for an overall background smear of amplitude, though I'm not awake enough to figure one out just now. (My imagination has thrown out
the phrase 'collapse of the vacuum state', but I'm betting that that's just unrelated quantum buzzword bingo.)
Does anything similar to what I've just described have any correlation with actual quantum theory, or will I awaken to discover all my points have been voted away due to this being complete and utter nonsense?
This article should really be called "Patching the argumentative flaw in the Sequences created by the Quantum Physics Sequence".
There's only one big thing wrong with that Sequence: the central factual claim is wrong. I don't mean the claim that the Many Worlds interpretation is correct; I mean the claim that the Many Worlds
interpretation is obviously correct. I don't agree with the ontological claim either, but I especially don't agree with the epistemological claim. It's a strawman which reduces the quantum debate to
Everett versus Bohr - well, it's not really Bohr, since Bohr didn't believe wavefunctions were physical entities. Everett versus Collapse, then.
I've complained about this from the beginning, simply because I've also studied the topic and profoundly disagree with Eliezer's assessment. What I would like to see discussed on this occasion is not
the physics, but rather how to patch the arguments in the Sequences that depend on this wrong sub-argument. To my eyes, this is a highly visible flaw, but it's not a deep one. It's a detail, a bug.
Surely it affects nothing of substance.
However, before I proceed, I'd better back up my criticism. So: consider the existence of single-world retrocausal interpretations of quantum mechanics, such as John Cramer's transactional
interpretation, which is descended from Wheeler-Feynman absorber theory. There are no superpositions, only causal chains running forward in time and backward in time. The calculus of complex-valued
probability amplitudes is supposed to arise from this.
The existence of the retrocausal tradition already shows that the debate has been represented incorrectly; it should at least be Everett versus Bohr versus Cramer. I would also argue that when you
look at the details, many-worlds has no discernible edge over single-world retrocausality:
• Relativity isn't an issue for the transactional interpretation: causality forwards and causality backwards are both local; it's the existence of loops in time which creates the appearance of nonlocality.
• Many-worlds finds hope of such a derivation in a property of the quantum formalism: the resemblance of density matrix entries to probabilities. But single-world retrocausality finds such hope
too: the Born probabilities can be obtained from the product of ψ with ψ*, its complex conjugate, and ψ* is the time reverse of ψ.
• Loops in time just fundamentally bug some people, but splitting worlds have the same effect on others.
I am not especially an advocate of retrocausal interpretations. They are among the possibilities; they deserve consideration and they get it. Retrocausality may or may not be an element of the real
explanation of why quantum mechanics works. Progress towards the discovery of the truth requires exploration on many fronts, that's happening, we'll get there eventually. I have focused on
retrocausal interpretations here just because they offer the clearest evidence that the big picture offered by the Sequence is wrong.
It's hopeless to suggest rewriting the Sequence, I don't think that would be a good use of anyone's time. But what I would like to have, is a clear idea of the role that "the winner is ... Many
Worlds!" plays in the overall flow of argument, in the great meta-sequence that is Less Wrong's foundational text; and I would also like to have a clear idea of how to patch the argument, so that it
routes around this flaw.
In the wiki, it states that "Cleaning up the old confusion about QM is used to introduce basic issues in rationality (such as the technical version of Occam's Razor), epistemology, reductionism,
naturalism, and philosophy of science." So there we have it - a synopsis of the function that this Sequence is supposed to perform. Perhaps we need a working group that will identify each of the
individual arguments, and come up with a substitute for each one.
Interpreting quantum mechanics throws an interesting wrench into utility calculation.
Utility functions, according to the interpretation typical in these parts, are a function of the state of the world, and an agent with consistent goals acts to maximize the expected value of their
utility function. Within the many-worlds interpretation (MWI) of quantum mechanics (QM), things become interesting because "the state of the world" refers to a wavefunction which contains all
possibilities, merely in differing amounts. With an inherently probabilistic interpretation of QM, flipping a quantum coin has to be treated linearly by our rational agent - that is, when calculating
expected utility, they have to average the expected utilities from each half. But if flipping a quantum coin is just an operation on the state of the world, then you can use any function you want
when calculating expected utility.
And all coins, when you get down to it, are quantum. At the extreme, this leads to the possible rationality of quantum suicide - since you're alive in the quantum state somewhere, just claim that
your utility function non-linearly focuses on the part where you're alive.
As you may have heard, there have been several papers in the quantum mechanics literature that claim to recover ordinary rules for calculating expected utility in MWI - how does that work?
Well, when they're not simply wrong (for example, by replacing a state labeled by the number a+b with the state |a> + |b>), they usually go about it with the Von Neumann-Morgenstern axioms, modified
to refer to quantum mechanics:
1. Completeness: Every state can be compared to every other, preferencewise.
2. Transitivity: If you prefer |A> to |B> and |B> to |C>, you also prefer |A> to |C>.
3. Continuity: If you prefer |A> to |B> and |B> to |C>, there's some quantum-mechanical measure (note that this is a change from "probability") X such that you're indifferent between (1-X)|A> + X|C>
and |B>.
4. Independence: If you prefer |A> to |B>, then you also prefer (1-X)|A> + X|C> to (1-X)|B> + X|C>, where |C> can be anything and X isn't 1.
In classical cases, these four axioms are easy to accept, and lead directly to utility functions with X as a probability. In quantum mechanical cases, the axioms are harder to accept, but the only measure available is indeed the ordinary amplitude-squared measure (this last fact features prominently in Everett's original paper). This gives you back the traditional rule for calculating expected utility.
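For concreteness, the recovered rule is just a weighted average with amplitude-squared weights; a toy Python illustration (mine, with made-up numbers):

    import numpy as np

    amplitudes = np.array([1 + 0j, 1j]) / np.sqrt(2)   # an equal superposition of two outcomes
    utilities = np.array([10.0, -2.0])                 # utility assigned to each outcome

    weights = np.abs(amplitudes) ** 2                  # Born weights (amplitude squared)
    assert abs(weights.sum() - 1) < 1e-12              # they behave like probabilities
    print(weights @ utilities)                         # expected utility: 4.0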
For an example of why these axioms are weird in quantum mechanics, consider the case of light. Linearly polarized light is actually the same thing as an equal superposition of right-handed and
left-handed circularly polarized light. This has the interesting consequence that even when light is linearly polarized, if you shine it on atoms, those atoms will change their spins - they'll just
change half right and half left. Or if you take circularly polarized light and shine it on a linear polarizer, half of it will go through. So anyhow, we can make axiom 4 read "If you are indifferent
between left-polarized light and right-polarized light, then you must also be indifferent between linearly polarized light (i.e. left+right) and circularly polarized light (right+right)." But...
can't a guy just want circularly polarized light?
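Spelled out (my paraphrase, ignoring normalization): take X = 1/2 and |C> = |right> in axiom 4. Indifference between |left> and |right> then forces indifference between (1/2)|left> + (1/2)|right>, which is linearly polarized light, and (1/2)|right> + (1/2)|right> = |right>, which is circularly polarized light.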
Under what sort of conditions does the independence axiom make intuitive sense? Ones where something more complicated than a photon is being considered. Something like you. If MWI is correct and you
measure the polarization of linearly polarized light vs. circularly polarized light, this puts your brain in a superposition of linear vs. circular. But nobody says "boy, I really want a circularly
polarized brain."
A key factor, as is often the case when talking about recovering classical behavior from quantum mechanics, is decoherence. If you carefully prepare your brain in a circularly polarized state, and
you interact with an enormous random system (like by breathing air, or emitting thermal radiation), your carefully prepared brain-state is going to get shredded. It's a fascinating property of
quantum mechanics that once you "leak" information to the outside, things are qualitatively different. If we have a pair of entangled particles and a classical phone line, I can send you an exact
quantum state - it's called quantum teleportation, and it's sweet. But if one of our particles leaks even the tiniest bit, even if we just end up with three particles entangled instead of two, our
ability to transmit quantum states is gone completely.
In essence, the states we started with were "close together" in the space where quantum mechanics lives (Hilbert space), and so they could interact via quantum mechanics. Interacting with the outside
even a little scattered our entangled particles farther apart.
Any virus, dust speck, or human being is constantly interacting with the outside world. States that are far enough apart to be perceptibly different to us aren't just "one parallel world away," like
would make a good story - they are cracked wide open, spread out in the atmosphere as soon as you breathe it, spread by the Earth as soon as you push on it with your weight. If we were photons, one
could easily connect with their "other selves" - if you try to change your polarization, whether you succeed or fail will depend on the orientation of your oppositely-polarized "other self"! But once
you've interacted with the Earth, this quantum interference becomes negligible - so negligible that we seem to neglect it. When we make a plan, we don't worry that our nega-self might plan the
opposite and we'll cancel each other out.
Does this sort of separation explain an approximate independence axiom, which is necessary for the usual rules for expected utility? Yes.
Because of decoherence, non-classical interactions are totally invisible to unaided primates, so it's expected that our morality neglects them. And if the states we are comparing are noticeably
different, they're never going to interact, so independence is much more intuitive than in the case of a single photon. Taken together with the other axioms, which still make a lot of sense, this
defines expected utility maximization with the Born rule.
So this is my take on utility functions in quantum mechanics - any living thing big enough to have a goal system will also be big enough to neglect interaction between noticeably different states,
and thus make decisions as if the amplitude squared was a probability. With the help of technology, we can create systems where the independence axiom breaks down, but these systems are things like
photons or small loops of superconducting wire, not humans.
E.T. Jaynes had a brief exchange of correspondence with Hugh Everett in 1957. The exchange was initiated by Everett, who commented on recently published works by Jaynes. Jaynes responded to Everett's
comments, and finally sent Everett a letter reviewing a short version of Everett's thesis published that year.
Jaynes' reaction was extremely positive at first: "It seems fair to say that your theory is the logical completion of quantum theory, in exactly the same sense that relativity was the logical completion of classical theory." High praise. But Jaynes swiftly follows up the praise with fundamental objections: "This is just the fundamental cause of Einstein's most serious objections to quantum theory, and it seems to me that the things that worried Einstein still cause trouble in your theory, but in an entirely new way." His letter goes on to detail his concerns, and insist, with Bohm, that "Einstein's objections to quantum theory have never been satisfactorily answered."
The Collected Works of Everett has some narrative about their interaction:
Hugh Everett marginal notes on page from E. T. Jaynes' "Information Theory and Statistical Mechanics"
Hugh Everett handwritten draft letter to E.T. Jaynes, 15-May-1957
Hugh Everett letter to E. T. Jaynes, 11-June-1957
E.T. Jaynes letter to Hugh Everett, 15-October-1957 - Never before published
Directory at Google site with all the links and docs above. Also links to the Washington University in St. Louis copyright form for this doc, Everett's thesis (long and short forms), and Jaynes'
paper (the papers they were discussing in their correspondence). I hope to be adding the final letter in this exchange, Jaynes to Everett, 17-June-1957, within a couple of weeks, and maybe some
documents from the Yahoo Group ETJaynesStudy as well.
For perspective on Jaynes' more recent thoughts on quantum theory:
Jaynes paper on EPR and Bell's Theorem: http://bayes.wustl.edu/etj/articles/cmystery.pdf
Jaynes speculations on quantum theory: http://bayes.wustl.edu/etj/articles/scattering.by.free.pdf
Timeless physics is what you end up with if you take MWI, assume the universe is a standing wave, and remove the extraneous variables. From what I understand, if you take a standing wave and add its
time-reversed version, you end up (for the most part) with a standing wave that uses only real numbers. The problem with this is that the universe isn't quite time-symmetric.
If I ignore that complex numbers were ever used in quantum physics, it seems unlikely that complex numbers are the correct solution. Is there another one? Should I be reversing charge and parity as
well as time when I make the real-only standing wave?
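For what it's worth, here is one way to make the real-only claim concrete, assuming a single stationary mode with a real spatial profile \phi(x) (my gloss, not the poster's derivation):

    \psi(x,t) = \phi(x)\, e^{-iEt/\hbar}, \qquad \psi(x,-t) = \phi(x)\, e^{+iEt/\hbar}

    \psi(x,t) + \psi(x,-t) = 2\,\phi(x)\cos(Et/\hbar)

The sum is real-valued everywhere, and superpositions of such modes stay real - which is one reading of "a standing wave that only uses real numbers". Time-asymmetric physics breaks the pairing between \psi(x,t) and \psi(x,-t), which is where the question above bites.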
Synopsis: The brain is a quantum computer and the self is a tensor factor in it - or at least, the truth lies more in that direction than in the classical direction - and we won't get Friendly AI
right unless we get the ontology of consciousness right.
Followed by: Does functionalism imply dualism?
Sixteen months ago, I made a post seeking funding for personal research. There was no separate Discussion forum then, and the post was comprehensively downvoted. I did manage to keep going at it,
full-time, for the next sixteen months. Perhaps I'll get to continue; it's for the sake of that possibility that I'll risk another breach of etiquette. You never know who's reading these words and
what resources they have. Also, there has been progress.
I think the best place to start is with what orthonormal said in response to the original post: "I don't think anyone should be funding a Penrose-esque qualia mysterian to study string theory." If I
now took my full agenda to someone out in the real world, they might say: "I don't think it's worth funding a study of 'the ontological problem of consciousness in the context of Friendly AI'."
That's my dilemma. The pure scientists who might be interested in basic conceptual progress are not engaged with the race towards technological singularity, and the apocalyptic AI activists gathered
in this place are trying to fit consciousness into an ontology that doesn't have room for it. In the end, if I have to choose between working on conventional topics in Friendly AI, and on the
ontology of quantum mind theories, then I have to choose the latter, because we need to get the ontology of consciousness right, and it's possible that a breakthrough could occur in the world outside
the FAI-aware subculture and filter through; but as things stand, the truth about consciousness would never be discovered by employing the methods and assumptions that prevail inside the FAI community itself.
Perhaps I should pause to spell out why the nature of consciousness matters for Friendly AI. The reason is that the value system of a Friendly AI must make reference to certain states of conscious
beings - e.g. "pain is bad" - so, in order to make correct judgments in real life, at a minimum it must be able to tell which entities are people and which are not. Is an AI a person? Is a digital
copy of a human person, itself a person? Is a human body with a completely prosthetic brain still a person?
I see two ways in which people concerned with FAI hope to answer such questions. One is simply to arrive at the right computational, functionalist definition of personhood. That is, we assume the
paradigm according to which the mind is a computational state machine inhabiting the brain, with states that are coarse-grainings (equivalence classes) of exact microphysical states. Another physical
system which admits the same coarse-graining - which embodies the same state machine at some macroscopic level, even though the microscopic details of its causality are different - is said to embody
another instance of the same mind.
An example of the other way to approach this question is the idea of simulating a group of consciousness theorists for 500 subjective years, until they arrive at a consensus on the nature of
consciousness. I think it's rather unlikely that anyone will ever get to solve FAI-relevant problems in that way. The level of software and hardware power implied by the capacity to do reliable
whole-brain simulations means you're already on the threshold of singularity: if you can simulate whole brains, you can simulate part brains, and you can also modify the parts, optimize them with
genetic algorithms, and put them together into nonhuman AI. Uploads won't come first.
But the idea of explaining consciousness this way, by simulating Daniel Dennett and David Chalmers until they agree, is just a cartoon version of similar but more subtle methods. What these methods
have in common is that they propose to outsource the problem to a computational process using input from cognitive neuroscience. Simulating a whole human being and asking it questions is an extreme
example of this (the simulation is the "computational process", and the brain scan it uses as a model is the "input from cognitive neuroscience"). A more subtle method is to have your baby AI act as
an artificial neuroscientist, using its streamlined general-purpose problem-solving algorithms to make a causal model of a generic human brain, and then to somehow extract from that the criteria which
the human brain uses to identify the correct scope of the concept "person". It's similar to the idea of extrapolated volition, except that we're just extrapolating concepts.
It might sound a lot simpler to just get human neuroscientists to solve these questions. Humans may be individually unreliable, but they have lots of cognitive tricks - heuristics - and they are
capable of agreeing that something is verifiably true, once one of them does stumble on the truth. The main reason one would even consider the extra complication involved in figuring out how to turn
a general-purpose seed AI into an artificial neuroscientist, capable of extracting the essence of the human decision-making cognitive architecture and then reflectively idealizing it according to its
own inherent criteria, is shortage of time: one wishes to develop friendly AI before someone else inadvertently develops unfriendly AI. If we stumble into a situation where a powerful self-enhancing
algorithm with arbitrary utility function has been discovered, it would be desirable to have, ready to go, a schema for the discovery of a friendly utility function via such computational processes.
Now, jumping ahead to a later stage of the argument, I argue that it is extremely likely that distinctively quantum processes play a fundamental role in conscious cognition, because the model of
thought as distributed classical computation actually leads to an outlandish sort of dualism. If we don't concern ourselves with the merits of my argument for the moment, and just ask whether an AI
neuroscientist might somehow overlook the existence of this alleged secret ingredient of the mind, in the course of its studies, I do think it's possible. The obvious noninvasive way to form
state-machine models of human brains is to repeatedly scan them at maximum resolution using fMRI, and to form state-machine models of the individual voxels on the basis of this data, and then to
couple these voxel-models to produce a state-machine model of the whole brain. This is a modeling protocol which assumes that everything which matters is physically localized at the voxel scale or
smaller. Essentially we are asking, is it possible to mistake a quantum computer for a classical computer by performing this sort of analysis? The answer is definitely yes if the analytic process
intrinsically assumes that the object under study is a classical computer. If I try to fit a set of points with a line, there will always be a line of best fit, even if the fit is absolutely
terrible. So yes, one really can describe a protocol for AI neuroscience which would be unable to discover that the brain is quantum in its workings, and which would even produce a specific classical
model on the basis of which it could then attempt conceptual and volitional extrapolation.
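The line-fitting point is worth seeing concretely; a throwaway numpy example (mine, purely illustrative):

    import numpy as np

    x = np.linspace(-1, 1, 50)
    y = np.sin(20 * x)                      # wildly non-linear "data"
    slope, intercept = np.polyfit(x, y, 1)  # least squares always returns *a* line
    resid = y - (slope * x + intercept)
    print(1 - resid.var() / y.var())        # R^2 near 0: the "best fit" exists anyway

Nothing in the fitting procedure itself reports "your model class was wrong"; you have to ask that question separately, which is exactly the worry about a classical-only modeling protocol.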
Clearly you can try to circumvent comparably wrong outcomes, by adding reality checks and second opinions to your protocol for FAI development. At a more down to earth level, these exact mistakes
could also be made by human neuroscientists, for the exact same reasons, so it's not as if we're talking about flaws peculiar to a hypothetical "automated neuroscientist". But I don't want to go on
about this forever. I think I've made the point that wrong assumptions and lax verification can lead to FAI failure. The example of mistaking a quantum computer for a classical computer may even have
a neat illustrative value. But is it plausible that the brain is actually quantum in any significant way? Even more incredibly, is there really a valid a priori argument against functionalism
regarding consciousness - the identification of consciousness with a class of computational process?
I have previously posted (here) about the way that an abstracted conception of reality, coming from scientific theory, can motivate denial that some basic appearance corresponds to reality. A
perennial example is time. I hope we all agree that there is such a thing as the appearance of time, the appearance of change, the appearance of time flowing... But on this very site, there are many
people who believe that reality is actually timeless, and that all these appearances are only appearances; that reality is fundamentally static, but that some of its fixed moments contain an illusion
of dynamism.
The case against functionalism with respect to conscious states is a little more subtle, because it's not being said that consciousness is an illusion; it's just being said that consciousness is some
sort of property of computational states. I argue first that this requires dualism, at least with our current physical ontology, because conscious states are replete with constituents not present in
physical ontology - for example, the "qualia", an exotic name for very straightforward realities like: the shade of green appearing in the banner of this site, the feeling of the wind on your skin,
really every sensation or feeling you ever had. In a world made solely of quantum fields in space, there are no such things; there are just particles and arrangements of particles. The truth of this
ought to be especially clear for color, but it applies equally to everything else.
In order that this post should not be overlong, I will not argue at length here for the proposition that functionalism implies dualism, but shall proceed to the second stage of the argument, which
does not seem to have appeared even in the philosophy literature. If we are going to suppose that minds and their states correspond solely to combinations of mesoscopic information-processing events
like chemical and electrical signals in the brain, then there must be a mapping from possible exact microphysical states of the brain, to the corresponding mental states. Supposing we have a mapping
from mental states to coarse-grained computational states, we now need a further mapping from computational states to exact microphysical states. There will of course be borderline cases. Functional
states are identified by their causal roles, and there will be microphysical states which do not stably and reliably produce one output behavior or the other.
Physicists are used to talking about thermodynamic quantities like pressure and temperature as if they have an independent reality, but objectively they are just nicely behaved averages. The
fundamental reality consists of innumerable particles bouncing off each other; one does not need, and one has no evidence for, the existence of a separate entity, "pressure", which exists in parallel
to the detailed microphysical reality. The idea is somewhat absurd.
Yet this is analogous to the picture implied by a computational philosophy of mind (such as functionalism) applied to an atomistic physical ontology. We do know that the entities which constitute
consciousness - the perceptions, thoughts, memories... which make up an experience - actually exist, and I claim it is also clear that they do not exist in any standard physical ontology. So, unless
we get a very different physical ontology, we must resort to dualism. The mental entities become, inescapably, a new category of beings, distinct from those in physics, but systematically correlated
with them. Except that, if they are being correlated with coarse-grained neurocomputational states which do not have an exact microphysical definition, only a functional definition, then the mental
part of the new combined ontology is fatally vague. It is impossible for fundamental reality to be objectively vague; vagueness is a property of a concept or a definition, a sign that it is
incomplete or that it does not need to be exact. But reality itself is necessarily exact - it is something - and so functionalist dualism cannot be true unless the underdetermination of the
psychophysical correspondence is replaced by something which says for all possible physical states, exactly what mental states (if any) should also exist. And that inherently runs against the
functionalist approach to mind.
Very few people consider themselves functionalists and dualists. Most functionalists think of themselves as materialists, and materialism is a monism. What I have argued is that functionalism, the
existence of consciousness, and the existence of microphysical details as the fundamental physical reality, together imply a peculiar form of dualism in which microphysical states which are
borderline cases with respect to functional roles must all nonetheless be assigned to precisely one computational state or the other, even if no principle tells you how to perform such an assignment.
The dualist will have to suppose that an exact but arbitrary border exists in state space, between the equivalence classes.
This - not just dualism, but a dualism that is necessarily arbitrary in its fine details - is too much for me. If you want to go all Occam-Kolmogorov-Solomonoff about it, you can say that the
information needed to specify those boundaries in state space is so great as to render this whole class of theories of consciousness not worth considering. Fortunately there is an alternative.
Here, in addressing this audience, I may need to undo a little of what you may think you know about quantum mechanics. Of course, the local preference is for the Many Worlds interpretation, and we've
had that discussion many times. One reason Many Worlds has a grip on the imagination is that it looks easy to imagine. Back when there was just one world, we thought of it as particles arranged in
space; now we have many worlds, dizzying in their number and diversity, but each individual world still consists of just particles arranged in space. I'm sure that's how many people think of it.
Among physicists it will be different. Physicists will have some idea of what a wavefunction is, what an operator algebra of observables is, they may even know about path integrals and the various
arcane constructions employed in quantum field theory. Possibly they will understand that the Copenhagen interpretation is not about consciousness collapsing an actually existing wavefunction; it is
a positivistic rationale for focusing only on measurements and not worrying about what happens in between. And perhaps we can all agree that this is inadequate, as a final description of reality.
What I want to say, is that Many Worlds serves the same purpose in many physicists' minds, but is equally inadequate, though from the opposite direction. Copenhagen says the observables are real but
goes misty about unmeasured reality. Many Worlds says the wavefunction is real, but goes misty about exactly how it connects to observed reality. My most frustrating discussions on this topic are
with physicists who are happy to be vague about what a "world" is. It's really not so different to Copenhagen positivism, except that where Copenhagen says "we only ever see measurements, what's the
problem?", Many Worlds says "I say there's an independent reality, what else is left to do?". It is very rare for a Many World theorist to seek an exact idea of what a world is, as you see Robin
Hanson and maybe Eliezer Yudkowsky doing; in that regard, reading the Sequences on this site will give you an unrepresentative idea of the interpretation's status.
One of the characteristic features of quantum mechanics is entanglement. But both Copenhagen, and a Many Worlds which ontologically privileges the position basis (arrangements of particles in space),
still have atomistic ontologies of the sort which will produce the "arbitrary dualism" I just described. Why not seek a quantum ontology in which there are complex natural unities - fundamental
objects which aren't simple - in the form of what we would presently call entangled states? That was the motivation for the quantum monadology described in my other really unpopular post. :-) [
Edit: Go there for a discussion of "the mind as tensor factor", mentioned at the start of this post.] Instead of saying that physical reality is a series of transitions from one arrangement of
particles to the next, say it's a series of transitions from one set of entangled states to the next. Quantum mechanics does not tell us which basis, if any, is ontologically preferred. Reality as a
series of transitions between overall wavefunctions which are partly factorized and partly still entangled is a possible ontology; hopefully readers who really are quantum physicists will get the
gist of what I'm talking about.
I'm going to double back here and revisit the topic of how the world seems to look. Hopefully we agree, not just that there is an appearance of time flowing, but also an appearance of a self. Here I
want to argue just for the bare minimum - that a moment's conscious experience consists of a set of things, events, situations... which are simultaneously "present to" or "in the awareness of"
something - a conscious being - you. I'll argue for this because even this bare minimum is not acknowledged by existing materialist attempts to explain consciousness. I was recently directed to this
brief talk about the idea that there's no "real you". We are given a picture of a graph whose nodes are memories, dispositions, etc., and we are told that the self is like that graph: nodes can be
added, nodes can be removed, it's a purely relational composite without any persistent part. What's missing in that description is that bare minimum notion of a perceiving self. Conscious experience
consists of a subject perceiving objects in certain aspects. Philosophers have discussed for centuries how best to characterize the details of this phenomenological ontology; I think the best was
Edmund Husserl, and I expect his work to be extremely important in interpreting consciousness in terms of a new physical ontology. But if you can't even notice that there's an observer there,
observing all those parts, then you won't get very far.
My favorite slogan for this is due to the other Jaynes, Julian Jaynes. I don't endorse his theory of consciousness at all; but while in a daydream he once said to himself, "Include the knower in the
known". That sums it up perfectly. We know there is a "knower", an experiencing subject. We know this, just as well as we know that reality exists and that time passes. The adoption of ontologies in
which these aspects of reality are regarded as unreal, as appearances only, may be motivated by science, but it's false to the most basic facts there are, and one should show a little more
imagination about what science will say when it's more advanced.
I think I've said almost all of this before. The high point of the argument is that we should look for a physical ontology in which a self exists and is a natural yet complex unity, rather than a
vaguely bounded conglomerate of distinct information-processing events, because the latter leads to one of those unacceptably arbitrary dualisms. If we can find a physical ontology in which the
conscious self can be identified directly with a class of object posited by the theory, we can even get away from dualism, because physical theories are mathematical and formal and make few
commitments about the "inherent qualities" of things, just about their causal interactions. If we can find a physical object which is absolutely isomorphic to a conscious self, then we can turn the
isomorphism into an identity, and the dualism goes away. We can't do that with a functionalist theory of consciousness, because it's a many-to-one mapping between physical and mental, not an isomorphism.
So, I've said it all before; what's new? What have I accomplished during these last sixteen months? Mostly, I learned a lot of physics. I did not originally intend to get into the details of particle
physics - I thought I'd just study the ontology of, say, string theory, and then use that to think about the problem. But one thing led to another, and in particular I made progress by taking ideas
that were slightly on the fringe, and trying to embed them within an orthodox framework. It was a great way to learn, and some of those fringe ideas may even turn out to be correct. It's now
abundantly clear to me that I really could become a career physicist, working specifically on fundamental theory. I might even have to do that, it may be the best option for a day job. But what it
means for the investigations detailed in this essay, is that I don't need to skip over any details of the fundamental physics. I'll be concerned with many-body interactions of biopolymer electrons in
vivo, not particles in a collider, but an electron is still an electron, an elementary particle, and if I hope to identify the conscious state of the quantum self with certain special states from a
many-electron Hilbert space, I should want to understand that Hilbert space in the deepest way available.
My only peer-reviewed publication, from many years ago, picked out pathways in the microtubule which, we speculated, might be suitable for mobile electrons. I had nothing to do with noticing those
pathways; my contribution was the speculation about what sort of physical processes such pathways might underpin. Something I did notice, but never wrote about, was the unusual similarity (so I
thought) between the microtubule's structure, and a model of quantum computation due to the topologist Michael Freedman: a hexagonal lattice of qubits, in which entanglement is protected against
decoherence by being encoded in topological degrees of freedom. It seems clear that performing an ontological analysis of a topologically protected coherent quantum system, in the context of some
comprehensive ontology ("interpretation") of quantum mechanics, is a good idea. I'm not claiming to know, by the way, that the microtubule is the locus of quantum consciousness; there are a number of
possibilities; but the microtubule has been studied for many years now and there's a big literature of models... a few of which might even have biophysical plausibility.
As for the interpretation of quantum mechanics itself, these developments are highly technical, but revolutionary. A well-known, well-studied quantum field theory turns out to have a bizarre new
nonlocal formulation in which collections of particles seem to be replaced by polytopes in twistor space. Methods pioneered via purely mathematical studies of this theory are already being used for
real-world calculations in QCD (the theory of quarks and gluons), and I expect this new ontology of "reality as a complex of twistor polytopes" to carry across as well. I don't know which quantum
interpretation will win the battle now, but this is new information, of utterly fundamental significance. It is precisely the sort of altered holistic viewpoint that I was groping towards when I
spoke about quantum monads constituted by entanglement. So I think things are looking good, just on the pure physics side. The real job remains to show that there's such a thing as quantum
neurobiology, and to connect it to something like Husserlian transcendental phenomenology of the self via the new quantum formalism.
It's when we reach a level of understanding like that, that we will truly be ready to tackle the relationship between consciousness and the new world of intelligent autonomous computation. I don't
deny the enormous helpfulness of the computational perspective in understanding unconscious "thought" and information processing. And even conscious states are still states, so you can surely make a
state-machine model of the causality of a conscious being. It's just that the reality of how consciousness, computation, and fundamental ontology are connected, is bound to be a whole lot deeper than
just a stack of virtual machines in the brain. We will have to fight our way to a new perspective which subsumes and transcends the computational picture of reality as a set of causally coupled
black-box state machines. It should still be possible to "port" most of the thinking about Friendly AI to this new ontology; but the differences, what's new, are liable to be crucial to success.
Fortunately, it seems that new perspectives are still possible; we haven't reached Kantian cognitive closure, with no more ontological progress open to us. On the contrary, there are still lines of
investigation that we've hardly begun to follow.
The subject has already been raised in this thread, but in a clumsy fashion. So here is a fresh new thread, where we can discuss, calmly and objectively, the pros and cons of the "Oxford" version of
the Many Worlds interpretation of quantum mechanics.
This version of MWI is distinguished by two propositions. First, there is no definite number of "worlds" or "branches". They have a fuzzy, vague, approximate, definition-dependent existence. Second,
the probability law of quantum mechanics (the Born rule) is to be obtained, not by counting the frequencies of events in the multiverse, but by an analysis of rational behavior in the multiverse.
Normally, a prescription for rational behavior is obtained by maximizing expected utility, a quantity which is calculated by averaging "probability x utility" for each possible outcome of an action.
In the Oxford school's "decision-theoretic" derivation of the Born rule, we somehow start with a ranking of actions that is deemed rational, then we "divide out" by the utilities, and obtain
probabilities that were implicit in the original ranking.
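In the simplest toy case, the forward direction (probabilities and utilities in, ranking out) and the "divide out by the utilities" direction look like this - a hedged sketch of the general idea only, not the Oxford school's actual derivation, which works inside quantum mechanics:

    def expected_utility(probs, utils):
        return sum(p * u for p, u in zip(probs, utils))

    # Forward: probabilities + utilities -> a ranking of gambles.
    u_heads, u_tails = 10.0, 2.0
    print(expected_utility([0.5, 0.5], [u_heads, u_tails]))   # 6.0

    # Backward: if the rational ranking says the agent is exactly indifferent
    # between the gamble and a sure payoff c, a probability is implicit:
    c = 6.0
    print((c - u_tails) / (u_heads - u_tails))                # 0.5

The decision-theoretic program runs the backward direction, with quantum amplitudes in place of the coin.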
I reject the two propositions. "Worlds" or "branches" can't be vague if they are to correspond to observed reality, because vagueness results from an object being dependent on observer definition,
and the local portion of reality does not owe its existence to how we define anything; and the upside-down decision-theoretic derivation, if it ever works, must implicitly smuggle in the premises of
probability theory in order to obtain its original rationality ranking.
Some references:
"Decoherence and Ontology: or, How I Learned to Stop Worrying and Love FAPP" by David Wallace. In this paper, Wallace says, for example, that the question "how many branches are there?" "does not...
make sense", that the question "how many branches are there in which it is sunny?" is "a question which has no answer", "it is a non-question to ask how many [worlds]", etc.
"Quantum Probability from Decision Theory?" by Barnum et al. This is a rebuttal of the original argument (due to David Deutsch) that the Born rule can be justified by an analysis of multiverse
Note: This post assumes that the Oxford version of Many Worlds is wrong, and speculates as to why this isn't obvious. For a discussion of the hypothesis itself, see Problems of the Deutsch-Wallace
version of Many Worlds.
smk asks how many worlds are produced in a quantum process where the outcomes have unequal probabilities; Emile says there's no exact answer, just like there's no exact answer for how many ink blots
are in the messy picture; Tetronian says this analogy is a great way to demonstrate what a "wrong question" is; Emile has (at this writing) 9 upvotes, and Tetronian has 7.
My thesis is that Emile has instead provided an example of how to dismiss a question and thereby fool oneself; Tetronian provides an example of treating an epistemically destructive technique of
dismissal as epistemically virtuous and fruitful; and the upvotes show that this isn't just their problem. [edit: Emile and Tetronian respond.]
I am as tired as anyone of the debate over Many Worlds. I don't expect the general climate of opinion on this site to change except as a result of new intellectual developments in the larger world of
physics and philosophy of physics, which is where the question will be decided anyway. But the mission of Less Wrong is supposed to be the refinement of rationality, and so perhaps this "case study"
is of interest, not just as another opportunity to argue over the interpretation of quantum mechanics, but as an opportunity to dissect a little bit of irrationality that is not only playing out here
and now, but which evidently has a base of support.
The question is not just, what's wrong with the argument, but also, how did it get that base of support? How was a situation created where one person says something irrational (or foolish, or however
the problem is best understood), and a lot of other people nod in agreement and say, that's an excellent example of how to think?
On this occasion, my quarrel is not with the Many Worlds interpretation as such; it is with the version of Many Worlds which says there's no actual number of worlds. Elsewhere in the thread, someone
says there are uncountably many worlds, and someone else says there are two worlds. At least those are meaningful answers (although the advocate of "two worlds" as the answer, then goes on to say
that one world is "stronger" than the other, which is meaningless).
But the proposition that there is no definite number of worlds, is as foolish and self-contradictory as any of those other contortions from the history of thought that rationalists and advocates of
common sense like to mock or boggle at. At times I have wondered how to place Less Wrong in the history of thought; well, this is one way to do it - it can have its own chapter in the history of
intellectual folly; it can be known by its mistakes.
Then again, this "mistake" is not original to Less Wrong. It appears to be one of the defining ideas of the Oxford-based approach to Many Worlds associated with David Deutsch and David Wallace; the
other defining idea being the proposal to derive probabilities from rationality, rather than vice versa. (I refer to the attempt to derive the Born rule from arguments about how to behave rationally
in the multiverse.) The Oxford version of MWI seems to be very popular among thoughtful non-physicist advocates of MWI - even though I would regard both its defining ideas as nonsense - and it may be
that its ideas get a pass here, partly because of their social status. That is, an important faction of LW opinion believes that Many Worlds is the explanation of quantum mechanics, and the Oxford
school of MWI has high status and high visibility within the world of MWI advocacy, and so its ideas will receive approbation without much examination or even much understanding, because of the
social and psychological mechanisms which incline people to agree with, defend, and laud their favorite authorities, even if they don't really understand what these authorities are saying or why they
are saying it.
However, it is undoubtedly the case that many of the LW readers who believe there's no definite number of worlds, believe this because the idea genuinely makes sense to them. They aren't just
stringing together words whose meaning isn't known, like a Taliban who recites the Quran without knowing a word of Arabic; they've actually thought about this themselves; they have gone through some
subjective process as a result of which they have consciously adopted this opinion. So from the perspective of analyzing how it is that people come to hold absurd-sounding views, this should be good
news. It means that we're dealing with a genuine failure to reason properly, as opposed to a simple matter of reciting slogans or affirming allegiance to a view on the basis of something other than understanding.
At a guess, the thought process involved is very simple. These people have thought about the wavefunctions that appear in quantum mechanics, at whatever level of technical detail they can muster;
they have decided that the components or substructures of these wavefunctions which might be identified as "worlds" or "branches" are clearly approximate entities whose definition is somewhat
arbitrary or subject to convention; and so they have concluded that there's no definite number of worlds in the wavefunction. And the failure in their thinking occurs when they don't take the next
step and say, is this at all consistent with reality? That is, if a quantum world is something whose existence is fuzzy and which doesn't even have a definite multiplicity - that is, we can't even
say if there's one, two, or many of them - if those are the properties of a quantum world, then is it possible for the real world to be one of those? It's the failure to ask that last question, and
really think about it, which must be the oversight allowing the nonsense-doctrine of "no definite number of worlds" to gain a foothold in the minds of otherwise rational people.
If this diagnosis is correct, then at some level it's a case of "treating the map as the territory" syndrome. A particular conception of the quantum-mechanical wavefunction is providing the "map" of
reality, and the individual thinker is perhaps making correct statements about what's on their map, but they are failing to check the properties of the map against the properties of the territory. In
this case, the property of reality that falsifies the map is, the fact that it definitely exists, or perhaps the corollary of that fact, that something which definitely exists definitely exists at
least once, and therefore exists with a definite, objective multiplicity.
Trying to go further in the diagnosis, I can identify a few cognitive tendencies which may be contributing. First is the phenomenon of bundled assumptions which have never been made distinct and
questioned separately. I suppose that in a few people's heads, there's a rapid movement from "science (or materialism) is correct" to "quantum mechanics is correct" to "Many Worlds is correct" to
"the Oxford school of MWI is correct". If you are used to encountering all of those ideas together, it may take a while to realize that they are not linked out of logical necessity, but just
contingently, by the narrowness of your own experience.
Second, it may seem that "no definite number of worlds" makes sense to an individual, because when they test their own worldview for semantic coherence, logical consistency, or empirical adequacy, it
seems to pass. In the case of "no-collapse" or "no-splitting" versions of Many Worlds, it seems that it often passes the subjective making-sense test, because the individual is actually relying on
ingredients borrowed from the Copenhagen interpretation. A semi-technical example would be the coefficients of a reduced density matrix. In the Copenhagen interpretation, they are probabilities.
Because they have the mathematical attributes of probabilities (by this I just mean that they lie between 0 and 1), and because they can be obtained by strictly mathematical manipulations of the
quantities composing the wavefunction, Many Worlds advocates tend to treat these quantities as inherently being probabilities, and use their "existence" as a way to obtain the Born probability rule
from the ontology of "wavefunction yes, wavefunction collapse no". But just because something is a real number between 0 and 1, doesn't yet explain how it manages to be a probability. In particular,
I would maintain that if you have a multiverse theory, in which all possibilities are actual, then a probability must refer to a frequency. The probability of an event in the multiverse is simply how
often it occurs in the multiverse. And clearly, just having the number 0.5 associated with a particular multiverse branch is not yet the same thing as showing that the events in that branch occur
half the time.
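The reduced-density-matrix point can be made concrete in a few lines of numpy (my illustration). For a maximally entangled pair, the diagonal coefficients come out as numbers in [0,1] by pure linear algebra; whether they are frequencies of anything is the further question:

    import numpy as np

    # |psi> = sqrt(1/2)|00> + sqrt(1/2)|11>, stored as psi[a, b]
    psi = np.zeros((2, 2), dtype=complex)
    psi[0, 0] = psi[1, 1] = np.sqrt(0.5)

    # reduced density matrix of subsystem A: trace out B
    rho_A = np.einsum('ab,cb->ac', psi, psi.conj())
    print(np.real(np.diag(rho_A)))   # [0.5 0.5] - probability-shaped numbers

The mathematics hands you 0.5 and 0.5; it does not by itself say "and this branch occurs half the time".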
I don't have a good name for this phenomenon, but we could call it "borrowed support", in which a belief system receives support from considerations which aren't legitimately its own to claim. (Ayn
Rand apparently talked about a similar notion of "borrowed concepts".)
Third, there is a possibility among people who have a capacity for highly abstract thought, to adopt an ideology, ontology, or "theory of everything" which is only expressed in those abstract terms,
and to then treat that theory as the whole of reality, in a way that reifies the abstractions. This is a highly specific form of treating the map as the territory, peculiar to abstract thinkers. When
someone says that reality is made of numbers, or made of computations, this is at work. In the case at hand, we're talking about a theory of physics, but the ontology of that theory is incompatible
with the definiteness of one's own existence. My guess is that the main psychological factor at work here is intoxication with the feeling that one understands reality totally and in its essence. The
universe has bowed to the imperial ego; one may not literally direct the stars in their courses, but one has known the essence of things. Combine that intoxication, with "borrowed support" and with
the simple failure to think hard enough about where on the map the imperial ego itself might be located, and maybe you have a comprehensive explanation of how people manage to believe theories of
reality which are flatly inconsistent with the most basic features of subjective experience.
I should also say something about Emile's example of the ink blots. I find it rather superficial to just say "there's no definite number of blots". To say that the number of blots depends on
definition is a lot closer to being true, but that undermines the argument, because that opens the possibility that there is a right definition of "world", and many wrong definitions, and that the
true number of worlds is just the number of worlds according to the right definition.
Emile's picture can be used for the opposite purpose. All we have to do is to scrutinize, more closely, what it actually is. It's a JPEG that is 314 pixels by 410 pixels in size. Each of those pixels
will have an exact color coding. So clearly we can be entirely objective in the way we approach this question; all we have to do is be precise in our concepts, and engage with the genuine details of
the object under discussion. Presumably the image is a scan of a physical object, but even in that case, we can be precise - it's made of atoms, they are particular atoms, we can make objective
distinctions on the basis of contiguity and bonding between these atoms, and so the question will have an objective answer, if we bother to be sufficiently precise. The same goes for "worlds" or
"branches" in a wavefunction. And the truly pernicious thing about this version of Many Worlds is that it prevents such inquiry. The ideology that tolerates vagueness about worlds serves to protect
the proposed ontology from necessary scrutiny.
The same may be said, on a broader scale, of the practice of "dissolving a wrong question". That is a gambit which should be used sparingly and cautiously, because it easily serves to instead justify
the dismissal of a legitimate question. A community trained to dismiss questions may never even notice the gaping holes in its belief system, because the lines of inquiry which lead towards those
holes are already dismissed as invalid, undefined, unnecessary. smk came to this topic fresh, and without a head cluttered with ideas about what questions are legitimate and what questions are
illegitimate, and as a result managed to ask something which more knowledgeable people had already prematurely dismissed from their own minds.
How many universes "branch off" from a "quantum event", and in how many of them is the cat dead vs alive, and what about non-50/50 scenarios, and please answer so that a physics dummy can maybe kind
of understand?
(Is it just 1 with the live cat and 1 with the dead one?)
From a recent paper that is getting non-trivial attention...
"Quantum states are the key mathematical objects in quantum theory. It is therefore surprising that physicists have been unable to agree on what a quantum state represents. There are at least two
opposing schools of thought, each almost as old as quantum theory itself. One is that a pure state is a physical property of the system, much like position and momentum in classical mechanics.
Another is that even a pure state has only a statistical significance, akin to a probability distribution in statistical mechanics. Here we show that, given only very mild assumptions, the
statistical interpretation of the quantum state is inconsistent with the predictions of quantum theory. This result holds even in the presence of small amounts of experimental noise, and is
therefore amenable to experimental test using present or near-future technology. If the predictions of quantum theory are confirmed, such a test would show that distinct quantum states must
correspond to physically distinct states of reality."
From my understanding, the result works by showing how, if a quantum state is determined only statistically by some true physical state of the universe, then it is possible for us to construct clever
quantum measurements that put statistical probability on outcomes for which there is literally zero quantum amplitude, which is a contradiction of Born's rule. The assumptions required are very mild,
and if this is confirmed in experiment it would give a lot of justification for a physicalist / realist interpretation of the Many Worlds point of view.
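As I understand the paper's two-qubit case (a hedged sketch; the vectors below are my transcription of the construction for preparations built from |0> and |+>), the trick is an entangled four-outcome measurement in which each outcome has exactly zero amplitude on one of the four product preparations:

    import numpy as np

    k0 = np.array([1, 0], dtype=complex)
    k1 = np.array([0, 1], dtype=complex)
    kp = (k0 + k1) / np.sqrt(2)   # |+>
    km = (k0 - k1) / np.sqrt(2)   # |->

    preps = [np.kron(a, b) for a in (k0, kp) for b in (k0, kp)]
    M = [  # the four entangled measurement vectors
        (np.kron(k0, k1) + np.kron(k1, k0)) / np.sqrt(2),
        (np.kron(k0, km) + np.kron(k1, kp)) / np.sqrt(2),
        (np.kron(kp, k1) + np.kron(km, k0)) / np.sqrt(2),
        (np.kron(kp, km) + np.kron(km, kp)) / np.sqrt(2),
    ]

    G = np.array([[np.vdot(a, b) for b in M] for a in M])
    assert np.allclose(G, np.eye(4))          # an orthonormal measurement basis

    for i, prep in enumerate(preps):
        print(i, abs(np.vdot(M[i], prep)))    # all zero: outcome i never
                                              # occurs for preparation i

If a single underlying physical state were compatible with two distinct preparations, some outcome would have to receive probability where quantum theory assigns zero amplitude - the contradiction the authors exploit.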
More from the paper:
"We conclude by outlining some consequences of the result. First, one motivation for the statistical view is the obvious parallel between the quantum process of instantaneous wave function
collapse, and the (entirely non-mysterious) classical procedure of updating a probability distribution when new information is acquired. If the quantum state is a physical property of a system --
as it must be if one accepts the assumptions above -- then the quantum collapse must correspond to a real physical process. This is especially mysterious when two entangled systems are at
separate locations, and measurement of one leads to an instantaneous collapse of the quantum state of the other.
In some versions of quantum theory, on the other hand, there is no collapse of the quantum state. In this case, after a measurement takes place, the joint quantum state of the system and
measuring apparatus will contain a component corresponding to each possible macroscopic measurement outcome. This is unproblematic if the quantum state merely reflects a lack of information about
which outcome occurred. But if the quantum state is a physical property of the system and apparatus, it is hard to avoid the conclusion that each macroscopically different component has a direct
counterpart in reality.
On a related, but more abstract note, the quantum state has the striking property of being an exponentially complicated object. Specifically, the number of real parameters needed to specify a
quantum state is exponential in the number of systems n. This has a consequence for classical simulation of quantum systems. If a simulation is constrained by our assumptions -- that is, if it
must store in memory a state for a quantum system, with independent preparations assigned uncorrelated states -- then it will need an amount of memory which is exponential in the number of
quantum systems.
For these reasons and others, many will continue to hold that the quantum state is not a real object. We have shown that this is only possible if one or more of the assumptions above is dropped.
More radical approaches[14] are careful to avoid associating quantum systems with any physical properties at all. The alternative is to seek physically well motivated reasons why the other two
assumptions might fail."
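The exponential-memory point from the quote is easy to check directly (my arithmetic, complex128 amplitudes assumed):

    # memory needed to store the full state vector of n qubits
    for n in (10, 20, 30, 40):
        nbytes = (2 ** n) * 16               # 16 bytes per complex amplitude
        print(f"n={n:2d}: 2^{n} amplitudes, {nbytes / 2**30:.6f} GiB")

n=30 already needs 16 GiB, and n=40 about 16 TiB - the blow-up the authors say any simulation constrained by their assumptions must pay.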
On a related note, in one of David Deutsch's original arguments for why Many Worlds was straightforwardly obvious from quantum theory, he mentions Shor's quantum factoring algorithm. Essentially he
asks any opponent of Many Worlds to give a real account, not just a parochial calculational account, of why the algorithm works when it is using exponentially more resources than could possibly be
classically available to it. The way he put it was: "where was the number factored?"
I was never convinced that regular quantum computation could really be used to convince someone of Many Worlds who did not already believe it, except possibly for bounded-error quantum computation
where one must accept the fact that there are different worlds to find oneself in after the computation, namely some of the worlds where the computation had an error due to the algorithm itself
(or else one must explain the measurement problem in some different way as per usual). But I think that in light of the paper mentioned above, Deutsch's "where was the number factored" argument may
deserve more credence.
Added: Scott Aaronson discusses the paper here (the comments are also interesting).
I've read through the Quantum Physics sequence and feel that I managed to understand most of it. But now it seems to me that the Double Slit and Schrodinger's cat experiments are not described quite
correctly. So I'd like to try to re-state them and see if anybody can correct any misunderstandings I likely have.
With the Double Slit experiment we usually hear it said that the particle travels through both slits and then we see interference bands. The more precise explanation is that there is a complex-valued
amplitude flow corresponding to the particle moving through the left slit and another for the right slit. But if we could manage to magically "freeze time" then we would find ourselves in one
position in configuration space where the particle is unambiguously in one position (let's say the left slit). Now any observer will have no way of knowing this at the time, and if they did detect
the particle's position in any way it would change the configuration and there would be no interference banding.
But the particle really is going through the left slit right now (as far as we are concerned), simply because that is what it means to be at some point in configuration space. The particle is going
through the right slit for other versions of ourselves nearby in configuration space.
The amplitude flow then continues to the point in configuration space where it arrives at the back screen, and it is joined by the amplitude flow via the right slit to the same region of
configuration space, causing an interference pattern. So this present moment in time now has more than one past, now we can genuinely say that it did go through both. Both pasts are equally valid.
The branching tree of amplitude flow has turned into a graph.
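A small numerical sketch of "more than one past contributing to the present" (mine, a crude far-field toy, not from the sequence):

    import numpy as np

    k = 2 * np.pi          # wavelength 1
    slits = (-2.0, 2.0)    # slit positions
    L = 100.0              # distance to the back screen

    for y in np.linspace(-30, 30, 7):       # a few points on the screen
        paths = [np.hypot(L, y - s) for s in slits]
        both = sum(np.exp(1j * k * r) for r in paths)   # two amplitude flows
        left = np.exp(1j * k * paths[0])                # right slit blocked
        print(f"y={y:+6.1f}  |both|^2={abs(both)**2:5.2f}  |left|^2={abs(left)**2:5.2f}")

With both flows added, the squared magnitude swings between 0 and 4 across the screen; with one slit blocked it is a featureless 1 everywhere. The fringes exist only because two pasts flow into the same present configuration.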
So far so good I hope (or perhaps I'm about to find out I'm completely wrong). Now for the cat.
I read recently that experimenters have managed to keep two clouds of caesium atoms in a coherent state for an hour. So what would this look like if we could scale it up to a cat?
The problem with this experiment is that a cat is a very complex system and the two particular types of states we are interested in (i.e. dead or alive) are very far apart in configuration space. It
may help to imagine that we could rearrange configuration space a little to put all the points labelled "alive" on the left and all the dead points on the right of some line. If we want to make the
gross simplification that we can treat the cat as a very simple system then this means that "alive" points are very close to the "dead" points in configuration space. In particular it means that
there are significant amplitude flows between the two sets of points, that is, significant flows across the line in both directions. Of course such flows happen all the time, but the key point here
is that the direction of the complex flow vectors would be aligned so as to cause a significant change in the magnitude of the final values in configuration space instead of tending to cancel out.
This means that as time proceeds the cat can move from alive to dead to alive to dead again, in the sense that in any point of configuration space that we find ourselves will contain an amplitude
contribution both from alive states and from dead states. In other words two different pasts are contributing to the present.
So sometime after the experiment starts we magically stop the clock on the wall of the universe. Since we are at a particular point the cat is either alive or dead, let's say dead. So the cat is not
alive and dead at the same time because we find ourselves at a single point in configuration space. There are also other points in the configuration space containing another instance of ourselves
along with an alive cat. But since we have not entangled anything else in the universe with the cat/box system as time ticks along the cat would be buzzing around from dead to alive and back to dead
again. When we open the box things entangle and we diverge far apart in configuration space, and now the cat remains completely dead or alive, at least for the point in configuration space we find
ourselves in.
How to sum up? Cats and photons are never dead or alive or going left or right at the same moment from the point of view of one observer somewhere in configuration space, but the present has an
amplitude contribution from multiple pasts.
If you're still reading this then thanks for hanging in there. I know there's some more detail about observations only being from a set of eigenvalues and so forth, but can I get some comments about
whether I'm on the right track or way off base?
Suppose you believe in the Copenhagen interpretation of quantum mechanics. Schroedinger puts his cat in a box, with a device that has a 50% chance of releasing a deathly poisonous gas. He will then
open the box, and observe a live or dead cat, collapsing that waveform.
But Schroedinger's cat is lazy, and spends most of its time sleeping. Schroedinger is a pessimist, or else an optimist who hates cats; and so he mistakes a sleeping cat for a dead cat with
probability P(M) > 0, but never mistakes a dead cat for a living cat.
So if the cat is dead with probability P(D) >= .5, Schroedinger observes a dead cat with probability P(D) + P(M)(1-P(D)).
If observing a dead cat causes the waveform to collapse such that the cat is dead, then P(D) = P(D) + P(M)(1-P(D)). This is possible only if P(D) = 1.
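The fixed-point claim is easy to verify numerically (a minimal sketch of the algebra above):

    # if observing "dead" collapses the cat to dead, each observation updates
    # P(D) to P(D) + P(M) * (1 - P(D)); iterate and watch it pin to 1.
    p_d, p_m = 0.5, 0.1
    for _ in range(200):
        p_d = p_d + p_m * (1 - p_d)
    print(f"{p_d:.6f}")   # 1.000000 - the only fixed point when P(M) > 0

Since P(M)(1 - P(D)) = 0 forces P(D) = 1 whenever P(M) > 0, a pessimistic observer's mistakes would, on this naive reading of collapse, force the cat to be dead with certainty.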
I've written a prior post about how I think that the Everett branching factor of reality dominates that of any plausible simulation, whether the latter is run on a Von Neumann machine, on a quantum
machine, or on some hybrid; and thus the probability and utility weight that should be assigned to simulations in general is negligible. I also argued that the fact that we live in an apparently
quantum-branching world could be construed as weak anthropic evidence for this idea. My prior post was down-modded into oblivion for reasons that are not relevant here (style, etc.) If I were to
replace this text you're reading with a version of that idea which was more fully-argued, but still stylistically-neutral (unlike my prior post), would people be interested?
Most people (not all, but most) are reasonably comfortable with infinity as an ultimate (lack of) limit. For example, cosmological theories that suggest the universe is infinitely large and/or
infinitely old, are not strongly disbelieved a priori.
By contrast, most people are fairly uncomfortable with manifest infinity, actual infinite quantities showing up in physical objects. For example, we tend to be skeptical of theories that would allow
infinite amounts of matter, energy or computation in a finite volume of spacetime.
I was planning to post this in the main area, but my thoughts are significantly less well-formed than I thought they were. Anyway, I hope that interested parties find it nonetheless.
In the Carnegie Mellon 2011 Buhl Lecture, Scott Aaronson gives a remarkably clear and concise review of P, NP, other fundamentals in complexity theory, and their quantum extensions. In particular,
beginning around the 46 minute mark, a sequence of examples is given in which the intuition from computability theory would have accurately predicted physical results (and in some cases this actually
happened, so it wasn't just hindsight bias).
In previous posts we have learned about Einstein's arrogance and Einstein's speed. This pattern of results flowing from computational complexity to physical predictions seems odd to me in that
context. Here we are using physical computers to derive abstractions about the limits of computation, and from there we are successfully able to intuit limits of physical computation (e.g. brains
computing abstractions of the fundamental limits of brains computing abstractions...) At what point do we hit the stage where individual scientists can rationally know that results from computational
complexity theory are more fundamental than traditional physics? It seems like a paradox wholly different than Einstein rationally knowing (from examining bits of theory-space evidence rather than
traditional-experiment-space evidence) that relativity would hold true. In what sort of evidence space can physical brain computation yielding complexity limits count as bits of evidence factoring
into expected physical outcomes (such as the exponential smallness of the spectral gap of NP-hard-Hamiltonians from the quantum adiabatic theorem)?
Maybe some contributors more well-versed in complexity theory can steer this in a useful direction.
21 videos, which cover subjects including the basic model of quantum computing, entanglement, superdense coding, and quantum teleportation.
To work through the videos you need to be comfortable with basic linear algebra, and with assimilating new mathematical terminology. If you’re not, working through the videos will be arduous at
best! Apart from that background, the main prerequisite is determination, and the willingness to work more than once over material you don’t fully understand.
In particular, you don’t need a background in quantum mechanics to follow the videos.
The videos are short, from 5-15 minutes, and each video focuses on explaining one main concept from quantum mechanics or quantum computing. In taking this approach I was inspired by the excellent
Khan Academy.
Link: michaelnielsen.org/blog/quantum-computing-for-the-determined/
Author: Michael Nielsen
The basics
Superdense coding
Quantum teleportation
The postulates of quantum mechanics (TBC)
I am reading through the sequence on quantum physics and have had some questions which I am sure have been thought about by far more qualified people. If you have any useful comments or links about
these ideas, please share.
Most of the strongest resistance to ideas about rationalism that I encounter comes not from people with religious beliefs per se, but usually from mathematicians or philosophers who want to assert
arguments about the limits of knowledge, the fidelity of sensory perception as a means for gaining knowledge, and various (what I consider to be) pathological examples (such as the zombie example).
Among other things, people tend to reduce the argument to the existence of proper names a la Wittgenstein and then go on to assert that the meaning of mathematics or mathematical proofs constitutes
something which is fundamentally not part of the physical world.
As I am reading the quantum physics sequence (keep in mind that I am not a physicist; I am an applied mathematician and statistician and so the mathematical framework of Hilbert spaces and amplitude
configurations makes vastly more sense to me than billiard balls or waves, yet connecting it to reality is still very hard for me) I am struck by the thought that all thoughts are themselves
fundamentally just amplitude configurations, and by extension, all claims about knowledge about things are also statements about amplitude configurations. For example, my view is that the color red
does not exist in and of itself but rather that the experience of the color red is a statement about common configurations of particle amplitudes. When I say "that sign is red", one could unpack this
into a detailed statement about statistical properties of configurations of particles in my brain.
The same reasoning seems to apply just as well to something like group theory. States of knowledge about the Sylow theorems, just as an example, would be properties of particle amplitude
configurations in a brain. The Sylow theorems are not separately existing entities which are of themselves "true" in any sense.
Perhaps I am way off base in thinking this way. Can any philosophers of the mind point me in the right direction to read more about this?
EDIT at Karma -5: Could the next "good citizen" to vote this down leave me a comment as to why it is getting voted down? And if other "good citizens" pile on after that, either upvote that comment
or add another comment giving your different reason.
Original Post:
Questions about the computability of various physical laws recently had me thinking: "well of course every real physical law is computable or else the universe couldn't function." That is to say
that in order for the time-evolution of anything in the universe to proceed "correctly," the physical processes themselves must be able, in real time, to keep up with the complexity of their
actual evolution. This seems to me a proof that every real physical process is computable by SOME sort of real computer; in the degenerate case, that real computer is simply an actual physical model
of the process itself: create that model, observe whichever features of its time-evolution you are trying to compute, and there you have your computer.
Then if we have a physical law whose use in predicting time evolution is provably uncomputable, either we know that this physical law is NOT the only law that might be formulated to describe what it
is purporting to describe, or that our theory of computation is incomplete. In some sense what I am saying is consistent with the idea that quantum computing can quickly collapse down to plausibly
tractable levels the time it takes to compute some things which, as classical computation problems, blow up. This would be a good indication that quantum is an important theory about the universe,
that it not only explains a bunch of things that happen in the universe, but also explains how the universe can have those things happen in real-time without making mistakes.
What I am wondering is, where does this kind of consideration break with traditional computability theory? Is traditional computability theory limited to what Turing machines can do, while perhaps
it is straightforward to prove that the operation of this Universe requires computation beyond what Turing machines can do? Is traditional computability theory limited to digital representations
whereas the degenerate build-it-and-measure-it computer is what has been known as an analog computer? Is there somehow a level or measure of artificiality which must be present to call something a
computer, which rules out such brute-force approaches as build-it-and-measure-it?
At least one imagining of the singularity is absorbing all the resources of the universe into some maximal intelligence, the (possibly asymptotic) endpoint of intelligences designing greater
intelligences until something makes them stop. But the universe is already just humming along like clockwork, with quantum and possibly even subtler-than-quantum gears turning in real time. What
does the singularity add to this picture that isn't already there?
EDIT: 1:19 PM PST 22 December 2010 I completed this post. I didn't realize an uncompleted version was already posted earlier.
I wanted to read the quantum sequence because I've been intrigued by the nature of measurement throughout my physics career. I was happy to see that articles such as joint configuration use beams of
photons and half and fully silvered mirrors to make their points. I spent years in graduate school working with a two-path interferometer with one moving mirror which we used to make spectrometric
measurements on materials and detectors. I studied the quantization of the electromagnetic field, reading and rereading books such as Yariv's Quantum Electronics and Marcuse's Principles of Quantum
Electronics. I developed with my friend David Woody a photodetector theory of extremely sensitive heterodyne mixers which explained the mysterious noise floor of these devices in terms of the shot
noise from detecting the stream of photons which are the "Local Oscillator" of that mixer.
My point being that I AM a physicist, and I am even a physicist who has worked with the kinds of configurations shown in this blog post, both experimentally and theoretically. I did all this work 20
years ago and have been away from any kind of Quantum optics stuff for 15 years, but I don't think that is what is holding me back here.
So when I read and reread the joint configuration blog post, I am concerned that it makes absolutely no sense to me. I am hoping that someone out there DOES understand this article and can help me
understand it. Someone who understands the more traditional kinds of interferometer configurations such as that described for example here and could help put this joint configuration blog post in
terms that relate it to this more usual interferometer situation.
I'd be happy to be referred to this discussion if it has already taken place somewhere. Or I'd be happy to try it in comments to this discussion post. Or I'd be happy to talk to someone on the
phone or in private email; if you are that person, email me at mwengler at gmail dot com.
To give you an idea of the kinds of things I think would help:
1) How might you build that experiment? Two photons coming in from right angles could be two radio sources at the same frequency and amplitude but possibly different phase as they hit the mirror.
In that case, we get a stream of photons to detector 1 proportional to sin(phi+pi/4)^2 and a stream of photons to detector 2 proportional to cos(phi+pi/4)^2 where phi is the phase difference of the
two waves as they hit the mirror, and I have not attempted to get the sign of the pi/4 term right to match the exact picture. Are they two thermal sources? In which case we get random phases at the
mirror and the photons split pretty randomly between detector 1 and detector 2, but there are no 2-photon correlations; it is just single-photon statistics (a small numerical sketch of the coherent case appears after this post).
2) The half-silvered mirror is a linear device: two photons passing through it do not interact with each other. So any statistical effect correlating the two photons (that is, they must either both
go to detector 1 or both go to detector 2, but we will never see one go to 1 and the other go to 2) must be due to something going in the source of the photons. Tell me what the source of these
photons is that gives this gedanken effect.
3) The two-photon aspect of the statistical prediction of this seems at least vaguely EPR-ish. But in EPR the correlations of two photons come about because both photons originate from a single
process, if I recall correctly. Is this intending to look EPRish, but somehow leaving out some necessary features of the source of the two photons to get the correlation involved?
I remain quite puzzled and look forward to anything anybody can tell me to relate the example given here to anything else in quantum optics or interferometers that I might already have some
knowledge of.
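To make the coherent-source case in point (1) concrete, here is a small numerical sketch (an added illustration, with the pi/4 offset and its sign taken as written above):

import numpy as np

# Detector rates for two equal-amplitude coherent beams meeting at a
# half-silvered mirror, using the sin^2/cos^2 formulas from point (1);
# the pi/4 offset and its sign are illustrative, as noted there.
phi = np.linspace(0.0, 2 * np.pi, 9)       # relative phase at the mirror
rate1 = np.sin(phi + np.pi / 4) ** 2       # fraction of photons to detector 1
rate2 = np.cos(phi + np.pi / 4) ** 2       # fraction of photons to detector 2
assert np.allclose(rate1 + rate2, 1.0)     # every photon lands on one detector
for p, r1, r2 in zip(phi, rate1, rate2):
    print(f"phi = {p:5.2f}   D1 = {r1:.3f}   D2 = {r2:.3f}")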
I had an incredibly frustrating conversation this morning trying to explain the idea of quantum immortality to someone whose understanding of MWI begins and ends at pop sci fi movies. I think I've
identified the main issue that I wasn't covering in enough depth (continuity of identity between near-identical realities) but I was wondering whether anyone has ever faced this problem before, and
whether anyone has (or knows where to find) a canned 5 minute explanation of it.
Sort of a response to: Collapse Postulate
Abstract: There are phenomena in mathematics where certain structures are distributed "at random;" that is, statistical statements can be made and probabilities can be used to predict the outcomes
of certain totally deterministic calculations. These calculations have a deep underlying structure which leads a whole class of problems to behave in the same way statistically, in a way that
appears random, while being entirely deterministic. If quantum probabilities worked in this way, it would not require collapse or superposition.
This is a post about physics, and I am not a physicist. I will reference a few technical details from my (extremely limited) research in mathematical physics, but they are not necessary to the
fundamental concept. I am sure that I have seen similar ideas somewhere in the comments before, but searching the site for "random + determinism" didn't turn much up so if anyone recognizes it I
would like to see other posts on the subject. However my primary purpose here is to expose the name "Deep Structure Determinism" that jasonmcdowell used for it when I explained it to him on the ride
back from the Berkeley Meetup yesterday.
Again I am not a physicist; it could be that there is a one or two sentence explanation for why this is a useless theory--of course that won't stop the name "Deep Structure Determinism" from being
aesthetically pleasing and appropriate.
For my undergraduate thesis in mathematics, I collected numerical evidence for a generalization of the Sato-Tate Conjecture. The conjecture states, roughly, that if you take the right set of
polynomials, compute the number of solutions to them over finite fields, and scale by a consistent factor, these results will have a probability distribution that is precisely a semicircle.
The reason that this is the case has something to do with the solutions being symmetric (in the way that y=x^2 if and only if y=(-x)^2 is a symmetry of the first equation) and their group of
symmetries being a circle. And stepping back one step, the conjecture more properly states that the numbers of solutions will be roots of a certain polynomial which will be the minimal polynomial of
a random matrix in SU(2).
That is at least as far as I follow the mathematics, if not further. However, it's far enough for me to stop and do a double take.
A "random matrix?" First, what does it mean for a matrix to be random? And given that I am writing up a totally deterministic process to feed into a computer, how can you say that the matrix is
A sequence of matrices is called "random" if when you integrate of that sequence, your integral converges to integrating over an entire group of matrices. Because matrix groups are often smooth
manifolds, they lend themselves to being integrated over, and this ends up being sensible. However, a more practical characterization, and one that I used in the write-up for my thesis, is that if you take a
histogram of the points you are measuring, the histogram's shape should converge to the shape of the group--that is, if you're looking at the matrices that determine a circle, your histogram should
look more and more like a semicircle as you do more computing. That is, you can have a probability distribution over the matrix space for where your matrix is likely to show up.
The actual computation that I did involved computing solutions to a polynomial equation--a trivial and highly deterministic procedure. I then scaled them, and stuck them in place. If I had not known
that these numbers were each coming from a specific equation I would have said that they were random; they jumped around through the possibilities, but they were concentrated around the areas of
higher probability.
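To see this behavior directly, here is a minimal sketch of the kind of computation described above (an added toy version, not the actual thesis code): count points on one fixed elliptic curve over many primes, normalize, and histogram.

import numpy as np

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def a_p(a, b, p):
    # trace of Frobenius a_p = p + 1 - #E(F_p) for y^2 = x^3 + a*x + b, naive count
    sq = [0] * p
    for y in range(p):
        sq[(y * y) % p] += 1               # number of square roots of each residue
    pts = 1 + sum(sq[(x * x * x + a * x + b) % p] for x in range(p))  # +1: infinity
    return p + 1 - pts

a, b = 1, 1                                 # a non-CM curve
xs = [a_p(a, b, p) / (2 * p**0.5)           # normalized traces lie in [-1, 1]
      for p in range(5, 2000)
      if is_prime(p) and (4 * a**3 + 27 * b**2) % p]   # skip bad reduction
hist, _ = np.histogram(xs, bins=20, range=(-1, 1), density=True)
print(hist)   # deterministic inputs, yet the shape approaches (2/pi)*sqrt(1 - x^2)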
So bringing this back to quantum physics: I am given to understand that quantum mechanics involves a lot of random matrices. These random matrices give the impression of being "random" in that it
seems like there are lots of possibilities, and one must get "chosen" at the end of the day. One simple way to deal with this is to postulate many worlds, wherein no one choice has a special status.
However my experience with random matrices suggests that there could just be some series of matrices, which satisfies the definition of being random, but which is inherently determined (in the way
that the Jacobian of a given elliptic curve is "determined.") If all quantum random matrices were selected from this list, it would leave us with the subjective experience of randomness, and given
that this sort of computation may not be compressible, the expectation of dealing with these variables as though they are random forever. It would also leave us in a purely deterministic world,
which does not branch, which could easily be linear, unitary, differentiable, local, symmetric, and slower-than-light.
<mathematics> A description of a quantity whose value is one of a fixed set of values, as opposed to a continuous quantity - a value capable of infinitesimal variation. For example, integers are discrete
values whereas real numbers are continuous; digital sound has discrete amplitude levels whereas analog sound is continuous.
Last updated: 2009-10-08
Copyright Denis Howe 1985
Minor question on sample sizes [Archive] - Statistics Help @ Talk Stats Forum
06-17-2010, 11:44 AM
Pardon the double-posting -- I'm not sure what this forum's culture is on creating new threads for new topics, but I tend to try to keep each topic within one question. If I should add on to an open
topic in the future, let me know :)
As I mentioned, I'm tutoring a student in Probability/Statistics, and there's also one question on their latest assignment that I consider a little ambiguous:
Basically, given a population mean of 11 and a standard deviation of 3 to describe some set of data, the question is, "What size sample would be required to compute probabilities regarding the sample
mean using the normal approximation?"
Considering they haven't yet covered confidence intervals or other things, I'm not sure what sample size might be 'required'. The bigger the better, naturally, but I don't see how there is a minimum
threshold in this instance. Any suggestions?
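One concrete way to explore this with the student (an added illustration, not part of the assignment) is to simulate the sampling distribution of the mean; the textbook rule of thumb the question is most likely after is n >= 30 for the normal approximation to be considered safe.

import numpy as np

rng = np.random.default_rng(0)
# a deliberately skewed population with mean 11 and standard deviation 3
population = rng.exponential(3.0, 1_000_000) + 8.0
for n in (5, 30, 100):
    means = rng.choice(population, size=(10_000, n)).mean(axis=1)
    # the sample mean concentrates at 11 with spread close to 3/sqrt(n)
    print(f"n={n:3d}  mean={means.mean():6.3f}  sd={means.std():5.3f}  3/sqrt(n)={3/np.sqrt(n):5.3f}")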
Math 5: Schedule, topics and worksheets
Midterm: Thursday Oct 20, 6-8 pm, Kemeny 201 (solutions)
Oct 21 (F): Complex vibration modes of extended bodies (e.g. two masses, Chladni plates)
Oct 24 (M, 8.5-8.7.1): Damping and excitation of modes. Lowest mode of an elastic string.
Oct 26 (W): HW5 due. Damping and excitation of strings. Bowing.
Oct 27 (Th): X-hr
Oct 28 (F, 8.7.5): Plucked and bowed strings. Pipe resonances.
Oct 31 (M, 8.7.4, 8.3.3): More pipes, register keys, clarinet modes, Helmholtz resonator.
Nov 2 (W, 8.9): HW6 due. (Stretched partials.) Resonance.
Nov 4 (F): Human voice: speech, formants, vocal tract shape, vowels, singing; sopranos, von Kempelen's machine.
Nov 7 (M, 7.10, 7.13.2, 8.10.1, App. 7): Singing. Architectural acoustics: reverberation and early reflections.
Nov 9 (W): HW7 due. Derive Sabine's formula. Ray reflection, method of images.
Nov 10 (Th): X-hr. Quiz 2 (35 mins, bring calculators) (answers)
Nov 11 (F, 7.8, 7.11): Deadline to choose project topics. Multiple images and lattices; wave phenomena.
Nov 14 (M): Interference and diffraction of waves.
Nov 16 (W): HW8 due. Human hearing: anatomy, frequency sensitivity.
Nov 18 (F): Human hearing: directional sensing, frequency masking.
Nov 21 (M): Electronic music recording and reproduction; sampling.
Nov 24 (W) - Nov 28 (Su): Thanksgiving recess
Nov 28 (M): Math in music composition (Paganini 24th Caprice theme, Rachmaninov Rhapsody Op. 43, inversion of melody).
Nov 30 (W): Project Presentation Mini-Conference (in usual lecture slot)
Dec 1 (Th): X-hr. Finish project presentations. Exam review session. Practice questions: 2007 final (solutions), 2008 final (solutions), 2010 final (solutions).
Final Exam: Monday Dec 5, 8 am, Kemeny 108 (solutions)
Project write-ups due at my office, 5 pm, Tues Dec 6.
Regarding pivot tables, I recommend that you always select empty cells to show zeros. The following example will illustrate why:
First, consider the case where empty cells are left blank.
(A) (B) (C) (D)
(1) Jan Feb Mar Apr
(2) 24 (blank) (blank) 24
Using this pivot table, the formula =average(A2:D2) would return the value "24." It ignores the blank cells.
Now, let's contrast that with the case where empty cells contain zeros.
(A) (B) (C) (D)
(1) Jan Feb Mar Apr
(2) 24 0 0 24
Now, the formula =average(A2:D2) would return the value "12."
If the numbers above represent revenue $, for example, the lack of revenues in February and March should not be overlooked in calculations. Thus, I would expect calculations built on the latter
settings to be the correct ones for most users.
Finally, this issue is not exclusive to the =average formula. Others, such as =min and =mode, may also be thrown off by blank cells in a pivot table.
Posted at 10:02 am ET on February 12, 2011
by Nicholas Cammarata
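The same pitfall exists outside Excel. As an added aside in Python terms, a NaN-aware mean skips missing months exactly the way =AVERAGE skips blank cells:

import numpy as np

revenue_blank = np.array([24.0, np.nan, np.nan, 24.0])   # blanks as NaN
revenue_zero = np.array([24.0, 0.0, 0.0, 24.0])          # blanks shown as zeros
print(np.nanmean(revenue_blank))   # 24.0 -- the empty months are ignored
print(revenue_zero.mean())         # 12.0 -- the empty months count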
There are a few people who know about the function SUBTOTAL(function_num, range), but even fewer of these know how powerful it can be when adding columns in a table in conjunction with Data > Filter.
This is a tip worth sharing.
Posted at 11:43 am ET on January 13, 2011
by Alan Brown
Multiple Sumif formula
For a multiple-criteria SUMIF, first structure the formula as follows: {=SUM(IF(...="x",IF(...="y",range of values,0),0))}. Second, the braces {} at the beginning and end of the formula are
entered by pressing the Ctrl, Shift, and Enter keys at the same time while the cursor is in the formula.
Posted at 12:47 pm ET on October 20, 2010
by Sameh Kirollos
When creating a row index it is better to use the formula =ROW(). Then when a row is inserted, one does not have to recopy the formula A1+1, for example, for the entire list. The formula only has
to be added to the inserted row.
Posted at 11:56 am ET on October 7, 2010
by Matthew Saxon
Here is an oldie but a goodie that helps when you are working in a formula-heavy Excel file. To review and understand the formulas without going into each cell, hit CTRL and the grave accent key (`, the key that also bears ~). This will
display all live formulas in the sheet. Hit the key combination again to return to the number values.
Posted at 9:36 am ET on September 9, 2010
by Karlo Bustos
When creating a lookup table with VLOOKUP, index your rows - but don't use hardcoded sequential numbers (e.g. 1, 2, 3, 4, etc.), because this will cause problems if you later insert a row in the
middle of the data set. Instead, put a "1" in the first row and then use the formula A1 + 1 and then copy that down the column so that the indexing is dynamic and automatically incorporates any added rows.
Posted at 3:41 pm ET on September 8, 2010
by George Austin
This is a short-lived tip for those who run into backward compatibility problems with Office (as I do when trying to open spreadsheets from work on my older home computer). If you can't open a .xlsx
file, one way to view and even work with the contents is to open it in Google spreadsheets. I would only do this for viewing and limited operations, though, not for making major changes. Google
spreadsheets are still somewhat buggy.
Posted at 9:34 am ET on September 7, 2010
by Tim Reason
PIMS/SFU Discrete Math Seminar: Irene Pivotto
• Date: 10/11/2011
• Time: 14:30
Simon Fraser University
Pairs of representations of signed binary matroids.
Abstract: Given a binary matroid $M$ represented by a binary matrix $A$, we can consider the binary matroid represented by $A$ plus an extra binary row $\Sigma$. We call this a signed matroid and say
that $(M,\Sigma)$ is a representation of the signed matroid. We show that for every pair of representations $(M_1,\Sigma_1)$ and $(M_2,\Sigma_2)$ of the same signed matroid there exist rows
$\Gamma_1$ and $\Gamma_2$ such that $(M^*_1,\Gamma_1)$ and $(M^*_2,\Gamma_2)$ are representations of the same signed matroid (where $M^*$ denotes the dual of $M$). An application of this result
relates pairs of signed graphs representing the same even cycle matroid to pairs of grafts representing the same even cut matroid. All
terms used from matroid theory will be defined during the talk.
This is joint work with B. Guenin and P. Wollan.
Transformations on Functions
I was wondering if anyone could check my code for the following demonstration on transforming functions.
Code: Select all
q[x_] = (x - 1)^2 - 1 /; x >= 0;
(* the other menu functions n, f, g, h, k, j, s, t, m, r must be defined similarly *)
Manipulate[
 Grid[{{TraditionalForm[A func[B x + W] + G]},
   {Plot[A*func[B*x + W] + G, {x, -6, 6}, PlotRange -> {-6, 6},
     PlotStyle -> Thick, ImageSize -> 550]}},
  Alignment -> Center, Spacings -> {4, 3}],
 {{func, n, "Function"},
  {n -> "f(x) = x", f -> "f(x) = x^2", g -> "f(x) = x^3",
   h -> "f(x) = Sqrt[x]", k -> "f(x) = |x|", j -> "f(x) = sin(x)",
   s -> "f(x) = [x]", t -> "f(x) = ln(x)", m -> "f(x) = sgn(x)",
   r -> "f(x) = 1/x", q -> "f(x) = special"}, PopupMenu},
 {{A, 1}, -3, 3, 1/4}, {{B, 1}, -3, 3, 1/4},
 {{W, 0, "C"}, -5, 5, .5}, {{G, 0, "D"}, -5, 5, .5}]
The problem is that if I take the labels in the PopupMenu (e.g. "f(x) = x^2") and format them within the quotation marks to appear the way I want in the demonstration (with the superscript, using a
particular font, font face, and size), then when I reopen the file containing the demonstration, the code is all messed up and won't evaluate. I get the following code:
Code: Select all
"f(x) =\!\(\*
StyleBox[SuperscriptBox["x", "2"],
FontFamily->"Times New Roman",
Where the f(x) retains its formatting, but then I get the "\!\(\*..." and it won't evaluate correctly unless I go back and change everything back. I really like the demonstration, and I have used
it a few times in class. However, like the previous topic, I have no idea why formatting is so hard. If you have any suggestions for improving the code, please let me know.
Andrew Bayliss
Re: Transformations on Functions
Better late than never...
I'm not exactly certain of what all you want formatted in Times, Italic, and 16pt, but using DisplayForm may help. Wrap the (slightly edited) contents of the error message you were receiving in
DisplayForm and you get some pretty-printing output in the front end:
Code: Select all
DisplayForm[StyleBox["f(x) = " SuperscriptBox["x","2"],FontFamily->"Times New Roman",FontSize->16,FontSlant->"Italic"]]
Here is a stripped-down version of the code you gave:
Code: Select all
f[x_] = x^2;
Manipulate[
 Grid[{{DisplayForm[
     StyleBox[A func[B x + W] + G, FontFamily -> "Times New Roman",
      FontSize -> 16, FontSlant -> "Italic"]]},
   {Plot[A*func[B*x + W] + G, {x, -6, 6}, PlotRange -> {-6, 6},
     PlotStyle -> Thick, ImageSize -> 550]}},
  Alignment -> Center, Spacings -> {4, 3}],
 {{func, f, "Function"},
  {f -> DisplayForm[
     StyleBox["f(x) = " SuperscriptBox["x", "2"],
      FontFamily -> "Times New Roman", FontSize -> 16,
      FontSlant -> "Italic"]]}},
 {{A, 1}, -3, 3, 1/4}, {{B, 1}, -3, 3, 1/4},
 {{W, 0, "C"}, -5, 5, .5}, {{G, 0, "D"}, -5, 5, .5}]
|
{"url":"http://www.wolfram.com/faculty/forum/viewtopic.php?f=11&t=152&start=0","timestamp":"2014-04-20T09:27:05Z","content_type":null,"content_length":"24288","record_id":"<urn:uuid:9a782b7d-9976-4754-9412-10dbac25fbdb>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Interpolation in pure python
Posted on 23 February 2013
A long while ago, I wrote a script for matplotlib that later (after lots of work by Tony Yu and others) became the matplotlib streamplot function. This function generates plots that are constructed
out of streamlines of a flow, in such a way as to fill the domain uniformly with such streamlines. I wrote the script to plot winds in the tropical upper troposphere/lower stratosphere where there is
a lot of divergence and convergence (principally caused by convection over the western Pacific and subsidence over the eastern Pacific.)
The streamplot algorithm uses an adaptive 1st/2nd order Runge-Kutta scheme and linear interpolation of the velocity field to integrate the streamlines. This was implemented in pure python, which I
originally chose to do because the portion of the algorithm responsible for ensuring an even domain-filling distribution of streamlines imposes a very strange termination condition on the integral
that does not fit well with the integrators in scipy. In addition, matplotlib does not depend on scipy.
I initially expected a pure python algorithm to be incredibly slow, so I used a very simple interpolation scheme. Any interpolation scheme interpolating some data D[i] defined at points x[i] onto
the target point x0 is composed of two steps; first, the algorithm must identify the nearest grid points to the target point. Second, the interpolator must apply the appropriate interpolation
formula to those neighbouring points.
The second step is usually trivial to implement, but the first step is non-trivial in general. The first step becomes a lot simpler when we restrict ourselves to a regular grid with spacing dx as we
can then find the index i of the point just above x0 by rescaling and casting to an integer
i = int(1 + (x0 - x[0]) / dx)
I thought this was probably the fastest way of implementing an interpolator and so used this technique to implement the interpolator in streamplot. However, it added some complexity to the algorithm
and, most importantly, imposed a restriction that only regular grids (i.e. grids with evenly spaced axes) could be used. For my original purpose, this was not a problem but it is clearly a rather
serious limitation in general.
Recently I have been hitting this limitation and decided over the last couple of evenings to try to free streamplot of this limitation. To generalise the first interpolation step to work with any x
[i], a search is needed to find the smallest i such that x[i] > x0. Any pure python search algorithm would be too slow. Using argmax to do a linear search is an option:
i = numpy.argmax(x > x0)
However, this does not scale well with the length of each dimension and is too slow in this context.
Looking through the standard library, I came across the bisect module, which implements the interval bisection method for searching ordered lists and arrays – exactly what I needed. Implementing
this search is really simple; it's just
i = bisect.bisect_right(x, x0)
This method is really fast – way faster than I expected. After re-implementing the streamplot integrator and interpolator to use this type of interpolation, typical plotting times were barely
affected.
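Putting the two steps together, a complete 1-D linear interpolator along these lines might look as follows (a minimal sketch of the idea, not the actual streamplot code; the names are illustrative):

import bisect

def interp1d(x, data, x0):
    # step 1: interval bisection finds the first grid point strictly above x0
    i = bisect.bisect_right(x, x0)
    i = max(1, min(i, len(x) - 1))          # clamp so x[i-1] and x[i] both exist
    # step 2: linear interpolation between the two bracketing grid points
    t = (x0 - x[i - 1]) / (x[i] - x[i - 1])
    return (1 - t) * data[i - 1] + t * data[i]

print(interp1d([0.0, 1.0, 4.0], [0.0, 2.0, 8.0], 2.5))   # 5.0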
Here is an example of the new script in action with a very non-uniform vertical grid spacing:
(This plot shows the boreal winter average zonal and vertical winds in the inner tropics.)
The new script is available here. Feel free to try it out if you need to do streamplots with uneven grids. Please note that this is a work-in-progress and so use with caution :-)
Garden City, NY Math Tutor
Find a Garden City, NY Math Tutor
...SAS is a great software suite used for analyzing large amounts of data. From basic univariate statistics to complex multiple regressions, SAS can handle almost any mathematical analysis you
ask it to. What's great is that it is much easier than other programming languages, and is intuitive to learn.
26 Subjects: including trigonometry, algebra 1, algebra 2, calculus
...I scored in the 99th percentile on the GRE in Quantitative Reasoning (perfect 170,) and the 96th percentile in Verbal (166). I am a successful tutor because I have a strong proficiency in the
subject material I teach and a patient and creative approach that makes any subject simple to understand...
21 Subjects: including statistics, discrete math, differential equations, logic
...I will do what is necessary to assist students to appreciate math concepts and applications, provided students are keen listeners, think and apply the knowledge imparted in a systematic
manner. I firmly believe success can only be achieved through self discipline, belief in oneself and an unshak...
20 Subjects: including ACT Math, prealgebra, precalculus, differential equations
...I am currently tutoring two students who will be taking this test in November. I feel my experience with tutoring for the math regents, the GED, and the 10th grade International Exam have
prepared me in all areas required for the COOP/HSPT exam. I have been tutoring since January 2013 with WyzAnt.
16 Subjects: including SAT math, GRE, GED, algebra 1
...I am a graduate of Harvard University and a native New Yorker who has been tutoring students of all ages in everything from standardized tests to Spanish to Math to dance for over six years.
The SAT is a specialty of mine, and I love helping students discover all the tips and tricks necessary to...
31 Subjects: including linear algebra, ACT Math, reading, SAT math
Solving trigo
cos 3x = sin x. Find x, 0 degrees < x < 360 degrees. Thanks in advance.
There's the hard way and the easy way. Personally I like the easy way best: From the complementary angle formulae: $\cos (3x) = \sin x \Rightarrow \cos (3x) = \cos (90 - x)$. Case 1: $3x = 90 - x +
360 n$ where n is an integer. Solve for x and substitute appropriate values of n to get all values of x satisfying the given domain. Case 2: $3x = -(90 - x) + 360 n$ where n is an integer. Solve for
x and substitute appropriate values of n to get all values of x satisfying the given domain.
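Working the two cases through (this finish is an added illustration, not part of the original reply): Case 1 gives $4x = 90 + 360n$, so $x = 22.5 + 90n$, i.e. $x = 22.5, 112.5, 202.5, 292.5$ degrees. Case 2 gives $2x = -90 + 360n$, so $x = -45 + 180n$, i.e. $x = 135, 315$ degrees. A quick check on one value from each case: $\cos(3 \times 22.5) = \cos 67.5 = \sin 22.5$, and $\cos(3 \times 135) = \cos 405 = \cos 45 = \sin 135$.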
New Turán Densities for 3-Graphs
If $\mathcal{F}$ is a family of graphs then the Turán density of $\mathcal{F}$ is determined by the minimum chromatic number of the members of $\mathcal{F}$.
The situation for Turán densities of 3-graphs is far more complex and still very unclear. Our aim in this paper is to present new exact Turán densities for individual and finite families of
$3$-graphs; in many cases we are also able to give corresponding stability results. As well as providing new examples of individual $3$-graphs with Turán densities equal to $2/9, 4/9, 5/9$ and $3/4$, we
also give examples of irrational Turán densities for finite families of 3-graphs, disproving a conjecture of Chung and Graham. (Pikhurko has independently disproved this conjecture by a very
different method.)
A central question in this area, known as Turán's problem, is to determine the Turán density of $K_4^{(3)}=\{123, 124, 134, 234\}$. Turán conjectured that this should be $5/9$. Razborov [On
3-hypergraphs with forbidden 4-vertex configurations in SIAM J. Disc. Math. 24 (2010), 946--963] showed that if we consider the induced Turán problem forbidding $K_4^{(3)}$ and $E_1$, the 3-graph
with 4 vertices and a single edge, then the Turán density is indeed $5/9$. We give some new non-induced results of a similar nature, in particular we show that $\pi(K_4^{(3)},H)=5/9$ for a $3$-graph
$H$ satisfying $\pi(H)=3/4$.
We end with a number of open questions focusing mainly on the topic of which values can occur as Turán densities.
Our work is mainly computational, making use of Razborov's flag algebra framework. However all proofs are exact in the sense that they can be verified without the use of any floating point
operations. Indeed all verifying computations use only integer operations, working either over $\mathbb{Q}$ or, in the case of irrational Turán densities, over an appropriate quadratic extension of $\mathbb{Q}$.
Keywords: Hypergraph, Turán problem
Modeling excitation-dependent bandstructure effects on InGaN light-emitting diode efficiency
Optics Express, Vol. 19, Issue 22, pp. 21818-21831 (2011)
Bandstructure properties in wurtzite quantum wells can change appreciably with changing carrier density because of screening of the quantum-confined Stark effect. An approach for incorporating these
changes in an InGaN light-emitting-diode model is described. Bandstructure is computed for different carrier densities by solving Poisson and k·p equations in the envelope approximation. The
information is used as input in a dynamical model for populations in momentum-resolved electron and hole states. Application of the approach is illustrated by modeling device internal quantum
efficiency as a function of excitation.
© 2011 OSA
1. Introduction
Considerable progress is being made in advancing InGaN light-emitting diodes (LEDs). However, there are still concerns involving performance limitations. An example is efficiency loss at high current
density (efficiency droop) [
1. M. R. Krames, O. B. Shchekin, R. Mueller-Mach, G. O. Mueller, L. Zhou, G. Harbers, and M. G. Craford, “Status and future of high-power light-emitting diodes for solid-state lighting,” J. Display
Technol. 3, 160–175 (2007). [CrossRef]
], which can limit use of LEDs in applications requiring intense illumination. Understanding and mitigating the efficiency droop mechanism is important. Several explanations have been proposed,
including carrier leakage [
2. M. H. Kim, M. F. Schubert, Q. Dai, J. K. Kim, E. F. Schubert, J. Piprek, and Y. Park, “Origin of efficiency droop in GaN-based light-emitting diodes,” Appl. Phys. Lett. 91, 183507–183510 (2007).
], Auger recombination [
3. Y. C. Shen, G. O. Müller, S. Watanabe, N. F. Gardner, A. Munkholm, and M. R. Krames, “Auger recombination in InGaN measured by photoluminescence,” Appl. Phys. Lett. 91, 141101–141101 (2007).
], junction heating [
4. A. A. Efremov, N. I. Bochkareva, R. I. Gorbunov, D. A. Larinvovich, Yu. T. Rebane, D. V. Tarkhin, and Yu. G. Shreter, “Effect of the joule heating on the quantum efficiency and choice of thermal
conditions for high-power blue InGaN/GaN LEDs,” Semiconductors 40, 605–610 (2006). [CrossRef]
], carrier and defect delocalizations [
5. S. F. Chichibu, T. Azuhata, M. Sugiyama, T. Kitamura, Y. Ishida, H. Okumurac, H. Nakanishi, T. Sota, and T. Mukai, “Optical and structural studies in InGaN quantum well structure laser diodes,” J.
Vac. Sci. Technol. B 19, 2177–2183 (2001). [CrossRef]
6. I. A. Pope, P. M. Smowton, P. Blood, and J. D. Thompson, “Carrier leakage in InGaN quantum well light-emitting diodes emitting at 480nm,” Appl. Phys. Lett. 82, 2755–2757 (2003). [CrossRef]
]. The assertions are much debated. For example, in the case of Auger scattering, discrepancy exists in the Auger coefficient estimation between experimental-curve fitting and microscopic
calculations [
3. Y. C. Shen, G. O. Müller, S. Watanabe, N. F. Gardner, A. Munkholm, and M. R. Krames, “Auger recombination in InGaN measured by photoluminescence,” Appl. Phys. Lett. 91, 141101–141101 (2007).
7. H.-Y Ryu, H.-S. Kim, and J.-I. Shim, “Rate equation analysis of efficiency droop in InGaN light-emitting diodes,” Appl. Phys. Lett. 95, 081114–081117 (2009). [CrossRef]
9. K. T. Dellaney, P. Rinke, and C. G. Van de Walle, “Auger recombination rates in nitrides from first principles,” Appl. Phys. Lett. 94, 191109–191111 (2009). [CrossRef]
Discussions involving InGaN LED efficiency are commonly based on a rate equation for the total carrier density. The approach allows one to describe radiative and nonradiative carrier loss rates,
where the latter typically includes ad-hoc terms for producing an efficiency droop. A particularly successful model, in terms of reproducing experimental efficiency versus injection current data, is
the ABC model. [
3. Y. C. Shen, G. O. Müller, S. Watanabe, N. F. Gardner, A. Munkholm, and M. R. Krames, “Auger recombination in InGaN measured by photoluminescence,” Appl. Phys. Lett. 91, 141101–141101 (2007).
7. H.-Y Ryu, H.-S. Kim, and J.-I. Shim, “Rate equation analysis of efficiency droop in InGaN light-emitting diodes,” Appl. Phys. Lett. 95, 081114–081117 (2009). [CrossRef]
] The model's name derives from the three phenomenological constants (A, B, C)
introduced to account for Shockley-Read-Hall (SRH), radiative-recombination and Auger-scattering carrier losses, respectively. Bandstructure effects enter indirectly via these coefficients.
It is known that the bandstructure in wurtzite quantum-well (QW) structures can change noticeably with carrier density because of screening of the quantum-confined Stark effect (QCSE) [
10. A. Bykhovshi, B. Gelmonst, and M. Shur, “The influence of the strain-induced electric field on the charge distribution in GaN-AlN-GaN structure,” J. Appl. Phys. 74, 6734–6739 (1993). [CrossRef]
11. J. S. Im, H. Kollmer, J. Off, A. Sohmer, F. Scholz, and A. Hangleiter, “Reduction of oscillator strength due to piezoelectric fields in GaN/AlGaN quantum wells,” Phys. Rev. B 57, R9435–R9438
(1998). [CrossRef]
]. Incorporating these changes into the ABC
model is challenging, without compromising the attractiveness of having only three fitting parameters, each with direct correspondence to a physical mechanism. This paper considers an alternative
that allows direct input of bandstructure properties, in particular, the band energy dispersions, confinement energies and optical transition matrix elements, as well as their carrier-density
dependences arising from screening of piezoelectric and spontaneous polarization fields. The model has the further advantage of providing a consistent treatment of spontaneous emission, carrier
capture and leakage, and nonequilibrium effects. Thus, the fitting parameter B
is eliminated, and effects such as plasma heating are taken into account within an effective relaxation-rate approximation for carrier-carrier and carrier-phonon scattering. All this is accomplished
by extending a previously reported non-equilibrium LED model that is based on dynamical equations for electron and hole occupations in each momentum (k) state [
12. W. W. Chow, M. H. Crawford, J. Y. Tsao, and M. Kneissl, “Internal efficiency of InGaN light-emitting diodes: beyond a quasiequilibrium model,” Appl. Phys. Lett. 97, 121105–121107 (2010).
]. The additions include an algorithm for simplifying and extracting bandstructure information relevant to the dynamical equations. Detailed bandstructure properties are obtained from solving the k · p
and Poisson equations [
13. S. L. Chuang and C. S. Chang, “k · p method for strained wurtzite semiconductors,” Phys. Rev. B 54, 2491–2504 (1996). [CrossRef]
]. Furthermore, since distinction between QW and barrier states is sometimes difficult in the presence of strong internal electric fields, extension is made to treat optical emission from these
states on equal footing.
Section 2 describes the model, derivation of the working equations and calculation of input bandstructure properties. Section 3 demonstrates the application of the k–resolved model by calculating
internal quantum efficiency (IQE) as a function of injection current for a multi-QW InGaN LED. Results are presented to illustrate the role of excitation dependences of band-structure. An example
involves a higher SRH coefficient in QWs than barriers combining with screening of QCSE to produce an efficiency droop in certain LED configurations. Section 4 explains the role of the bandstructure
by discussing the changes in QW confinement energies and envelope function overlap with increasing excitation. The section also describes the incorporation of Auger carrier loss into the model. With
the k–resolved model, any increase in plasma temperature or carrier leakage resulting from the Auger scattering is taken into account. Section 5 summarizes the paper.
2. Theory
The following Hamiltonian, adapted from quantum optics [
14. E. Jaynes and F. Cummings, “Comparison of quantum and semiclassical radiation theories with application to the beam maser,” Proc. IEEE 51, 89–109 (1963). [CrossRef]
], is used in the derivation of spontaneous emission from QW and barrier transitions. The summations in the Hamiltonian, Eq. (1), run over QW and barrier states: each QW state is denoted by its
charge, subband and in-plane momentum, while a bulk (barrier) state is specified by its charge and 3-dimensional carrier momentum. Equation (1) involves the electron and hole annihilation and
creation operators, the corresponding operators for the photons, the carrier energy, the photon frequency Ω, the dipole matrix element, the active region volume and the host permittivity. Using the
Heisenberg operator equations of motion and the above Hamiltonian, the carrier populations and polarizations evolve according to Eqs. (2)–(4), where the transition frequency appears in the
polarization equation. Factorizing the operator products and truncating at the first level (Hartree-Fock approximation) gives Eq. (4) in closed form. For an LED, it is customary to assume that the
cavity influence is sufficiently weak that only the spontaneous emission contribution is kept. Additionally, polarization dephasing is introduced, where the dephasing is assumed to be considerably
faster than the population changes. This allows integration of Eq. (5). The result is used to eliminate the polarization in Eqs. (2), giving Eqs. (6). Converting the photon momentum summation into
an integral, with the photon frequency set by the speed of light in the semiconductor, the right-hand sides of Eqs. (6) may be integrated.
Writing the equations explicitly for the QW populations, and adding phenomenological SRH carrier loss and relaxation contributions from carrier-carrier and carrier-phonon scattering, gives Eq. (11),
which involves the SRH coefficient for QW states, the effective carrier-carrier and carrier-phonon collision rates, and the QW dipole matrix element and transition energy. Similarly, for the barrier
populations, Eq. (13) involves the barrier dipole matrix element and transition energy, the SRH coefficient for barrier states and a pump contribution, in which J is the current density and e is the
electron charge; the injected carrier distribution is a Fermi-Dirac function with a given chemical potential and temperature. For the asymptotic Fermi-Dirac distributions approached via
carrier-carrier collisions, the chemical potential and plasma temperature are determined by conservation of carrier density and energy. In the case of carrier-phonon collisions, the chemical
potential is determined by conservation of carrier density, and the lattice temperature is an input quantity. Total carrier density and energy are computed by converting the sums over states to
integrals involving the surface area and thickness of the active region consisting of all QW and barrier layers. Further details involving implementation and comparison with results from quantum-kinetic
calculations are reported elsewhere [
15. W. W. Chow, H. C. Schneider, S. W. Koch, C. H. Chang, L. Chrostowski, and C. J. Chang-Hasnain, “Nonequilibrium model for semiconductor laser modulation response,” IEEE J. Quantum Electron. 38,
402–409 (2002). [CrossRef]
16. I. Waldmueller, W. W. Chow, M. C. Wanke, and E. W. Young, “Non-equilibrium many-body theory of intersub-band lasers,” IEEE J. Quantum Electron. 42, 292–301 (2006). [CrossRef]
]. Many-body effects [
17. W. W. Chow, A. F. Wright, A. Girndt, F. Jahnke, and S. W. Koch, “Microscopic theory of gain for an In-GaN/AlGaN quantum well laser,” Appl. Phys. Lett. 71, 2608–2610 (1997). [CrossRef]
] are neglected in Eqs. (11) and (13), as in recent studies involving bandstructure effects, such as from increasing QW envelope function overlap [
19. H. Zhao, G. Liu, J. Zhang, J. Poplawsky, V. Dierolf, and N. Tansu, “Approaches for high internal quantum efficiency green InGaN light-emitting diodes with large overlap quantum wells,” Opt.
Express 19, A991–A1007 (2011). [CrossRef] [PubMed]
], on InGaN LED performance. While behavioral trends from bandstructure engineering may be described without many-body effects, some caution is advisable when using the quantitative results. One
reason is that Coulomb enhancement has been shown to change optical gain and absorption in QWs and bulk structures by as much as a factor of two [
20. W. W. Chow, A. Knorr, and S. W. Koch, “Theory of laser gain in group-III nitrides,” Appl. Phys. Lett. 67, 754–756 (1995). [CrossRef]
21. W. W. Chow, A. F. Wright, and J. S. Nelson, “Theoretical study of room temperate optical gain in GaN strained quantum wells,” Appl. Phys. Lett. 68, 296–298 (1996). [CrossRef]
]. Equally important, corresponding changes in SRH and Auger contributions will have to be taken into account. These concerns were addressed in a theory with first-principles treatment of Auger
scattering and optical emission [
8. J. Hader, J. V. Moloney, B. Pasenow, S. W. Koch, M. Sabathil, N. Linder, and S. Lutgen, “On the important of radiative and Auger losses in GaN-based quantum wells,” Appl. Phys. Lett. 92,
261103–261105 (2008). [CrossRef]
]. However, the simulations are very time demanding, and carrier transport, current leakage, as well as nonequilibrium effects are not modeled. Perhaps, a good compromise is the extension of the
model described in this paper to include many-body effects at the level of the screened Hartree-Fock approximation [
15. W. W. Chow, H. C. Schneider, S. W. Koch, C. H. Chang, L. Chrostowski, and C. J. Chang-Hasnain, “Nonequilibrium model for semiconductor laser modulation response,” IEEE J. Quantum Electron. 38,
402–409 (2002). [CrossRef]
22. S.-H. Park, D. Ahn, J. Park, and T -T. Lee, “Optical properties of staggered InGaN/InGaN/GaN quantum-well structures with Ga- and N-Faces,” Jpn. J. Appl. Phys. 50, 072101–07214 (2011). [CrossRef]
Bandstructure information enters directly into Eqs. (11) and (13) via the dipole matrix elements and carrier energies. From k · p theory, the QW electron and hole eigenfunctions, Eq. (16), are built
from the bulk electron or hole states, the envelope functions associated with each bulk state, and the amplitudes with which each envelope function contributes to a given subband at in-plane
momentum; z is position in the growth direction and the remaining coordinate is position in the QW plane. Using Eq. (16), the square of the dipole matrix element may then be written in terms of an
envelope-function overlap, with the bulk dipole matrix element in the absence of an electric field expressed through the bulk material bandgap energy, the bare and effective electron masses, and the
crystal-field and spin-orbit energy splittings associated with the bulk hole states. An iterative solution of the k · p and Poisson equations [
13. S. L. Chuang and C. S. Chang, “k · p method for strained wurtzite semiconductors,” Phys. Rev. B 54, 2491–2504 (1996). [CrossRef]
] is used to obtain the energies and the overlap integral. For these calculations we use the bulk wurtzite material parameters listed in Refs. [
23. S. J. Jenkins, G. P. Srivastava, and J. C. Inkson, “Simple approach to self-energy corrections in semiconductors and insulators,” Phys. Rev. B 48, 4388–4397 (1993). [CrossRef]
26. O. Ambacher, “Growth and applications of Group III-nitrides,” J. Phys. D: Appl. Phys. 31, 2653–2710 (1998). [CrossRef]
].
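As a rough illustration of this self-consistent loop, here is a single-band effective-mass toy in Python (emphatically not the 6-band k · p solver of Ref. [13]; the square-well profile, the boundary conditions and every number below are assumptions chosen only so the sketch runs):

import numpy as np

hbar2_2m = 3.81 / 0.20      # hbar^2/(2 m*) in eV*Angstrom^2 for m* = 0.2 m0
eps = 9.5 * 5.53e-3         # static permittivity in e/(V*Angstrom), assumed
nz, dz = 400, 0.5           # 1-d grid along the growth direction
z = np.arange(nz) * dz
V0 = np.where(np.abs(z - z.mean()) < 20.0, 0.0, 0.3)   # 4 nm square well, eV

def ground_state(V):
    # lowest eigenpair of -hbar^2/(2m) d^2/dz^2 + V by finite differences
    off = np.full(nz - 1, -hbar2_2m / dz**2)
    H = np.diag(V + 2 * hbar2_2m / dz**2) + np.diag(off, 1) + np.diag(off, -1)
    E, psi = np.linalg.eigh(H)
    return E[0], psi[:, 0] / np.sqrt(dz)    # normalized envelope function

def hartree(rho):
    # potential from d^2(phi)/dz^2 = -rho/eps by double integration,
    # with the linear term chosen so phi vanishes at the right boundary
    field = np.cumsum(-rho / eps) * dz
    phi = np.cumsum(field) * dz
    return phi - phi[-1] * z / z[-1]

n2d = 1e-4                  # sheet carrier density, 1/Angstrom^2 (assumed)
V = V0.copy()
for _ in range(30):         # damped self-consistency iteration
    E0, u = ground_state(V)
    rho = n2d * u**2        # carrier number density, 1/Angstrom^3
    V = 0.7 * V + 0.3 * (V0 + hartree(rho))   # like-carrier repulsion raises V
print("self-consistent confinement energy (eV):", round(E0, 3))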
3. Results
With the present model, it is necessary to solve the bandstructure and population problems self consistently. Simultaneous solution of both problems is very challenging and perhaps unnecessary. The
approach used in this paper is to first take care of the bandstructure part by iteratively solving the k · p and Poisson equations for a range of carrier densities. Bandstructure information needed
for the population part is $\varepsilon_{\sigma,\alpha_\sigma,k_\perp}$, $\varepsilon_{\sigma,k_b}$ and $\xi_{\alpha_e,\alpha_h,k_\perp}$ versus total QW carrier density, $n^{qw}_{\sigma} = S^{-1}\sum_{\alpha_\sigma,k_\perp} n_{\sigma,\alpha_\sigma,k_\perp}$, where the $n^{qw}_{\sigma}$ dependences are from screening of the QW electric fields.
To facilitate the solution of the dynamical population equations, the carrier states are grouped into two categories: those belonging to the QWs and those belonging to the barriers. The QW states are
treated using
Eq. (11)
and the barrier states are treated collectively with
Eq. (13)
. With a high internal electric field, the distinction between QW and barrier states may be ambiguous. In this paper, the choice is made by calculating, for each state, the integral over the QWs of
its squared envelope function. The states where the integral is greater than a half are grouped as QW states and the rest as barrier states. For the problem being addressed, which is the
excitation dependence of IQE, the distinction is only important because only QW transitions are affected by QCSE. For the barrier transitions, the dipole matrix element in the presence of an internal
electric field is approximated by an average, where each transition is weighted according to the occupations of the participating states. When solving the population equations, grouping the barrier
states appreciably reduces numerical demand, which remains substantial because one is still keeping track of a large number of momentum-resolved states.
The second step involves numerically solving Eqs. (11) and (13) with the bandstructure quantities updated at each time step according to the instantaneous value of the total QW carrier density. When
steady state is reached, IQE is obtained by dividing the rate of carrier (electron or hole) loss via spontaneous emission by the rate of carrier injection.
Computed IQE versus current density curves for different values of SRH coefficients in the QWs are plotted in
Fig. 1
. Each curve shows an initial sharp increase in IQE with injection current, with emission occurring the instant there is an injected current. Quite interesting, especially because Auger carrier loss
is not included in the model, is the appearance of efficiency droop when the ratio of QW to barrier SRH coefficients is ≳ 1. A larger SRH coefficient in the QWs than in the barriers is possible in
present experimental devices, based on the roughly three times higher defect density in QWs than in barriers in LEDs measured at Sandia [
]. The calculations are performed assuming an active region consisting of five 4 nm InGaN QWs separated by 6 nm GaN barriers and bounded by 20 nm GaN layers. Electric field in the QWs is determined
from the sum of piezoelectric and spontaneous polarization fields. The electric fields in
the barriers are from spontaneous polarization. Screening of these fields is determined semiclassically according to the Poisson equation and the electron and hole envelope functions. Input
parameters include the dephasing rate, the lattice temperature (300 K), the SRH coefficients and the effective collision rates. Effects arising from doping profile, presence of carrier blocking
layers and interface irregularities are ignored [
2. M. H. Kim, M. F. Schubert, Q. Dai, J. K. Kim, E. F. Schubert, J. Piprek, and Y. Park, “Origin of efficiency droop in GaN-based light-emitting diodes,” Appl. Phys. Lett. 91, 183507–183510 (2007).
28. S. Choi, H. J. Kim, S.-S. Kim, J. Liu, J. Kim, J.-H. Ryou, R. D. Dupuis, A. M. Fishcer, and F. A. Ponce, “Improvement of peak quantum efficiency and efficiency droop in III-nitride visible
light-emitting diodes with an InAlN electron-blocking layer,” Appl. Phys. Lett. 96, 221105–221107 (2010). [CrossRef]
]. These may be treated with the present model as details involving the growth-direction active-medium configuration. Not so straightforward is the description of effects arising from inhomogeneities
within the QW plane, such as current crowding and carrier localization [6, 29]. To incorporate such in-plane spatial inhomogeneities, one could divide the active region into different domains. However, nonuniform current spreading will likely require the full (3-d) solution
of the Poisson equation [30]. To use the results in a microscopic (rather than ABC) treatment will lead to a substantially more complicated numerical model, similar to the ones used in modeling spatio-temporal behavior in semiconductor lasers [31].
Defect recombination has been proposed as a mechanism for efficiency droop [29]. The modeling was based on a rigorous microscopic treatment of the radiative process and a postulated nonlinear density-activated defect recombination current density. The IQE droop shown in
Fig. 1
is also from a nonlinear total-carrier-density dependence of defect recombination, where the nonlinearity arises from bandstructure changes caused by screening of the QCSE. These bandstructure changes increase the fraction of injected carriers populating the higher-SRH-loss QW states relative to the lower-SRH-loss barrier states. Since the mechanism relates directly to defect densities in the QW and barrier layers, a more direct connection to measurable parameters is made.
For further insight, it is more effective to use a less comprehensive model that ignores carrier leakage and nonequilibrium effects, so as to isolate the bandstructure effects. Such a model is possible by extending the ABC model to distinguish between QW and barrier carrier densities. The following phenomenological (and less rigorous than Eqs. (11)) rate equations, Eqs. (21), may be written. 3-d (volume) densities are used to connect with the ABC model, especially in terms of the SRH and spontaneous emission coefficients.
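For orientation, the conventional ABC model that this extension generalizes expresses the IQE through a single carrier density $N$ and the SRH, bimolecular, and Auger coefficients $A$, $B$, and $C$; this is the standard textbook form, not the paper's Eqs. (21):

$$\eta_{IQE} = \frac{BN^{2}}{AN + BN^{2} + CN^{3}}.$$

The extension replaces the single $N$ by separate QW and barrier densities coupled through a quasiequilibrium condition.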
Equations (21) are coupled by assuming that intraband collisions are sufficiently rapid so that the QW and barrier populations are in equilibrium at a common temperature T. Defining a total 2-d carrier density allows combining these equations to give Eq. (23), whose remaining inputs are the number of QWs in the structure, the width of the individual QWs, the Boltzmann constant $k_B$, and the averaged QW confinement energy $\Delta\varepsilon$. The steady state solution to Eq. (23) gives the internal quantum efficiency, where equal electron and hole densities are assumed to simplify the above expressions.
Bandstructure input to Eq. (25) are the confinement energies $\Delta\varepsilon_e$ and $\Delta\varepsilon_h$ and the QW bimolecular coefficient as functions of total carrier density. The information is extracted from the same bandstructure calculations performed for the more comprehensive $k_\perp$-resolved model, with the exception that only the zone center ($k_\perp = 0$) values are used. Confinement energies are approximated by averages over QW states, where $\langle\,\rangle$ indicates an average over QW states. Based on Eqs. (12), the QW coefficient is assumed equal to the barrier one scaled by $\langle |I|^{2}\rangle_{QW}\,\eta$, where $\langle |I|^{2}\rangle_{QW}$ is the average envelope function overlap of the allowed QW transitions and $\eta$ is introduced to account for the difference in QW and barrier densities of states. This difference is automatically taken care of in the $k_\perp$-resolved model based on Eqs. (11). $\Delta\varepsilon_e$, $\Delta\varepsilon_h$ and $\langle |I|^{2}\rangle_{QW}$ versus carrier density are plotted in Fig. 2. The sheet (2-d) density is for a heterostructure consisting of 5 QWs and 6 barrier layers that totals 84 nm in width.
Figure 3 shows IQE versus current density computed with Eq. (25) for different values of the fitting parameter $\eta$. Input parameters include a temperature of 300 K. All the curves depict efficiency droop from the extended ABC model, where carrier dependences of the confinement energies and the QW bimolecular radiative coefficient are taken into account. They also indicate that the appearance of droop is insensitive to the fitting parameter $\eta$, which affects only the IQE recovery arising from the increase in QW emission.
While the above exercise reveals that bandstructure changes play a role in IQE droop, differences between Figs. 1 and 3 suggest that there is also influence from other contributions in present experiments. That experimental results are in closer agreement with Fig. 1 indicates the importance of these contributions in present LEDs. They include energy dispersions, carrier leakage and nonequilibrium carrier effects, such as an incomplete transfer of the carrier population from barrier to QW states because of finite intraband collision rates. The presence of nonequilibrium effects is verified from least-squares fits of computed carrier populations to Fermi-Dirac distributions. For a current density of 150 A/cm², the fits indicate elevated plasma temperatures, exceeding 360 K and 600 K for the two assumed carrier-phonon scattering rates.
With the present approach, the dynamical solution gives the carrier densities in QW and barrier states. The conversion to bulk (3-d) density is via division by the total QW layer width
in the case of the QW and by the total barrier width
in the case of the barrier. When performing the bandstructure calculation, a quasiequilibrium condition is assumed to determine the QW and barrier bulk densities used in the solution of the Poisson equation. This is an inconsistency that is acceptable provided the dynamical solution does not produce carrier distributions deviating too far from quasiequilibrium distributions. Even though the current density versus carrier density relationship depends on the input to the dynamical problem, and is therefore different for the different curves in
Fig. 1
, some insight into the connection between bandstructure and IQE excitation dependence may be obtained by examining
Figs. 1 and 2 together. The onset of droop in the curves in Fig. 1 occurs around 45 A/cm², which corresponds to a sheet density around 10^13 cm^−2, or a 3-d QW carrier density of 1 to 1.2 × 10^18 cm^−3. At these densities, the QCSE is essentially unscreened. At the start of IQE recovery, which occurs over the range of 60 to 120 A/cm², the corresponding carrier densities are 2.8 × 10^13 to 3.0 × 10^13 cm^−2, or a 3-d QW carrier density of 8.5 to 9 × 10^18 cm^−3. According to Fig. 2, these are densities where wavefunction overlap is no longer negligible. Between the IQE peak and recovery, the sheet density changes from approximately 10^13 to 3.0 × 10^13 cm^−2. Within that carrier density range,
Fig. 2
shows significant increase in QW-barrier electron and hole energy separations.
4. Discussion of results
Further insight into the droop behavior described in the previous section is possible from closer examination of the bandstructure changes with excitation.
Figure 4
shows the absolute square of the electron and hole envelope functions at zone center ($k_\perp = 0$) for four different carrier densities. For clarity, the curves are separated vertically according to their associated energies. The black lines plot the electron and hole confinement potentials,
while the red and blue curves indicate the QW and barrier states, respectively.
Starting at a carrier density of 2.3 × 10^13 cm^−2, Fig. 4(a) depicts confinement potentials differing appreciably from the flat-band situation [see Fig. 4(d)]. A result is a small energy separation between QW and barrier states, leading to comparable QW and barrier populations, especially for the holes. Optical emission from barrier transitions occurs via the corresponding barrier contribution as soon as the product of the electron and hole populations becomes nonzero. In contrast, the QW contribution $\sum_{\alpha_e,\alpha_h,k_\perp} b_{\alpha_e,\alpha_h,k_\perp}\, n_{e,\alpha_e,k_\perp}\, n_{h,\alpha_h,k_\perp}$ is negligible, even though the product $n_{e,\alpha_e,k_\perp}\, n_{h,\alpha_h,k_\perp}$ may be appreciable. This is because the QCSE spatially separates electrons and holes in the QWs, resulting in very small dipole matrix elements for QW transitions.
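In envelope-function language, this suppression can be read off from the overlap integral that controls the dipole matrix element; in the standard form (quoted here for orientation, not taken from the paper's equations),

$$d_{\alpha_e,\alpha_h} \propto \int dz\, \phi_{e,\alpha_e}^{*}(z)\, \phi_{h,\alpha_h}(z),$$

which becomes small as the internal field pushes the electron and hole envelopes toward opposite interfaces of the well.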
At a higher carrier density of 3.4 × 10^13 cm^−2, increased screening of the QCSE leads to a higher energy separation between QW and barrier states, as shown in Fig. 4(b). This causes the barrier populations to decrease relative to those of the QW. However, the QCSE is still sufficient to suppress the dipole matrix element. Moreover, for a QW-to-barrier SRH ratio > 1, a larger fraction of the injected carriers populates the lower-lying and higher-loss QW states. The net result is reduced IQE, because the smaller increase in the barrier contribution with increasing excitation is not compensated by a corresponding increase in the QW sum $\sum_{\alpha_e,\alpha_h,k_\perp} b_{\alpha_e,\alpha_h,k_\perp}\, n_{e,\alpha_e,k_\perp}\, n_{h,\alpha_h,k_\perp}$. Important to the appearance of droop is a lag between the increase in confinement energies and the increase in the QW dipole matrix element, as illustrated in Fig. 2 within the region between 2.5 × 10^13 and 5 × 10^13 cm^−2.
For QW emission to increase, a high carrier density is necessary to sufficiently screen the QW electric field. That is the case for
Fig. 4(c)
, where the carrier density is 6.8 × 10^13 cm^−2. An appreciable QW emission leads to a reversal of the IQE droop, as shown in Figs. 1 and 3. Lastly, Fig. 4(d) shows the asymptotic flat-band case, both for reference and as a guide for assigning QW and barrier states. Note that some ambiguity remains, especially with the second (index 2) subbands, which lie mostly in the triangular barrier regions of the confinement potentials at finite carrier densities.
Some questions remain. For example, one might expect a significant red shift of the emission energy when the optical transitions change from barrier dominated to QW dominated. This need not be the case because of the energy level shifts associated with the QCSE and Franz-Keldysh effects [32]. The curves in Fig. 5 show the carrier density dependences of the average QW and barrier bandedges, respectively. To a good approximation, the emission energy is centered around the lower curve, which means that, except for slight deviations around the cross-over region, the emission energy is blue shifted with increasing excitation. Furthermore, it is always below the zero-field barrier bandgap.
Another question concerns the curves depicting IQE recovery at current densities lower than observed in present experiments. This discrepancy suggests the presence of other loss mechanisms, such as
Auger carrier loss. To illustrate the effect of Auger scattering, Auger carrier loss is incorporated into Eqs. (11), as described in Ref. [12]. The results are shown in Fig. 6 for QW-to-barrier SRH ratios of 0.5, 2 and 4, with the Auger coefficient increased in steps from 0 up to 10^−31 cm^6 s^−1 (dotted, dashed, dot-dashed and solid curves, respectively). For clarity, the unity-ratio case in Fig. 1 is omitted. The curves show the prolonging of the efficiency droop by Auger carrier loss. More importantly, the necessary Auger coefficient is shown to be < 10^−31 cm^6 s^−1, which is smaller than that used in ABC models and is within the range predicted by microscopic calculation [9].
5. Summary
This paper describes an approach to modeling InGaN LEDs that involves the self-consistent solution of bandstructure and carrier population problems. The motivation is to provide direct input of
bandstructure properties, in particular, their carrier-density dependences arising from screening of piezoelectric and spontaneous polarization fields. Other advantages include consistent treatment
of spontaneous emission, carrier capture and leakage, and nonequilibrium effects, as well as a description of optical emission from quantum-well and barrier transitions on an equal footing.
Application of the model is demonstrated with two examples, chosen to illustrate the role of bandstructure changes on LED efficiency. The first example shows that higher defect
recombination in QWs than barriers, when combined with bandstructure changes from screening of QCSE, can give rise to an efficiency droop. By casting the conditions in terms of defect densities in QW
versus barrier layers, the model provides direct connection to measurable device properties. Within this model, LED efficiency would increase again for excitation at which the QCSE is sufficiently
screened. The second example describes the role of Auger carrier loss in maintaining efficiency droop to high current densities, as observed in present experiments. By also including the effects of
bandstructure changes, carrier capture and leakage, and plasma heating, one finds the necessary Auger coefficient to be in closer agreement with microscopic calculations than estimates from
experimental curve fitting using the ABC model.
Lastly, the paper does not make use of any mechanism for the efficiency droop that is not already proposed in the literature. Rather, its goal is to introduce an approach for systematically
incorporating potential contributions, both intrinsic and extrinsic, to produce a comprehensive model based on microscopic physics. It is possible that the differences in observed droop behavior
(involving different LED emitting wavelengths, polar versus nonpolar substrates, with or without electron blocking layers, etc.) arise from differences in the relative importance of various mechanisms. The $k_\perp$-resolved LED model described in this paper can provide a more precise estimation of their relative strengths than the commonly used ABC model [3] and is easier to implement than a first-principles, many-body approach [8].
References and links
1. M. R. Krames, O. B. Shchekin, R. Mueller-Mach, G. O. Mueller, L. Zhou, G. Harbers, and M. G. Craford, “Status and future of high-power light-emitting diodes for solid-state lighting,” J. Display
Technol. 3, 160–175 (2007). [CrossRef]
2. M. H. Kim, M. F. Schubert, Q. Dai, J. K. Kim, E. F. Schubert, J. Piprek, and Y. Park, “Origin of efficiency droop in GaN-based light-emitting diodes,” Appl. Phys. Lett. 91, 183507–183510 (2007).
3. Y. C. Shen, G. O. Müller, S. Watanabe, N. F. Gardner, A. Munkholm, and M. R. Krames, “Auger recombination in InGaN measured by photoluminescence,” Appl. Phys. Lett. 91, 141101–141101 (2007).
4. A. A. Efremov, N. I. Bochkareva, R. I. Gorbunov, D. A. Lavrinovich, Yu. T. Rebane, D. V. Tarkhin, and Yu. G. Shreter, “Effect of the Joule heating on the quantum efficiency and choice of thermal
conditions for high-power blue InGaN/GaN LEDs,” Semiconductors 40, 605–610 (2006). [CrossRef]
5. S. F. Chichibu, T. Azuhata, M. Sugiyama, T. Kitamura, Y. Ishida, H. Okumura, H. Nakanishi, T. Sota, and T. Mukai, “Optical and structural studies in InGaN quantum well structure laser diodes,”
J. Vac. Sci. Technol. B 19, 2177–2183 (2001). [CrossRef]
6. I. A. Pope, P. M. Smowton, P. Blood, and J. D. Thompson, “Carrier leakage in InGaN quantum well light-emitting diodes emitting at 480nm,” Appl. Phys. Lett. 82, 2755–2757 (2003). [CrossRef]
7. H.-Y. Ryu, H.-S. Kim, and J.-I. Shim, “Rate equation analysis of efficiency droop in InGaN light-emitting diodes,” Appl. Phys. Lett. 95, 081114–081117 (2009). [CrossRef]
8. J. Hader, J. V. Moloney, B. Pasenow, S. W. Koch, M. Sabathil, N. Linder, and S. Lutgen, “On the importance of radiative and Auger losses in GaN-based quantum wells,” Appl. Phys. Lett. 92, 261103–261105 (2008). [CrossRef]
9. K. T. Delaney, P. Rinke, and C. G. Van de Walle, “Auger recombination rates in nitrides from first principles,” Appl. Phys. Lett. 94, 191109–191111 (2009). [CrossRef]
10. A. Bykhovski, B. Gelmont, and M. Shur, “The influence of the strain-induced electric field on the charge distribution in GaN-AlN-GaN structure,” J. Appl. Phys. 74, 6734–6739 (1993). [CrossRef]
11. J. S. Im, H. Kollmer, J. Off, A. Sohmer, F. Scholz, and A. Hangleiter, “Reduction of oscillator strength due to piezoelectric fields in GaN/AlGaN quantum wells,” Phys. Rev. B 57, R9435–R9438
(1998). [CrossRef]
12. W. W. Chow, M. H. Crawford, J. Y. Tsao, and M. Kneissl, “Internal efficiency of InGaN light-emitting diodes: beyond a quasiequilibrium model,” Appl. Phys. Lett. 97, 121105–121107 (2010).
13. S. L. Chuang and C. S. Chang, “k · p method for strained wurtzite semiconductors,” Phys. Rev. B 54, 2491–2504 (1996). [CrossRef]
14. E. Jaynes and F. Cummings, “Comparison of quantum and semiclassical radiation theories with application to the beam maser,” Proc. IEEE 51, 89–109 (1963). [CrossRef]
15. W. W. Chow, H. C. Schneider, S. W. Koch, C. H. Chang, L. Chrostowski, and C. J. Chang-Hasnain, “Nonequilibrium model for semiconductor laser modulation response,” IEEE J. Quantum Electron. 38,
402–409 (2002). [CrossRef]
16. I. Waldmueller, W. W. Chow, M. C. Wanke, and E. W. Young, “Non-equilibrium many-body theory of intersub-band lasers,” IEEE J. Quantum Electron. 42, 292–301 (2006). [CrossRef]
17. W. W. Chow, A. F. Wright, A. Girndt, F. Jahnke, and S. W. Koch, “Microscopic theory of gain for an In-GaN/AlGaN quantum well laser,” Appl. Phys. Lett. 71, 2608–2610 (1997). [CrossRef]
18. W. W. Chow and S. W. Koch, Semiconductor-Laser Fundamentals: Physics of the Gain Materials (Springer, 1999).
19. H. Zhao, G. Liu, J. Zhang, J. Poplawsky, V. Dierolf, and N. Tansu, “Approaches for high internal quantum efficiency green InGaN light-emitting diodes with large overlap quantum wells,” Opt.
Express 19, A991–A1007 (2011). [CrossRef] [PubMed]
20. W. W. Chow, A. Knorr, and S. W. Koch, “Theory of laser gain in group-III nitrides,” Appl. Phys. Lett. 67, 754–756 (1995). [CrossRef]
21. W. W. Chow, A. F. Wright, and J. S. Nelson, “Theoretical study of room temperature optical gain in GaN strained quantum wells,” Appl. Phys. Lett. 68, 296–298 (1996). [CrossRef]
22. S.-H. Park, D. Ahn, J. Park, and T.-T. Lee, “Optical properties of staggered InGaN/InGaN/GaN quantum-well structures with Ga- and N-Faces,” Jpn. J. Appl. Phys. 50, 072101–072114 (2011). [CrossRef]
23. S. J. Jenkins, G. P. Srivastava, and J. C. Inkson, “Simple approach to self-energy corrections in semiconductors and insulators,” Phys. Rev. B 48, 4388–4397 (1993). [CrossRef]
24. A. F. Wright and J. S. Nelson, “Consistent structural properties for AlN, GaN, and InN,” Phys. Rev. B 51, 7866–7869 (1995). [CrossRef]
25. S. H. Wei and A. Zunger, “Valence band splittings and band offsets of AlN, GaN, and InN,” Appl. Phys. Lett. 69, 2719–2721 (1996). [CrossRef]
26. O. Ambacher, “Growth and applications of Group III-nitrides,” J. Phys. D: Appl. Phys. 31, 2653–2710 (1998). [CrossRef]
27. A. Armstrong, Sandia National Laboratories, Albuquerque, NM 87185 (personal communication, 2010).
28. S. Choi, H. J. Kim, S.-S. Kim, J. Liu, J. Kim, J.-H. Ryou, R. D. Dupuis, A. M. Fischer, and F. A. Ponce, “Improvement of peak quantum efficiency and efficiency droop in III-nitride visible
light-emitting diodes with an InAlN electron-blocking layer,” Appl. Phys. Lett. 96, 221105–221107 (2010). [CrossRef]
29. J. Hader, J. V. Moloney, and S. W. Koch, “Density-activated defect recombination as a possible explanation for the efficiency droop in GaN-based diodes,” Appl. Phys. Lett. 96, 221106–221108
(2010). [CrossRef]
30. Y. Y. Kudryk and A. V. Zinovchuk, “Efficiency droop in InGaN/GaN multiple quantum well light-emitting diodes with nonuniform current spreading,” Semicond. Sci. Technol. 26, 095007–095011 (2011).
31. C. Z. Ning, J. V. Moloney, A. Egan, and R. A. Indik, “A first-principles fully space-time resolved model of a semiconductor laser,” Quantum Semiclassical Opt. 9, 681–691 (1997). [CrossRef]
32. L. V. Keldysh, “Behaviour of non-metallic crystals in strong electric fields,” Sov. Phys. JETP 6, 763–770 (1958).
OCIS Codes
(230.3670) Optical devices : Light-emitting diodes
(230.5590) Optical devices : Quantum-well, -wire and -dot devices
(250.5590) Optoelectronics : Quantum-well, -wire and -dot devices
ToC Category:
Optical Devices
Original Manuscript: August 16, 2011
Revised Manuscript: September 22, 2011
Manuscript Accepted: September 22, 2011
Published: October 20, 2011
Weng W. Chow, "Modeling excitation-dependent bandstructure effects on InGaN light-emitting diode efficiency," Opt. Express 19, 21818-21831 (2011)
Help on Proofs (Rules of Inference)
March 25th 2010, 05:16 PM #1
I'm having a lot of trouble with proofs and rules of inference. If somebody could talk me through each of these problems I will be grateful. Proofs are very interesting to me and I know I need practice, but help is always appreciated.
* = conjunction
(P > T) * P
(T v S) > (T > R)
(T*P) * R
R > (Q > S)
P * R
T > P
T v Q
(P v Q) > (P * (S v R))
P v (T * R)
(T * R) > S
~S * (P > Q)
Q v S
Thank you.
March 25th 2010, 06:53 PM #2
I'm having a lot of trouble with proofs and rules of inference. If somebody could talk me through each of these problems I will be grateful. Proofs are very interesting to me and I know I need practice, but help is always appreciated.
* = conjunction
(P > T) * P
(T v S) > (T > R)
(T*P) * R
Let's see if I can write this the "usual" way AND "translate" it. You do the formal writing:
$(P\rightarrow T)\wedge P$ -- for this to be true both sides must be true, so P is true but then also T has to be true for $P\rightarrow T$ to be true.
$(T\vee S)\rightarrow (T\rightarrow R)$ -- The left side here is always true since T is, so the right side must be true as well, and since T is true also R is true
$T\wedge P\wedge R$ -- follows at once from the above
R > (Q > S)
P * R
T > P
T v Q
Hint: the second line tells us that both P, R are true...and this is pretty much all you need!
Now you try the other ones by yourself.
P.S. Of course, you must thoroughly know the truth tables for the different connectives...!
(P v Q) > (P * (S v R))
P v (T * R)
(T * R) > S
~S * (P > Q)
Q v S
Thank you.
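For the last set, assuming the final line Q v S is the intended conclusion (the problem statement doesn't mark it), one possible derivation is below; note that the first premise is never needed:

1. (P v Q) > (P * (S v R)) ..... Premise
2. P v (T * R) ..... Premise
3. (T * R) > S ..... Premise
4. ~S * (P > Q) ..... Premise
5. ~S ..... 4, Simplification
6. ~(T * R) ..... 3, 5, Modus Tollens
7. P ..... 2, 6, Disjunctive Syllogism
8. P > Q ..... 4, Simplification
9. Q ..... 7, 8, Modus Ponens
10. Q v S ..... 9, Addition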
The Stock Market Is STILL 41% To 52% Overvalued
The Q Ratio is a popular method of estimating the fair value of the stock market developed by Nobel Laureate James Tobin. It's a fairly simple concept, but laborious to calculate. The Q Ratio is the
total price of the market divided by the replacement cost of all its companies. Fortunately, the government does the work of accumulating the data for the calculation. The numbers are supplied in the
Federal Reserve Z.1 Flow of Funds Accounts of the United States, which is released quarterly.
The first chart shows the Q Ratio from 1900 to the present. I've calculated the ratio since the latest Fed data (through 2011 Q2) based on a linear extrapolation of the Z.1 data itself.
Interpreting the Ratio
The data since 1945 is a simple calculation using data from the Federal Reserve Z.1 Statistical Release, section B.102., Balance Sheet and Reconciliation Tables for Nonfinancial Corporate Business.
Specifically it is the ratio of Line 35 (Market Value) divided by Line 32 (Replacement Cost). It might seem logical that fair value would be a 1:1 ratio. But that has not historically been the case.
The explanation, according to Smithers & Co. (more about them later) is that "the replacement cost of company assets is overstated. This is because the long-term real return on corporate equity,
according to the published data, is only 4.8%, while the long-term real return to investors is around 6.0%. Over the long-term and in equilibrium, the two must be the same."
The average (arithmetic mean) Q Ratio is about 0.71. In the chart below I've adjusted the Q Ratio to an arithmetic mean of 1 (i.e., divided the ratio data points by the average). This gives a more
intuitive sense to the numbers. For example, the all-time Q Ratio high at the peak of the Tech Bubble was 1.82 — which suggests that the market price was 157% above the historic average of
replacement cost. The all-time lows in 1921, 1932 and 1982 were around 0.30, which is about 57% below replacement cost. That's quite a range.
Another Means to an End
Smithers & Co., an investment firm in London, incorporates the Q Ratio in their analysis. In fact, CEO Andrew Smithers and economist Stephen Wright of the University of London coauthored a book on
the Q Ratio, Valuing Wall Street. They prefer the geometric mean for standardizing the ratio, which has the effect of weighting the numbers toward the mean. The chart below is adjusted to the
geometric mean, which, based on the same data as the two charts above, is 0.65. This analysis makes the Tech Bubble an even more dramatic outlier at 179% above the (geometric) mean.
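The mean adjustment itself is simple arithmetic. As a minimal sketch (written in C, with made-up placeholder values rather than actual Z.1 data), the following divides a Q series by its arithmetic and geometric means so that 1.0 marks average valuation:

#include <stdio.h>
#include <math.h>

int main(void) {
    double q[] = {0.55, 0.71, 1.10, 1.82, 0.90};   /* placeholder Q values */
    int n = sizeof q / sizeof q[0];
    double sum = 0.0, logsum = 0.0;
    for (int i = 0; i < n; i++) { sum += q[i]; logsum += log(q[i]); }
    double amean = sum / n;            /* arithmetic mean */
    double gmean = exp(logsum / n);    /* geometric mean */
    for (int i = 0; i < n; i++)
        printf("Q=%.2f  arith-adj=%.2f  geom-adj=%.2f\n",
               q[i], q[i] / amean, q[i] / gmean);
    return 0;
}

A reading of 1.82 against an arithmetic mean of 0.71 reproduces the roughly 157%-above-average figure quoted for the Tech Bubble.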
Extrapolating Q
Unfortunately, the Q Ratio isn't a very timely metric. The Flow of Funds data is over two months old when it's released, and three months will pass before the next release. To address this problem,
I've been making estimates for the more recent months based on a combination of changes in the VTI (the Vanguard Total Market ETF) price changes and an extrapolation of the Flow of Funds data itself.
Bottom Line: The Message of Q
The mean-adjusted charts above indicate that the market remains significantly overvalued by historical standards — by about 41% in the arithmetic-adjusted version and 52% in the geometric-adjusted
version. Of course periods of over- and under-valuation can last for many years at a time, so the Q Ratio is not a useful indicator for short-term investment timelines. This metric is more
appropriate for formulating expectations for long-term market performance. As we can see in the next chart, the current level of Q has been associated with several market tops in history — the Tech
Bubble being the notable exception.
Please see the companion article Market Valuation Indicators that features overlays of the Q Ratio, the P/E10 and the regression to trend in US Stocks since 1900. There we can see the extent to which
these three indicators corroborate one another.
Footnote on intangibles: I frequently receive emails asking about the absence of a line item for intangibles in my Q Ratio analysis. On this topic I defer to Andrew Smithers, who touches on the topic
in the FAQs on his website:
Does the Existence of Intangible Assets Invalidate q?
No, the evidence is that the aggregate value of intangibles, if any, does not change over time relative to the replacement value of tangible assets. This is shown by the mean reversion of q
relative to its average. For an academic analysis see "What Does q Predict?" by Donald Robertson and Stephen Wright, available on http://www.econ.bbk.ac.uk/faculty/wright.
This post originally appeared at Advisor Perspectives. Copyright 2014.
|
{"url":"http://www.businessinsider.com/the-stock-market-is-still-41-to-52-overvalued-2011-10","timestamp":"2014-04-19T20:37:35Z","content_type":null,"content_length":"128229","record_id":"<urn:uuid:6a520e8c-83f3-4fd4-8e7e-2d336043e525>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Special Supplement: The Development of Wittgenstein's Philosophy
by D.F. Pears
This theory of logical necessity is so simple and elegant that it attracts all the attention, and Wittgenstein’s next step sometimes goes unnoticed. His next step is to argue that, though the
propositions of logic are tautologies and not theories about the nature of reality, the fact that logic exists does indicate something about the nature of reality—it indicates that reality consists
ultimately of simple things. His argument for this connection between the existence of logic and his ontology can be broken down into stages. The existence of logic depends on the possibility of
combining factual propositions to form tautologies. But that requires the possibility of first constructing factual propositions, without which there would be nothing to combine; and this, in its
turn, involves the possibility of elementary propositions and the ultimate granulation of reality. Read in this direction, the argument is a transcendental one, which, in the spirit of Kant, seeks to
show how the a priori propositions of logic are possible. They are possible only because reality consists ultimately of simple things. From this point the argument can be traced back in the reverse
direction—from simple things to elementary propositions, and thence by the application of the truth-functional formula to the limit of language, which is fixed by the possible permutations and
combinations of elementary propositions, however much this may be disguised by the convenient grossness of ordinary factual discourse.
So the task which Wittgenstein undertook in the Tractatus was really an investigation of the foundations of logic. His Notebooks show that this was how it began, and the other aspect of it, which he
emphasizes in the Preface to the Tractatus—the fixing of the limit of language—emerged later. The two aspects are connected because logic reveals the structure of factual discourse, and so reveals
the structure of reality, which factual discourse reflects. It is for logic to reveal these two structures, which are really one, because they are given in advance of experience—a priori. Experience
can only give us a world of facts—everything that is contingently the case—but this world floats in a space of possibilities, which is given a priori. The limit of the space of possibilities, which
is reflected in the limit of factual discourse, is given by logic. For the point of origin from which the limit is calculated is determined by logic, and the truth-functional formula, according to
which it is calculated, is a logical formula.
So though the propositions of logic are tautologies and lack factual sense, it is logic that reveals the essential structure of reality. The point is sometimes overlooked, because it might be thought
that, if the propositions of logic are tautologies, that is only because we happen to have chosen a language in which tautologies are produced when certain propositions are combined. So it might
appear that a different choice would have produced a different result, and that all logical truths are true merely by convention. But this was not Wittgenstein’s view. He held that, although we have
certain options—that this word should have this meaning, and that word that meaning—the general framework of any factual language is fixed objectively in advance. The general framework is a
truth-functional structure based on elementary propositions. When human beings devise a particular factual language, they must connect it up to this pre-existing structure. They have certain options
about the ways in which they make the connection, but the structure itself is rigid.
The Tractatus is a philosophical study of this structure, and the medium through which it works is logic. This explains why the book contains so little detailed analysis of particular types of
proposition. Wittgenstein is concerned with the general theory of factual language, and with the general theory of reality which he believed that he could deduce from it. He was not much concerned
with what seemed to him to be the comparatively trivial details of particular analyses. Now the general theory of language and the ontology do not themselves belong to factual discourse. That is one
connection between the darkness at the center of Wittgenstein’s theory of factual discourse and the mysteriousness of certain kinds of non-factual discourse—such things cannot be expressed in factual
propositions. But there is also another connection. Wittgenstein’s ontology and part of his theory of language—the part which deals with elementary propositions—are intrinsically mysterious. It is
not that they can be fully and clearly expounded in language whose only disadvantage—if it is one—is that it is not factual language. The exposition of them is necessarily sketchy and impossible to
illustrate with examples, because they are speculative theories. Even if his arguments proved that these things must be so, they could not be seen to be so.
Ten years later Wittgenstein had begun to dismantle parts of this system, but not in full view of the usual public. In fact, out of the mass of written work which he produced between the late 1920s
and his death in 1951 only one brief article was published in his lifetime. In this period his ideas became known through discussions with small groups and lectures to restricted audiences. The
result was that, at a time when some of the ideas of the Tractatus were being modified, they continued to exert an influence which was, perhaps, greater than it would otherwise have been. The theory
of analysis, or at least the usable part of it, was taken as a model by analytical philosophers in this period.
The model suggested that all factual propositions are truth-functional compounds of simpler propositions. So it appeared that the way to deal with a type of factual proposition which raises a
philosophical problem would be to analyze it into its simpler and more explicit components. Thus in the realm of contingency philosophy became the search for translations which would reveal all the
implications of the various types of factual proposition and the real nature of their subject matter. Factual propositions whose meaning appears on first inspection to be rather nebulous, such as
propositions about social or political groups, or propositions about electrons, would be analyzed in a way that made their implications absolutely clear. Similarly, in the realm of necessity, if all
a priori propositions are reducible to tautologies, the way to deal with a puzzling proposition, which is evidently a priori, and yet which does not seem to be tautological, would be to translate it
into a form in which it was demonstrably tautological. For example, it is evidently an a priori truth that we cannot travel backwards in time, and, though it does not seem to be a tautology in this
form, it could be hoped that it would be possible to translate it into a tautology if an adequate analysis of temporal terms could be found.
Wittgenstein’s modification of the second of these two ideas began in 1929, and can be seen in Philosophische Bemerkungen. All a priori propositions would be reducible to tautologies only if the kind
of final analysis which is described in the Tractatus could be achieved. But in the Bemerkungen he admits that it cannot be achieved. He had always conceded that he had not been able to produce final
analyses of that kind. But he had thought that they existed and awaited discovery. This is what he retracted in 1929.
In order to understand the retractation, it is necessary to remember the specification of elementary propositions: they are logically independent of one another. Now ordinary factual propositions are
not logically independent of one another. For example, a proposition describing a certain area as blue is incompatible with another proposition describing it as yellow. In the Tractatus Wittgenstein
had explained this incompatibility by saying that colors have a certain internal complexity. His idea was that propositions in which colors are mentioned could be analyzed down into elementary
propositions which would mention things with no internal complexity. If this kind of analysis could be achieved, an a priori proposition such as “Nothing can be blue and yellow simultaneously” would
be demonstrated to be a tautology, and its negation, “Something can be blue and yellow simultaneously,” would be demonstrated to be a contradiction. For in the final analysis of the proposition “This
thing is blue” there would be found one or more elementary propositions the negation of which would be found in the final analysis of the proposition “This thing is yellow.” Now if a proposition is
combined with its negation, the result is a contradiction. So in the final analysis the proposition which is produced when the two color propositions are combined—“This thing is both blue and yellow
simultaneously”—would turn out to be a contradiction, and its negation would therefore be a tautology.
If this program could be carried out, the necessary truth of all a priori propositions, except those which give the ontology of the Tractatus, would be shown to be merely the result of the fact that
certain combinations of elementary propositions are tautologies. The necessary truth of an a priori proposition would never depend on the specific natures of the things mentioned in elementary
propositions because those things would have no internal complexity. This is why the elementary propositions which mention those things would be logically independent of one another. However, the
fact that certain combinations of them are tautologies would reveal something about the nature of reality—it would reveal the general structure of reality, which is objectively necessary. To put the
same point the other way round, the general structure of reality would necessarily be reflected in the logical grammar of any factual language which human beings may devise. But, according to
Wittgenstein’s earlier theory, there would be no further sources of necessity in the nature of things. If we put on one side the necessary truths about the general structure of reality which give the
ontology of the Tractatus, there would be no residual sources of necessity left in the nature of things.
In the Bemerkungen he admits that this theory of necessity will not work in all cases. He had come to think that the necessary truth of certain specific a priori propositions cannot be explained by
reducing them to tautologies. His reason for this retractation was that he no longer found it possible to believe that there would be no incompatibilities between the words contained in the
elementary propositions into which he thought that all factual propositions are analyzable. Suppose, for example, that incompatibilities of colors were explained by analyzing propositions about them
into propositions about the velocities of particles. Then the same situation would recur at a lower level of analysis, because there would be a wide range of velocities which would be incompatible
with one another. So Wittgenstein conceded that elementary propositions would not be logically independent of one another. They would contain words which produced certain incompatibilities between
them, just as color words do, and it would not be possible to reduce these incompatibilities to contradictions by further analysis. All that the philosopher can do is to note the incompatibilities
and treat them as irreducible features of the logical grammar of the words.
|
{"url":"http://www.nybooks.com/articles/archives/1969/jan/16/a-special-supplement-the-development-of-wittgenste/?page=4","timestamp":"2014-04-17T06:55:10Z","content_type":null,"content_length":"48114","record_id":"<urn:uuid:f900cb1c-2de4-4824-98d2-36da7826e2c2>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chance Guide to Supplementary Texts
Thomas Moore
Those of us who have taught a Chance course have used a supplementary textbook and have found this helpful. It is useful for students to have a resource where they can get a systematic treatment of
statistical concepts at an elementary level. It is important that this book be elementary and very clear so that students with no formal background can read it on their own. It is useful if this book
is non-mathematical and organized into chapters that may be understood if read in a non-linear order. Two books that have met these requirements are David Moore's Statistics: Concepts and
Controversies (3rd edition, paperback, 1991, W.H. Freeman) [Moore] and Freedman, Pisani, Purves, and Adhikari's Statistics (2nd edition, hardback, 1991, W.W. Norton) [FPPA].
Moore is the more elementary of the two. It contains 3 major parts: I - ``Collecting Data'', II - ``Organizing Data'', and III - ``Drawing Conclusions from Data''. Part I has very lively and
intuitive chapters on sampling, experiments, and measurement with lots of real life examples. Part II has good explanation of what are now standard descriptive and EDA statistics, including an
excellent chapter on relationships including both qualitative variables (cross-classifications, controlling for extraneous variables), and quantitative variables (regression, correlation). All of
this is done at the descriptive level with the emphasis being on what relationships mean. Part III treats probability from a frequentist viewpoint and thinks of its computation as a job for simulations. This paves the way for a very intuitive final chapter on statistical inference, with the emphasis on what a confidence interval is and what a statistical test is and their uses and abuses.
FPPA has much the same material as Moore but at a slightly deeper level. For example, FPPA discusses residuals and the root-mean-square-error for the regression line while Moore does not. FPPA does a
lot more with probability than Moore. FPPA discusses conditional probability, teaches computing probability by a few simple rules, and has a chapter on the binomial. FPPA then devotes 3 entire
chapters to computing probabilities for sums (and averages) of draws (with replacement) from a box of numbered tickets - so-called ``box models''. This discussion culminates in the central limit
theorem. It is done in a highly conversational, non-technical way, free of mathematical proof or notation. Yet it clearly demands much more of the student than does Moore.
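Since the box-model chapters build toward the square-root law for the spread of a sum, a small simulation can make the idea concrete. This sketch (in C; the box contents, draw count, and trial count are arbitrary choices, not taken from the book) sums draws made with replacement and compares the observed spread of the sums to sqrt(draws) times the SD of the box:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void) {
    int box[] = {0, 1};                 /* a 0-1 box, e.g. counting heads */
    int nbox = 2, draws = 100, trials = 10000;
    srand(12345);
    double mean = 0.0, m2 = 0.0;        /* running mean and sum of squares */
    for (int t = 0; t < trials; t++) {
        int sum = 0;
        for (int d = 0; d < draws; d++)
            sum += box[rand() % nbox];  /* draw with replacement */
        double delta = sum - mean;      /* Welford update */
        mean += delta / (t + 1);
        m2 += delta * (sum - mean);
    }
    printf("observed:  mean %.2f, SD %.2f\n", mean, sqrt(m2 / trials));
    printf("box model: mean %.2f, SD %.2f\n",
           draws * 0.5, sqrt((double)draws) * 0.5);  /* SD of a 0-1 box is 0.5 */
    return 0;
}

The observed SD of the sums comes out near 5, matching sqrt(100) × 0.5.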
While both books can be used with a beginning audience, FPPA will demand more of the student and so may require a bit more help or should perhaps be used with students who are a bit more
quantitatively gifted.
Both Moore and FPPA contain excellent exercises. We recommend assigning some of these to the students on a regular basis to be counted in the course grade. Some of us assign these exercises on a
daily basis to be handed in and graded like normal homework. Others have had the students keep these in a looseleaf journal, to be self-graded and commented on by the students, and handed in 3 or 4
times during the semester. Since class discussion rarely revolves explicitly around these assignments, we are still feeling our way around the notion of getting our students to understand this material.
What follows is an identification of the various topics with portions of these two textbooks that pertain to those topics. We have chosen to list statistical topics followed by Chance topics. In the
list of Chance topics you should find most of the topics that at least one of us has taught. Attached to this document is a copy of the table of contents of each book.
Statistical topic: Surveys and sampling
Chance topics: Public opinion polls and survey sampling; Census undercount
• from MOORE: The Introduction to Part I (pages 1 - 3) plus Chapter 1, sections 1 - 5 form the essential reading. This covers the need for good sampling, investigates basic concepts and
terminology, without going beyond simple random samples. Moore gives lots of real life examples of good and bad surveys, and discusses concepts of bias, precision, and confidence interval. The
remaining sections of chapter 1 give the rest of the story including other kinds of probability sampling, as well as good discussions of policy and ethical issues. In chapter 8, sections 1 and 2,
Moore discusses confidence intervals more fully and includes formulas for confidence intervals. To understand these sections it helps if the student knows the rudiments of the normal curve, which
he/she can get from section 5 of chapter 4. It also helps to have read the probability material from Moore, but neither of these prerequisites is absolutely necessary.
• from FPPA: Chapter 19 has an excellent overview of survey samples with good historical examples of good and bad samples to motivate the need for proper design. This chapter covers the concept of
chance error in samples but does not explicitly introduce confidence intervals. To get this story the reader must read somewhat more technical chapters 20, 21, and 23. A hearty student could read
sections 1 - 3 of chapter 21, ignoring the few technical references and gain insight into the meaning of a confidence interval. To properly understand chapters 20, 21, and 23 it is necessary to
have read and understood at least chapters 13, 16, and 17 on basic probability and on setting up and working with box models. These chapters form the basic information for any Chance topic about surveys and sampling.
Statistical topic: Experiments and observational studies
Chance topics: Clinical trials, experiments, and other studies
Since many discussions in our Chance courses have centered on media reports of some new scientific study, this material is central to much of the course. For this reason it might be good to include
this reading early in the course. These readings and discussions typically do not require statistical inference.
• from Moore: The Introduction to Part I (pages 1 - 3) plus Chapter 2 on experimentation is the essential material. This is full of good examples and is highly non-technical.
• from FPPA: Chapters 1 and 2 on controlled experiments and observational studies provide all the basic information.
Statistical topic: Probability
Chance topics: Paradoxes, coincidences, gambling, DNA fingerprinting, streaks and runs (e.g., in sports), card shuffling, lotteries, etc.
• from Moore: Moore has a very basic approach to probability in chapter 7. This reading will provide the student with the concept of long-range relative frequency, the law of averages, equally
likely models, simulation, and expected value. This is a spare toolkit, so the instructor should be prepared to supplement as needed.
• from FPPA: Chapters 13 and 14 introduce the bare essentials to tackle many probability topics. Chapter 13 includes the long-range frequency definition, the definition of conditional events and
the product rule for a sequence of two possibly dependent events, independent events and the implications of independence, and uses the famous Collins case as a final example. Chapter 14 adds the
notion of equally likely models and the addition rule for mutually exclusive events to compute probabilities. These 2 chapters are really sufficient to handle any of the example topics listed
above with the exception of gambling, which seems to need the notion of expected value. To gain this concept one must read chapters 16 and 17 of FPPA which tell the reader how to compute
probabilities for sums (or averages) of draws from a box using the normal approximation. Chapter 17 includes roulette examples.
Statistical topic: Statistical tests
Chance topics: Statistics in the law and more advanced articles on clinical trials and experiments
Note: Journal articles on an experiment will typically require knowledge of statistical tests and P-value.
• from Moore: Two sections give the essence: sections 3 and 5 of Chapter 8. (Section 4 is an optional and more technical section.) It would be advantageous to have read Moore's probability
material, but it is not absolutely necessary. Section 3 introduces tests, P-value, and statistical significance. Section 5 discusses ``uses and abuses'' of these concepts.
• from FPPA: Sections 1 - 5 of Chapter 26 and Chapter 29 form the core. In Chapter 26 we learn what a statistical test is, about P-value, and about statistical significance. It really helps to know
how sample means vary (chapters 16 and 17) when reading Chapter 26. Chapter 29 includes the ``uses and abuses'' discussion and is important.
Statistical topic: Basic descriptive statistics
Chance topics: A variety of Chance topics require knowledge of means, standard deviations, basic graphical procedures, and the normal curve.
• from Moore: Chapters 4 and 5 give the core and are very readable. Especially important sections from Chapter 5 are section 1 on cross-classified data and section 3 on correlation and causation.
• from FPPA: Chapters 4, 5, 8, and 9 form the core. Chapter 5 gives the normal curve a thorough going over, but is accessible by most students. One could skip sections 3 and 4 of chapter 8 (one
need not know how to compute these statistics by hand).
Thought experiment: Origin of inertia is gravitational?
Maybe a stupid question, but if there is red/blue shift, wouldn't it be preserved after you stop applying force and thus stop the box again?
Let's consider this in detail:
Consider a photon moving back and forth: it will impart equal (but opposite sign) momentum when reflected by both sides.
Consider the case when a photon is moving in the -x direction after the box is accelerated. The photon is blue shifted, so when it hits the wall it will impart more momentum than before. But it is
reflected as a blue shifted photon, so when it hits the other side of the wall it will have the same blue shifted momentum and impart the same momentum, be reflected back, and so forth, and everything
is in balance again.
So the Doppler shift only causes a reaction force (change of momentum) when there is a change of velocity.
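(A rough check of this, added for concreteness, using first-order Doppler: a photon of energy E transfers momentum 2E/c at each normal-incidence reflection. If the light is blue shifted to E' = E(1 + v/c), it transfers 2E'/c at one wall, is reflected still carrying E', and then transfers the same 2E'/c at the opposite wall. The transfers on the two walls cancel, so the time-averaged net force is zero once the velocity is constant, which is exactly the conclusion above.)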
|
{"url":"http://www.physicsforums.com/showthread.php?p=3755282","timestamp":"2014-04-16T10:28:57Z","content_type":null,"content_length":"65499","record_id":"<urn:uuid:2133794d-0e41-4abc-b6cf-255a52778bad>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to convert string to int without using library functions in c
5 comments:
1. while(str[i] != '\0'){
       if(str[i] < 48 || str[i] > 57){
           printf("Unable to convert it into integer.\n");
           return 0;
       }
       sum = sum*10 + (str[i] - 48);
       i++;
   }
For this part, I do not understand why it checks whether str[i] is smaller than 48 or larger than 57.
Can you explain it again?
2. str[i] will return the ASCII value of that character. The ASCII value of '0' is 48 and that of '9' is 57.
3. Can anyone write the full C program for the day number? For example, 04-Feb-2012 must be printed as 35 2012,
meaning Jan has 31 days, so 31 + 4 = 35.
4. How can I print prime numbers in an if-else condition, without using a loop?
5. May I put
sum = sum*10 + (str[i]);
instead of
sum = sum*10 + (str[i] - 48)?
And here, does 48 mean '0'?
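For reference, a complete version of the routine being discussed might look like the sketch below (assembled from the fragments above; the function name string_to_int and the minus-sign handling are additions of this sketch, not from the original post):

#include <stdio.h>

/* Convert a decimal string to an int without library conversion functions. */
int string_to_int(const char *str)
{
    int i = 0, sign = 1, sum = 0;

    if (str[i] == '-') {                     /* optional leading minus (an addition) */
        sign = -1;
        i++;
    }
    while (str[i] != '\0') {
        if (str[i] < '0' || str[i] > '9') {  /* same test as str[i] < 48 || str[i] > 57 */
            printf("Unable to convert it into integer.\n");
            return 0;
        }
        sum = sum * 10 + (str[i] - '0');     /* '0' is ASCII 48, so this maps '7' to 7 */
        i++;
    }
    return sign * sum;
}

int main(void)
{
    printf("%d\n", string_to_int("1024"));   /* prints 1024 */
    return 0;
}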
|
{"url":"http://www.cquestions.com/2011/09/how-to-convert-string-to-int-without.html","timestamp":"2014-04-21T02:00:03Z","content_type":null,"content_length":"122171","record_id":"<urn:uuid:ee374c0b-f477-4a5d-b686-b8aebebdaf21>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Two-dimensional Stokes Flows with Time-dependent Free Boundaries Driven by Surface Tension -- from Wolfram Library Archive
We consider the two-dimensional quasi-steady Stokes flow of an incompressible Newtonian fluid occupying a time-dependent simply-connected region bounded by a free surface, the motion being driven
solely by a constant surface tension acting at the free boundary. Of particular concern here are such flows that start from an initial configuration with the fluid occupying an array of touching
circular disks. We show that, when there are N such disks in a general position, the evolution of the fluid region is described by a conformal map involving 2N-1 time-dependent parameters whose
variation is governed by N invariants and N-1 first-order differential equations. When N=2, or when the problem enjoys some special features of symmetry, the moving boundary of the fluid domain
during the motion can be determined by solving purely algebraic equations, the solution of a single differential equation being needed only to link a particular boundary shape to a particular time.
The analysis is aided by exploiting a connection with Hele-Shaw free boundary flows when the zero-surface-tension model is employed. If the initial configuration for the Stokes flow problem can be
produced by injection (or suction) at N points into an initially empty Hele-Shaw cell, as can the N-disk configuration referred to above, then so can all later configurations; the points where the
fluid must be injected move, but the amount to be injected at each of the N points remains invariant. The efficacy of our solution procedure is illustrated by a number of examples, and we exploit the
method to show that the free boundary in such a Stokes flow driven by surface tension alone may pass through a cusped state.
|
{"url":"http://library.wolfram.com/infocenter/Articles/3206/","timestamp":"2014-04-16T07:26:51Z","content_type":null,"content_length":"32485","record_id":"<urn:uuid:5fc972a3-eac8-46f3-a2aa-6c2295948de4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
|
c program
Yes... it compiles, but it does not give me the right answer. Also, I need to get the probability, but I don't know where to put it so I can get it right.
Please help me... at least tell me if I am going in the right direction, or if the whole program is wrong.
Where are you reading the birthdays for each guest at the party?
It should be in bdays, but I don't think it is right. Can you tell me what I can do?
bdays = 1 + rand() % 365; printf("%d, %d \n", i, bdays); Change that to: bdays[i] = 1 + rand()%365; printf("%d, %d\n",i,bdays[i]);
This is my new modified code that I was working on, but it still doesn't work. Can you tell me what is wrong with it, please?
Code:
#include <stdlib.h>
#include <time.h>
int party(int n);
main()
{
    int guests, j, i, k, prob, n;
    long int parties, samebday, diffbday, count, result;
    printf("Enter the number of parties: ");
    scanf("%ld", &parties);
    printf("Enter the number of guests: ");
    scanf("%d", &guests);
    printf(" the probability that 2 guests have the same birthday is: %d", prob);
    srand(time(NULL));
    samebday = 0;
    diffbday = 0;
    for(i = 1; i <= k; i++) {
        prob = prob*((double) (n-i+1) / n);
        prob = 1.0 - prob;
    }
    for(j = 1; j <= parties; j++) {
        party(guests);
        count = party(result);
        printf("%d \n", count);
        if (count == 1) samebday++;
        if (count == 0) diffbday++;
    }
    printf("%d \n", samebday);
    printf("%d \n", diffbday);
    return 0;
}

int party(int n)
{
    int i, k, match, bdays[i];
    long int parties, hold;
    match = 0;
    for(i = 0; i < n; i++) {
        bdays[i] = 1 + rand() % 365;
        printf("%d, %d \n", i, bdays[i]);
        for(k = 0; k >= i; k--) {
            if(bdays[k] == bdays[i]) match = 1;
        }
    }
    return(match);
}
No. Your modification shows that you copied this from somewhere and don't have the slightest clue what you are doing. I am not going to edit this for you unless you explain what your code does in detail.
Okay, so why the hell are you declaring the array in your party function as size i? Give it a meaningful value like 50 or 100 or however many guests you have. In this case, I'm assuming n.
Oh OK, thanks... and is the rest OK, or is it wrong?
There are hints my teacher sent me by email, but I still don't get it: • Write a function with prototype int party(int n) that will simulate one party with n guests. The function will return 1 if
two of the guests have the same birthday, and 0 if all guests have different birthdays (i.e., born on the same day of the year, not necessarily in the same year). To be more precise, the function
party should return 1 if any two birthdays are the same, regardless of whether any other guests have the same birthday. For example, if two pairs of guests have the same birthday, the function
returns 1. If three guests have the same birthday, the function returns 1. And so forth. The only time the function returns 0 is if all the birthdays are different. • In the function party, use the
function rand to generate the random birthdays. (A birthday may simply be a number between 1 and 365. You can ignore leap years). • Call srand once at the beginning of main. (Do not call srand in the
function party. If you do, and pass it the same seed every time, your function will simulate the same party every time it is called.) • In the function party, use an int array to record birthdays
that have been assigned. Give some thought to how large the array should be, and how you will record birthdays. There is more than one reasonable way to do this. • Aside: If you understand
probability theory, you might be able to answer this question analytically. For many other questions of this sort, however, simulation is the only practical way to find an answer.
for(k=0;k>=i;k--): this is wrong. It needs to read for(k = i; k >= 0; k--). The rest seems right. Give it a try.
Actually one more thing. Delete all the variables you are not actually using, they are just cluttering up the code. Also, make sure you return 1 where you currently set your match to 1 instead of
just setting the variable to 1. That's what the prof. was suggesting. If you find two that match return 1, you don't even care about the rest. If not, the for will finish and it will return 0 at the
bottom of the function.
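Putting the thread's advice together, a corrected version might look like the sketch below (my assembly of the suggestions above, not the original poster's final code; the array size and the pigeonhole shortcut are assumptions of this sketch):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Returns 1 if any two of the n guests share a birthday, else 0. */
int party(int n)
{
    int bdays[366];   /* assumed size: enough for any party that can still avoid a match */
    int i, k;
    if (n > 365)
        return 1;     /* pigeonhole: more guests than days guarantees a match */
    for (i = 0; i < n; i++) {
        bdays[i] = 1 + rand() % 365;
        for (k = i - 1; k >= 0; k--)     /* compare only with earlier guests */
            if (bdays[k] == bdays[i])
                return 1;                /* return immediately on a match */
    }
    return 0;
}

int main(void)
{
    long parties, same = 0, j;
    int guests;
    srand((unsigned) time(NULL));        /* seed once, in main */
    printf("Enter the number of parties: ");
    scanf("%ld", &parties);
    printf("Enter the number of guests: ");
    scanf("%d", &guests);
    for (j = 0; j < parties; j++)
        same += party(guests);
    printf("Estimated probability: %f\n", (double) same / parties);
    return 0;
}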
|
{"url":"http://cboard.cprogramming.com/c-programming/126202-c-program.html","timestamp":"2014-04-18T01:14:03Z","content_type":null,"content_length":"90945","record_id":"<urn:uuid:1d7f08e4-af0a-4e13-bb39-01e986fe9b31>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Adding pennies together every day they add together HELP
Adding pennies together every day they add together HELP
Hi, this is a homework assignment, and I don't want anybody to just give it away because I really want to learn it myself, but I am so stumped. I understand I need to add the first number of pennies
together, but I am so confused on how to do this. Would I use 2 variables to get this and add them together? But if I did this I don't think it would work in one cout statement, since Day 1 should
start with 1 penny... so that's why I was thinking I needed to use pennies++; but I have no clue. Somebody please point me in the right direction. Here is the code so far:
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;

int main()
{
    int totalDays = 0, pennies = 0, days = 1, pay;
    do
    {
        cout << "How many days have you worked? ";
        cin >> totalDays;
        if(totalDays < 1)
            cout << endl << "Pick up some hours please...re-enter." << endl << endl;
    } while(totalDays < 1);
    cout << setw(5) << left << endl << "Day #" << setw(17) << right << "Pay" << endl
         << "------------------------" << endl;
    while(days <= totalDays)
    {
        pennies += days;
        cout << setw(5) << left << "Day " << days++ << setw(17) << right << pennies;
        cout << endl;
    }
    return 0;
}
it should be 1 penny for day 1
2 pennies for day 2
4 pennies for day 3
8 pennies for day 4, because 1+2+4 plus the 1 penny for that day... and this makes me think the pennies for all those days have to have separate variables, but I know for sure that that's not
what I'm supposed to do... so confused.
So are you supposed to be printing powers of two?
That's what I was thinking at first, but it doesn't work...
NEVERMIND! Before I posted this I wanted to try it out a couple of times, and I got it!!! I made the counter start at -1 so it would go to 0 and 1,
so the power of zero makes 1 penny, then the next power of 2, and so on. Here's the code!
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;

int main()
{
    int totalDays = 0, days = 1;
    double pennies, pay, x = -1;
    do
    {
        cout << "How many days have you worked? ";
        cin >> totalDays;
        if(totalDays < 1)
            cout << endl << "Pick up some hours please...re-enter." << endl << endl;
    } while(totalDays < 1);
    cout << setw(5) << left << endl << "Day #" << setw(17) << right << "Pay" << endl
         << "------------------------" << endl;
    while(days <= totalDays)
    {
        x++;  // advance the exponent each day, starting from -1 as described above
        pennies = pow(2, x);
        cout << setw(5) << left << "Day " << days++ << setw(17) << right << pennies;
        cout << endl;
    }
    return 0;
}
Why not just
pennies *= 2;
Because day 1 wouldn't start with a penny unless I set pennies to 1 and did two separate cout statements, but then the rest of the program wouldn't work, because I needed to add up all the pennies and cout the total earned.
Just initialize it outside of your loop? I don't see the problem.
Well, I got it now, but I don't understand what you mean by initialize it out of the loop; I did initialize and declare all my variables in the beginning.
I just did pennies = (pennies + 1)/100 to get the amount of pennies that day in dollar form.
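In outline, the approach Salem suggested looks like this (a minimal sketch of the logic; the thread's code is C++, but the arithmetic is identical in C, and the day count here is made up):

#include <stdio.h>

/* Doubling pennies: day 1 pays 1 penny, each later day pays double the day before. */
int main(void)
{
    int totalDays = 5, day;        /* hypothetical number of days */
    double pennies = 1.0;          /* initialized outside the loop, as suggested */
    double total = 0.0;

    for (day = 1; day <= totalDays; day++) {
        total += pennies;
        printf("Day %d: $%.2f\n", day, pennies / 100.0);
        pennies *= 2;              /* double for the next day */
    }
    printf("Total earned: $%.2f\n", total / 100.0);
    return 0;
}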
|
{"url":"http://cboard.cprogramming.com/cplusplus-programming/142295-adding-pennies-together-every-day-they-add-together-help.html","timestamp":"2014-04-19T18:35:07Z","content_type":null,"content_length":"71597","record_id":"<urn:uuid:991ab3bc-a8be-4b17-8e04-d6e64e68195c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
|
teachers should know. Many of these ideas are treated in more detail in textbooks intended for prospective elementary school teachers.
A major theme of the chapter is that numbers are ideas—abstractions that apply to a broad range of real and imagined situations. Operations on numbers, such as addition and multiplication, are also
abstractions. Yet in order to communicate about numbers and operations, people need representations—something physical, spoken, or written. And in order to carry out any of these operations, they
need algorithms: step-by-step procedures for computation. The chapter closes with a discussion of the relationship between number and other important mathematical domains such as algebra, geometry,
and probability.
Number Systems
At first, school arithmetic is mostly concerned with the whole numbers: 0, 1, 2, 3, and so on. The child’s focus is on counting and on calculating— adding and subtracting, multiplying and dividing.
Later, other numbers are introduced: negative numbers and rational numbers (fractions and mixed numbers, including finite decimals). Children expend considerable effort learning to calculate with
these less intuitive kinds of numbers. Another theme in school mathematics is measurement, which forms a bridge between number and geometry.
Mathematicians like to take a bird’s-eye view of the process of developing an understanding of number. Rather than take numbers a pair at a time and worry in detail about the mechanics of adding them
or multiplying them, they like to think about whole classes of numbers at once and about the properties of addition (or of multiplication) as a way of combining pairs of numbers in the class. This
view leads to the idea of a number system. A number system is a collection of numbers, together with some operations (which, for purposes of this discussion, will always be addition and
multiplication), that combine pairs of numbers in the collection to make other numbers in the same collection. The main number systems of arithmetic are (a) the whole numbers, (b) the integers (i.e.,
the positive whole numbers, their negative counterparts, and zero), and (c) the rational numbers—positive and negative ratios of whole numbers, except for those ratios of a whole number and zero.
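For example (an added illustration of this closure idea), the rational numbers form a number system under addition because the sum of two ratios of whole numbers is again such a ratio: a/b + c/d = (ad + bc)/(bd), and the denominator bd is not zero whenever b and d are not zero.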
Thinking in terms of number systems helps one clarify the basic ideas involved in arithmetic. This approach was an important mathematical discovery in the late nineteenth and early twentieth
centuries. Some ideas of arithmetic are fairly subtle and cause problems for students, so it is useful to have a viewpoint from which the connections between ideas can be surveyed.
|
{"url":"http://books.nap.edu/openbook.php?record_id=9822&page=72","timestamp":"2014-04-16T10:46:47Z","content_type":null,"content_length":"35479","record_id":"<urn:uuid:72a6b51c-a4a3-409d-a0be-4ad6e8172f88>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
|
transpose and symmetry
Prove if (transpose of A)*A = A, then A is symmetric and A = A^2:
I think I understand it, but I'm not sure if I can put it into proper terms.
Since the transpose of A = A (if symmetric), then if you multiply both sides of the equation, you get (transpose of A)*A = A^2,
so therefore A must = A^2 (using the original statement... if (transpose of A)*A = A...).
Any clarification please?
$A\ =\ A^\mathrm TA$
$\implies\ A^\mathrm T\ =\ (A^\mathrm TA)^\mathrm T\ =\ A^\mathrm T(A^\mathrm T)^\mathrm T\ =\ A^\mathrm TA\ =\ A$
Hence $A$ is symmetric and $A=A^\mathrm TA=AA=A^2.$
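A quick concrete check (an added illustration): the projection matrix $A=\begin{pmatrix}1&0\\0&0\end{pmatrix}$ satisfies $A^\mathrm TA=A$, and indeed it is symmetric and $A^2=A$.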
|
{"url":"http://mathhelpforum.com/advanced-algebra/93580-transpose-symmetry.html","timestamp":"2014-04-17T22:08:25Z","content_type":null,"content_length":"32009","record_id":"<urn:uuid:0ea765fc-8e22-4fe5-a627-c645151f8801>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Can a point be 5 units of length from a line?
Are there any points on the line y=2x-1 which fulfill the demand that the distance to the point (1;3) should be 5 units of length?
I tried to draw the graph and mark the point (1;3); however, I still do not know how to do the calculations. Thank you in advance!
There are three possibilities:
A. The point on the line that is closest to (1,3) is greater than five units from (1,3). In this case, there are no points satisfying your criterion.
B. The point on the line that is closest to (1,3) is exactly five units from (1,3). In this case, there is exactly one point satisfying your criterion.
C. The point on the line that is closest to (1,3) is less than five units from (1,3). In this case, there are exactly two points satisfying your criterion.
You could try to find the point on the line closest to (1,3), but I claim that's more work than you need to do. If you plot the point and the line, you can see at a glance that the point (1,1) is
on the line. What is the distance from (1,1) to (1,3)? What does that say about the distance from (1,3) to the point on the line closest to (1,3)? So which case are we in?
Hi Anna55,
On your graph draw a circle of radius 5 from center (1,3). Write a second equation for the circle,
(x-h)^2 + (y-k)^2 = r^2, where h and k are the center coordinates. Solve the two equations for the two points where the circle meets the straight line.
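Carrying that suggestion through (a worked check added here): substituting $y = 2x - 1$ into $(x-1)^2 + (y-3)^2 = 25$ gives $(x-1)^2 + (2x-4)^2 = 25$, i.e. $5x^2 - 18x - 8 = 0$, so $x = 4$ or $x = -\frac{2}{5}$. The two points are $(4, 7)$ and $\left(-\frac{2}{5}, -\frac{9}{5}\right)$, each exactly 5 units from $(1, 3)$, so we are in case C above.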
|
{"url":"http://mathhelpforum.com/algebra/180415-can-point-5-unit-lenght-line.html","timestamp":"2014-04-21T05:46:00Z","content_type":null,"content_length":"41039","record_id":"<urn:uuid:86f2511a-ee01-45b4-bb8f-fa71cbe96292>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is the status of Cantor-Schroder-Bernstein in Reverse Math?
I'd like to know which of the set theories in SOSOA prove what versions of Cantor-Schroder-Bernstein? For my own purposes I can use arbitrarily high quantifier complexity, but I wonder how little
transfinite recursion will suffice.
lo.logic reverse-math
Could you clarify a bit what set theories you mean? There aren't that many proper set theories in SOSOA and the way Simpson approaches things, they usually implicitly include global choice or that
every set is countable, both of which make CSB rather trivial. For other weak set theories, this old answer of mine is relevant - mathoverflow.net/questions/18042/… – François G. Dorais♦ Dec 19
'12 at 14:25
Meanwhile, the computable version of the Cantor-Schroeder-Bernstein theorem is known as Myhill's theorem en.wikipedia.org/wiki/Myhill_isomorphism_theorem, and the proof relativizes to oracles. –
Joel David Hamkins Dec 19 '12 at 14:47
The constructibility theories include global choice, but I am looking at things like $\mathsf{B}_0^\mathrm{set}$ or $\mathsf{ATR}_0^\mathrm{set}$. – Colin McLarty Dec 19 '12 at 14:51
$\mathsf{ATR}_0^{\mathrm{set}}$ includes the Axiom of Countability. $\mathsf{B}_0^{\mathrm{set}}$ is extremely weak and very hard to use; that doesn't seem to fit well with the second part of your
question. – François G. Dorais♦ Dec 19 '12 at 15:19
If countability is necessary to prove CSB in $\mathsf{ATR}_0^{\mathrm{set}}$, that would help me know how to direct my efforts. And $\mathsf{B}_0^{\mathrm{set}}$ supports a considerable theory
of ordinals. It might prove enough of CSB for me. This is for work in progress and I do not know `how much' of CSB I would need. For now, I wonder what versions of it are available. – Colin
McLarty Dec 19 '12 at 15:48
I will show that variants of the following proof work in extremely weak set theories but perhaps not in $\mathsf{B}_0^{\mathrm{set}}$.
We can always reduce to the case where one of the two injections is an inclusion. Suppose that $B \subseteq A$ and $f:A \to B$ is an injection. Say that $x \in B$ is a $B$-stopper if
there is a finite sequence $\langle x_0,\dots,x_n \rangle$ with $x_0 = x$, $x_n \in B - f[A]$, and $f(x_{i+1}) = x_i$ for each $i \lt n$. The function $g:A \to B$ defined by $g(x) =
x$ if $x$ is a $B$-stopper and $g(x) = f(x)$ if $x$ is not a $B$-stopper is a bijection.
The verification that $g$ is a bijection is a straightforward feat of plain logic provided that the base theory can handle the formation of arbitrary finite sequences. (I won't consider
systems that can't handle arbitrary finite sequences.) So it is enough to make sure that $g$ exists. Assuming $\Delta_0$-comprehension, this is equivalent to the existence of the set of
$B$-stoppers. If $B^{\lt\omega}$ exists, then the set of $B$-stoppers can be formed by $\Delta_0$-comprehension.
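To see the construction in action (a small worked instance, added for concreteness and not part of the original answer): take $A = \{0,1,2,\dots\}$, $B = \{1,2,3,\dots\}$, and $f(x) = x+2$. Then $B - f[A] = \{1\}$, the $B$-stoppers are exactly the odd numbers, and $g$ fixes each odd number while sending each even number $x$ to $x+2$; this $g$ is a bijection from $A$ onto $B$.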
Sadly, Simpson's system $\mathsf{B}_0^{\mathrm{set}}$ does satisfy $\Delta_0$-comprehension, but it does not prove that $X^{\lt\omega}$ exists for every set $X$. In fact, I don't think
it is known whether this system proves that $X^n$ exists for every set $X$ and every $n \lt \omega$. (See A. R. D. Mathias, Weak systems of Gandy, Jensen and Devlin, where this system is
known as $\mathsf{GJI}_0$, modulo the fact that Simpson's formulation of the axiom of infinity is a little unusual. I think Simpson's axiom of infinity prevents Gandy's model but not the
general problem it illustrates.)
If $B^{\lt\omega}$ does not exist, then the definition of $B$-stopper given above requires $\Sigma_1$-comprehension. However, the precise set of $B$-stoppers is not needed. If $C \
subseteq B$ is such that $B-f[A] \subseteq C$ and both $C$ and $A-C$ are closed under $f$, then the map $h:A\to B$ defined by $h(x) = x$ if $x \in C$ and $h(x) = f(x)$ if $x \in A-C$ is
a bijection. Over $\mathsf{B}_0^{\mathrm{set}}$ (even without infinity), the existence of such a set $C$ is easily established using the compactness theorem for propositional logic,
which is known to be weaker than $\Sigma_1$-comprehension. This is the weakest system that I know which proves CSB.
Remark. The language of propositional logic is difficult to work with in set theories that do not prove that $X^{\lt\omega}$ exists for every set $X$. However, the theory in question
consists of $p_x \leftrightarrow p_{f(x)}$ for $x \in A$, $p_x$ for $x \in B-f[A]$, $\lnot p_x$ for $x \in A-B$. Since these formulas are all short, we can get by with standard finite
powers of sets.
In any case, I strongly advise against working in set theories that cannot prove that $X^{\lt\omega}$ exists for every set $X$.
|
{"url":"http://mathoverflow.net/questions/116788/what-is-the-status-of-cantor-schroder-bernstein-in-reverse-math?answertab=votes","timestamp":"2014-04-17T01:37:27Z","content_type":null,"content_length":"58691","record_id":"<urn:uuid:8bc0ae20-c105-4f4c-a053-76ba125781f1>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summer Tutoring
Now that students, teachers, parents and tutors have had a chance to catch their breath from final exams, it's time to make use of the weeks we have before school starts back. Consider all that could
be accomplished in the next few weeks:
• Areas of math that students NEVER REALLY GRASPED could be fully explained. This could be elementary skills like adding fractions, middle school topics like systems of equations, or high school
areas like sequences and series.
• Students could have a TREMENDOUS HEAD START on topics that will be covered in the first few weeks of school. Imagine your son or daughter being able to raise their hand to answer a question in the
first week of school because they had worked several problems just like the ones that the teacher is demonstrating.
• ENORMOUS PROGRESS could be made in the area of preparation for the standardized tests (PSAT, SAT, ACT and more) that are so important to getting into a great college.
• STUDY SKILLS could be mastered so that your child excels in ways you've always hoped for and in all subjects, not just math.
• Topics in HOME SCHOOL MATH could be reviewed so that your daughter or son is ready to start their new curriculum in the fall.
• GROUP LESSONS can be coordinated so that students learn together in small groups of 3 to 5 students and have a competitive component added. I have seen this work wonders in helping boys and girls
overcome shyness about raising their hand, taking a chance that their answer may be right or wrong and communicating with each other. There is a vast positive difference in how fast the material
is mastered.
My hourly rate has been reduced for June and July so that students can get much-needed help during these precious weeks before their heavy class loads start back.
I hope to hear from you and I look forward to helping your child make great progress in the area of math!
|
{"url":"http://www.wyzant.com/resources/blogs/234433/summer_tutoring","timestamp":"2014-04-18T20:47:10Z","content_type":null,"content_length":"35014","record_id":"<urn:uuid:ba75d16e-d9e8-4b1a-a72a-b3e75bd3f93b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Getting Started with Writing Mathematics in LaTeX
LaTeX and mathematics go well together. At the risk of preaching to the converted, this post sets out (a) reasons to learn to write mathematics in LaTeX, (b) a few free internet guides on learning to
write mathematics in LaTeX, (c) several internet resources which can facilitate writing mathematics in LaTeX.
Reasons to Learn to Write Mathematics in LaTeX
There are many good reasons to learn how to write mathematics in LaTeX.
• LaTeX encourages mathematical writing. It's just plain text. It's easy to write, copy, and paste formulas. That which is easy gets done.
• Many websites such as Wikipedia, Stackexchange, Wordpress blogs, and so on use LaTeX to render mathematics. Thus, learning mathematics and LaTeX pays dividends in many places.
• LaTeX demystifies the complexity of mathematical texts. Before learning LaTeX, my mind boggled at the apparent complexity of mathematical documents. All the numbering, cross-references, and
symbols that appeared so complex, appear relatively straightforward from the perspective of LaTeX.
• LaTeX fosters skills in verbalising mathematics. The markup is similar to verbalised mathematics. Along with pronunciation guides and mathematical video courses, writing LaTeX can be particularly
useful for the self-taught social scientist.
Getting Started
If you're completely new to LaTeX, check out this guide by David R. Wilkins. In regards to getting started with writing mathematics in LaTeX, the internet is filled with resources. The following are two good guides:
Setting up a Learning Environment
The following links provide an environment of rapid feedback useful when learning to write mathematics in LaTeX.
• A text editor that provides lists of mathematical symbols: I use WinEdt, which has a GUI for many mathematical symbols. But many text editors provide such support.
• Codecogs has a real time online LaTeX equation editor.
• Uni Colorado provides a quick to load extensive list of symbols
• Scott Pakin provides a big download with a comprehensive list of LaTeX symbols
• DeTeXify allows you to draw the desired symbol online and the program attempts to guess the LaTeX symbol.
• Tex.SE is a great site for getting answers to LaTeX related questions.
Basic Working LaTeX file
The following provides the simplest of working environments which can be useful when first learning.
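For instance, a file along these lines compiles and shows both inline and displayed mathematics (a reconstructed stand-in, since the original listing did not survive the page extraction; the post's exact file may have differed):

\documentclass{article}
\usepackage{amsmath} % the standard mathematics package
\begin{document}
An inline formula: $e^{i\pi} + 1 = 0$.

A displayed, numbered equation:
\begin{equation}
  \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}
\end{equation}
\end{document}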
|
{"url":"http://jeromyanglim.blogspot.com/2010/10/getting-started-with-writing.html","timestamp":"2014-04-21T12:08:17Z","content_type":null,"content_length":"85590","record_id":"<urn:uuid:cc2fd33f-f180-41b6-82c3-8a46e0c027be>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
Can somebody explain this? I don't understand the way used by Matt sir: http://www.sosmath.com/CBB/viewtopic.php?p=214676&highlight=#214676
Let $f(x)=\lfloor x^2\rfloor-\lfloor x\rfloor^2$. Then $\mathop{\mbox{range}}(f)\subseteq\mathbb{Z}$. To prove the converse inclusion ( $\mathbb{Z}\subseteq\mathop{\mbox{range}}(f)$), one must show
that every integer $n$ is in the range of $f$. For this, one needs to find a pre-image of every integer. Matt showed that $f(n+1/2)=n$.
Sir, my basic question depends on the domain (INPUT). If we look at f(x), then its domain is all real numbers (INPUT) and its range is Z (OUTPUT). Then how can we straightforwardly choose x = n + 1/2?
Here x means part of my domain, and the domain can be any real number; therefore I can take any value of x, say x = 1/3. So how can I choose x = n + 1/2? Perhaps, sir, you are able to understand my confusion.
Consider the claim: "There exists a natural number p such that both p and p + 2 are prime". This claim is true. To prove it, consider p = 5; then 5 and 7 are prime, as required. This finishes the
proof. One does not have to prove that p and p + 2 are prime for all natural p. One also does not have to find all possible p such that p and p + 2 are prime. One does not even have to know if there
are infinitely many such p's (this is called Twin Prime Conjecture, and it is still unsolved). All is needed to prove a claim that starts with "there exists an x such that A(x)" (where A is some
property) is to find a single example x that makes A(x) true. Now, in your case, the claim is, "For every integer n, there exists a real x such that f(x) = n". A proof starts by fixing an arbitrary
n. Then we need to find a single x such that f(x) = n. We don't have to find all such x's; we don't have to consider x's such that f(x) is not n. We are interested in only one thing: finding a single
x such that f(x) = n. It turns out that x = n + 1/2 works because f(n + 1/2) = n.
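The computation behind that last equality, spelled out (added here for completeness): for any integer $n$, $\lfloor n + 1/2 \rfloor = n$, and $(n + 1/2)^2 = n^2 + n + 1/4$, whose floor is $n^2 + n$ since $n^2 + n$ is an integer. Hence $f(n + 1/2) = (n^2 + n) - n^2 = n$.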
|
{"url":"http://mathhelpforum.com/pre-calculus/173647-range.html","timestamp":"2014-04-17T20:51:44Z","content_type":null,"content_length":"39358","record_id":"<urn:uuid:66d68d09-606e-4ddc-aa1b-eaaaaecf4077>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Introductory Statistics Using Technology
The fifth edition of Prem S. Mann’s Introductory Statistics using Technology covers all the basic topics found in an introductory statistics course. It starts with standard graphical and numerical
summaries for one variable, then moves to probability and random variables, and ends with inference procedures including one-way analysis of variance, simple linear regression, and chi-square
goodness-of-fit and independence tests. There is a chapter on nonparametric inference methods, but, as in lots of new and revised editions these days, this chapter is only available by download off
of the book’s web site. This text is accessible to a wide range of students and a calculus background is not needed.
The teaching options for this book are flexible because chapters tend to be broken into small sections. The exercises provided for most sections were divided into conceptual and procedural problems
and application problems. Each chapter contains self-review problems with answers given in the back of the text. There are also mini-projects for each chapter which seem to entail either analysis of
a larger set of data or an “activity” project which makes students collect data or search for and interpret articles which use statistics.
There are plenty of examples discussed in each chapter which are solved first by hand and then by technology, when possible. The three technologies used are a TI-83 calculator, MINITAB, and Excel
using the KADDSTAT plug-in. The explanation accompanying each use of technology (e.g. how to calculate descriptive statistics or draw a histogram) is very thorough and would definitely help students
using any of the three technology options. One drawback is that this approach can lead to a very lengthy example and busy pages. In some sections it was hard to determine when the example ended
and the discussion picked up again.
My overall impression of Introductory Statistics using Technology was favorable. It presents most concepts in a concise manner while solving examples which use the concepts in detail. The text is
especially suited for an instructor looking to incorporate any of the three technologies into their introductory class.
Katherine St. Clair is Assistant Professor of Statistics at Colby College.
|
{"url":"http://www.maa.org/publications/maa-reviews/introductory-statistics-using-technology","timestamp":"2014-04-16T20:02:38Z","content_type":null,"content_length":"104667","record_id":"<urn:uuid:d2f25b24-942c-44a9-b8fe-113a5a100446>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
|
circumference of the circle
circumference of the circle
From what I understand, the circle has a fixed circumference of about 3 meters.
How is that possible when you can draw smaller and bigger ones? How can the circumference not change according to the size you draw?
Re: circumference of the circle
Maybe you've misunderstood something ...
The cirumference of a circle is calculated by $p = \pi \cdot d$, where p means perimeter and d is the diameter of the circle.
Rearranging this equation you'll get $\frac pd = \pi \approx 3.14159$
So maybe you've heard that the ratio of these 2 lengths is a fixed value of nearly 3.
Re: circumference of the circle
I have no clue what you are talking about. "from what i understand the circle has a fixed circumference of about 3 meters"??? What circle are you talking about? Surely you don't mean "the circle"
in the sense of "all circles". Every circle has circumference proportional to its diameter, NOT "fixed". The circumference of a circle is $\pi$ times its diameter. Now $\pi$ is a constant slightly
larger than 3, so every circle has circumference "about 3" (but NOT "3 meters") times the diameter of the circle. Perhaps $\pi$ is the "about 3" you are thinking of. But, again, $\pi$ is a number, not
a measurement, and so has no units such as "meters". And the circumference is that number times the diameter of the circle.
A circle of diameter 1 mm has circumference about 3.14 mm. A circle of diameter 1 light year has circumference about 3.14 light years. Neither of those is anywhere near "3 meters".
|
{"url":"http://mathhelpforum.com/geometry/202297-circumference-circle-print.html","timestamp":"2014-04-19T00:46:46Z","content_type":null,"content_length":"6860","record_id":"<urn:uuid:d0a17669-f821-4f40-bbfa-15c8be720bd6>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pelham, NY Calculus Tutor
Find a Pelham, NY Calculus Tutor
...I have tutored at all levels including elementary math, high school, university and graduate levels. I also tutor in test prep (SAT, ACT, and GRE). I work with students to design a tutoring
program that will eliminate weaknesses and reinforce current course material. Please contact me for a meeting.
19 Subjects: including calculus, reading, geometry, writing
I have over 20 years experience in tutoring, primarily in high school and introductory college level physics. I know that physics can be an intimidating subject, so I combine clear explanations
of the material with strategies for how to catch mistakes without getting discouraged. Keeping a good attitude can be a key part of mastering physics.
18 Subjects: including calculus, reading, GRE, physics
...I can help my students in Math, Physics, Chemistry, and Micro Economics. I have done this successfully at the High School, SAT and College levels. I can go to the student's home and teach
person-to-person or use Skype or iChat to teach person-to-person remotely.
12 Subjects: including calculus, chemistry, physics, geometry
...Even though, aside from when I worked at a teaching assistant while getting my masters in mechanical engineering, I don't have teaching background, I have always enjoyed tutoring because I
enjoy learning and believe that it's an important skill for life. I primarily tutor high school math and sc...
26 Subjects: including calculus, chemistry, physics, statistics
...I can help you learn specific applications or help you complete an assignment for work or school. I am available for questions by phone....so don't panic if your boss needs something ASAP and
you don't know how to maneuver through the assignment. I also teach you how to use the help functions so you can explore specific features on your own.
76 Subjects: including calculus, English, writing, reading
{"url":"http://www.purplemath.com/Pelham_NY_calculus_tutors.php","timestamp":"2014-04-19T23:11:53Z","content_type":null,"content_length":"24083","record_id":"<urn:uuid:6fa0e895-2174-42e2-9512-4396bbe878b5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
|
182: Nash
Explain xkcd: It's 'cause you're dumb.
Title text: Maybe someday science will get over its giant collective crush on Richard Feynman. But I doubt it!
The first panel references a scene in the movie A Beautiful Mind in which Dr. John Forbes Nash, Jr. comes up with his famous concept of Nash equilibrium when he realizes that they get suboptimal
results if all the guys go after the same hot girl. The second panel deconstructs the idea by having Dr. Nash point out that staying away from the hot girl does not actually constitute a stable Nash
equilibrium. The third panel has physicist Dr. Richard Feynman render the entire discussion moot by taking all the girls while the mathematicians ponder optimal strategies.
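A tiny payoff table makes the objection concrete (an added illustration with made-up payoffs): suppose each of two men gets 2 for winning the hot girl, 1 for pairing with a friend, and 0 if both pursue the same girl.

              B: friend   B: hot
  A: friend    (1, 1)     (1, 2)
  A: hot       (2, 1)     (0, 0)

From (friend, friend), either player can switch to the hot girl and raise his payoff from 1 to 2, so "everyone avoids the hot one" is not a Nash equilibrium; the stable outcomes in this toy game are the ones where exactly one player pursues her.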
Feynman shared the Nobel Prize in Physics in 1965 for his important work in quantum electrodynamics. Feynman wrote popular books and gave public lectures. These presented his work in advanced
theoretical physics to the general public, a practice that was not very common at that time. One of his more famous books, Surely You're Joking, Mr. Feynman!, gives many personal anecdotes from his
lifetime, and it contains a passage giving advice on the best way to pick up a girl in a bar.
The aforementioned public books and lectures brought him great attention in the media, and his exceptional results in physics coupled with this have led to his getting an almost cult-like following
among scientists.
The title text explains this: Randall wonders whether this "collective crush" (crush as in love affair) will fade away one day, but he doubts it. Great respect for Feynman continues to this day, even
though he died about a quarter-century ago.
[Cueball and Dr. Nash stand talking to each other. Cueball is pointing off-panel.]
Cueball: Hey, Dr. Nash, I think those gals over there are eyeing us. This is like your Nash Equilibrium, right? One of them is hot, but we should each flirt with one of her less-desirable
friends. Otherwise we risk coming on too strong to the hot one and just driving the group off.
Dr. Nash: Well, that's not really the sort of situation I wrote about. Once we're with the ugly ones, there's no incentive for one of us not to try to switch to the hot one. It's not a stable equilibrium.
Cueball: Crap, forget it. Looks like all three are leaving with one guy.
[Dr. Nash shakes his fist.]
Dr. Nash: Dammit, Feynman!
This page could do with rigor. "Could do" does not mean "needs", however. It is not incomplete, just a bit threadbare. --Quicksilver (talk) 05:27, 24 August 2013 (UTC)
Argh! How "wrong" was the title text, anyway? What remains to be explained, or what is incorrect? 04:24, 25 August 2013 (UTC)
|
{"url":"http://www.explainxkcd.com/wiki/index.php?title=182:_Nash&oldid=47782&diff=prev","timestamp":"2014-04-20T10:58:55Z","content_type":null,"content_length":"36268","record_id":"<urn:uuid:6bf2f97c-a4ed-4290-b86f-8ed19ce4c5bd>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
|
EJS CSM Textbook Chapter 4: Oscillations
EJS CSM Textbook Chapter 4: Oscillations Documents
This material has 3 associated documents. Select a document title to view a document's information.
Main Document
written by Wolfgang Christian
Chapter 4 explores the behavior of oscillatory systems, including the simple harmonic oscillator, a simple pendulum, and electrical circuits. We introduce the concept of phase space and also show how
the EJS ODE editor is used to solve arrays of differential equations.
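For a flavor of the kind of model the chapter treats, here is a minimal sketch (in C, for illustration only; it is not taken from the EJS materials, which are Java-based) of Euler-Cromer stepping for the simple harmonic oscillator x'' = -w^2 x:

#include <stdio.h>

/* Euler-Cromer integration of the simple harmonic oscillator x'' = -w*w*x. */
int main(void)
{
    double x = 1.0, v = 0.0;           /* initial displacement and velocity */
    double w = 1.0, dt = 0.01;         /* angular frequency and time step   */
    int i, steps = 1000;

    for (i = 0; i < steps; i++) {
        v += -w * w * x * dt;          /* update velocity first ...         */
        x += v * dt;                   /* ... then position (Euler-Cromer)  */
    }
    printf("x(t = %g) = %f\n", steps * dt, x);  /* compare with cos(w t)    */
    return 0;
}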
Last Modified February 20, 2012
Supplemental Documents
A multi-model ready to run jar file containing EJS models to accompany Chapter 4 of An Introduction to Computer Simulation Methods.
Last Modified February 20, 2012
Source Code Documents
A multi-model EJS source code archive to accompany Chapter 4 of An Introduction to Computer Simulation Methods
Last Modified February 20, 2012
|
{"url":"http://www.compadre.org/OSP/document/ServeFile.cfm?ID=9374&DocID=1272&DocFID=4360&Attachment=1","timestamp":"2014-04-16T10:24:55Z","content_type":null,"content_length":"17579","record_id":"<urn:uuid:400b6b47-e7c2-4ced-8fcd-eedb6e9ee3ae>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
|
7 CFR 1463.106 - Base quota levels for eligible tobacco producers.
§ 1463.106 Base quota levels for eligible tobacco producers.
(a) BQL is determined separately, for each of the years 2002, 2003 and 2004, for each kind of tobacco and for each farm for which a 2002 farm marketing quota had been established under part 723 of
this title.
(1) The 2002-crop year BQL for burley producers is the 2002 effective quota pounds actually marketed, adjusted for disaster lease and transfer, and considered-planted undermarketings and
overmarketings. The BQL is then multiplied by the producer's share in the 2002 crop to determine the producer's 2002 BQL. The adjustments for disaster lease and transfer and considered-planted
undermarketings and overmarketings are made as follows:
(i) Disaster-leased pounds are added to the marketings of the transferring farm and deducted from the marketings of the receiving farm;
(ii) Considered-planted pounds are added to the farm's actual marketings, and includes only undermarketings that were not part of the farm's 2003 effective quota.
(iii) Pounds actually marketed as overmarketings and sold penalty-free are added to the farm BQL after the BQL adjustment factor of 1.12486 has been applied to the overmarketed pounds.
(2) The 2003-crop year BQL for burley producers is the 2003 effective quota pounds actually marketed, adjusted for disaster lease and transfer and considered-planted undermarketings and
overmarketings, as follows:
(i) Disaster leases are added to the marketings of the transferring farm and deducted from the marketings of receiving farm.
(ii) Considered-planted pounds are added to the farm's actual marketings, and includes only undermarketings that were not part of the farm's 2004 effective quota.
(iii) Pounds actually marketed as overmarketings and sold penalty-free are added to the farm BQL after the BQL adjustment factor of 1.071295 has been applied to the overmarketed pounds.
Step Calculation
1 Subtract all 2002 undermarketings from the 2003 marketings, including undermarketings from the parent farm in any special tobacco combinations. Leased pounds are apportioned undermarketing history by dividing the transferring farm's undermarketings by the transferring farm's effective quota, before any temporary transfers, resulting in the percentage of undermarketings that were leased.
2 Multiply the 2003 marketings remaining after Step 1 times 1.12486 (the 2003 BQL adjustment factor).
3 Add the undermarketings that were subtracted in Step 1 to the sum of Step 2 to determine the farm 2003 BQL.
4 Multiply the sum from Step 3 times the producer's share in the 2003 crop to determine the producer's 2003 BQL.
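For example (hypothetical figures, added only to illustrate the table above): a farm with 10,000 pounds of 2003 marketings, 1,500 pounds of 2002 undermarketings, and a 50 percent producer share would compute: Step 1: 10,000 − 1,500 = 8,500. Step 2: 8,500 × 1.12486 = 9,561.31. Step 3: 9,561.31 + 1,500 = 11,061.31 pounds (farm 2003 BQL). Step 4: 11,061.31 × 0.50 = 5,530.66 pounds (producer 2003 BQL).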
(3) The 2004-crop year BQL for burley producers is the 2004 effective quota before disaster lease and transfer is calculated as follows:
Step Calculation
1 Subtract all 2003 undermarketings from the 2004 effective quota, including undermarketings from the parent farm in any special tobacco combinations. Leased pounds are apportioned undermarketing history by dividing the transferring farm's undermarketings by the transferring farm's effective quota, before any temporary transfers, resulting in the percentage of undermarketings that were leased.
2 Multiply the 2004 effective quota remaining after Step 1 times 1.071295 (the 2004 BQL adjustment factor).
3 Multiply the undermarketings that were subtracted in Step 1 times 1.12486 (the 2003 BQL adjustment factor).
4 Add the effective quota from Step 2 to the undermarketings in Step 3 to determine the farm 2004 BQL.
5 Multiply the sum from Step 4 times the producer's share in the 2004 crop to determine the producer's 2004 BQL.
(1) The 2002-crop year BQL for flue-cured producers is the effective 2002 quota actually marketed, adjusted for disaster lease and transfer and considered-planted undermarketings and overmarketings.
The BQL is then multiplied by the producer's share in the 2002 crop to determine the producer's 2002 BQL. Adjustments for disaster lease and transfer and considered-planted undermarketings and
overmarketings are calculated as follows:
(i) Disaster-leased pounds are added to the marketings of the transferring farm and deducted from the marketings of the receiving farm;
(ii) Considered-planted pounds are added to the farm's actual marketings, and include only undermarketings that were not part of the farm's 2003 effective quota.
(iii) Pounds actually marketed as overmarketings and sold penalty-free are added to the farm BQL after the BQL adjustment factor of 1.10497 has been applied to the overmarketed pounds.
(2) The 2003-crop year BQL for flue-cured producers is the 2003 effective quota actually marketed, adjusted for disaster lease and transfer and considered-planted undermarketings and overmarketings,
as follows:
(i) Disaster leases are added to the marketings of the transferring farm and deducted from the marketings of the receiving farm.
(ii) Considered-planted pounds are added to the farm's actual marketings, and includes only undermarketings that were not part of the farm's 2004 effective quota.
(iii) Pounds actually marketed as overmarketings and sold penalty-free are added to the farm BQL after the BQL adjustment factor of 1.23457 has been applied to the overmarketed pounds.
Step Calculation
1 Subtract all 2002 undermarketings from the 2003 marketings, including undermarketings from the parent farm in any special tobacco combinations.
2 Multiply the 2003 marketings remaining after Step 1 times 1.10497 (the 2003 BQL adjustment factor).
3 Add the undermarketings that were subtracted in Step 1 to the sum of Step 2 to determine the farm 2003 BQL.
4 Multiply the sum from step 3 times the producer's share in the 2003 crop to determine the producer's 2003 BQL.
(3) The 2004-crop year BQL for flue-cured producers is the 2004 effective quota before disaster lease and transfer. The 2004 BQL is calculated as follows:
Step Calculation
1 Subtract all 2003 undermarketings from the 2004 effective quota, including undermarketings from the parent farm in any special tobacco combinations.
2 Multiply the 2004 effective quota remaining after Step 1 times 1.23457 (the 2004 BQL adjustment factor).
3 Multiply the undermarketings that were subtracted in Step 1 times 1.10497 (the 2003 BQL adjustment factor).
4 Add the effective quota from Step 2 to the undermarketings in Step 3 to determine the farm 2004 BQL.
5 Multiply the sum from Step 4 times the producer's share in the 2004 crop to determine the producer's 2004 BQL.
Step Calculation
1 Multiply the 2002 farm's basic allotment times the farm's average yield for 2001, 2002, and 2003 to get the 2002 farm base pounds total.
2 Multiply any 2002 special tobacco combination acres times the 2002-equivalence factor of 1.000.
3 Multiply the sum from Step 2 times the farm's average yield for 2001, 2002, and 2003 to get the 2002 farm special tobacco combination pounds total.
4 Add the sum from Step 1 to the sum from Step 3 to get the 2002 farm BQL total.
5 Multiply the sum from Step 4 times the producer's share in the 2002 crop to get the producer 2002 BQL.
Step Calculation
1 Multiply the 2002 farm's basic allotment times the farm's average yield for 2001, 2002, and 2003 to get the 2003 farm base pounds total.
2 Multiply any 2003 special tobacco combination acres times the 2003 BQL adjustment factor of 0.8929.
3 Multiply the sum from Step 2 times the farm's average yield for 2001, 2002, and 2003 to get the 2003 farm special tobacco combination pounds total.
4 Add the sum from Step 1 to the sum from Step 3 to get the 2003 farm BQL total.
5 Multiply the sum from Step 4 times the producer's share in the 2003 crop to get the producer 2003 BQL.
Step Calculation
1 Multiply the 2002 farm's basic allotment times the farm's average yield for 2001, 2002, and 2003 to get the 2004 farm base pounds total.
2 Multiply any 2004 special tobacco combination acres times the 2004 BQL adjustment factor of 0.9398.
3 Multiply the sum from Step 2 times the farm's average yield for 2001, 2002, and 2003 to get the 2004 farm special tobacco combination pounds total.
4 Add the sum from Step 1 to the sum from Step 3 to get the 2004 farm BQL total.
5 Multiply the sum from Step 4 times the producer's share in the 2004 crop to get the producer 2004 BQL.
(e) The BQL's for producers of all kinds of tobacco other than burley, flue-cured and cigar filler and binder, are established by year, as follows.
Step Calculation
1 Multiply the 2002 farm's basic allotment times the farm's average yield for 2001, 2002, and 2003 to get the 2002 farm base pounds total.
2 Multiply any 2002 special tobacco combination acres times the farm's average yield for 2001, 2002, and 2003 to get the 2002 special tobacco combinations pounds total.
3 Add the sum from Step 1 to the sum from Step 2.
4 Multiply any 2002 acres leased to or from the farm times the farm's average yield for 2001, 2002, and 2003 to get the 2002 lease pounds total. Then, to the sum from Step 3, either:
(i) add pounds leased to the farm to get the farm 2002 BQL total, or
(ii) subtract pounds leased from the farm to get the farm 2002 BQL total.
5 Multiply the result from Step 4 times the producer's share in the 2002 crop to get the producer 2002 BQL.
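In Step 4 above the lease adjustment is a signed addition: pounds leased to the farm are added to the Step 3 subtotal, and pounds leased from the farm are subtracted. A minimal sketch with hypothetical inputs, representing acres leased from the farm as a negative number.

    # 2002 BQL for other kinds of tobacco (Steps 1-5 above); inputs hypothetical.
    avg_yield = 1_500.0       # farm average yield, 2001-2003, lb/acre
    basic_allotment = 8.0     # 2002 basic allotment, acres
    combination_acres = 1.0   # 2002 special tobacco combination acres
    leased_acres = 0.5        # > 0: leased to the farm; < 0: leased from it
    producer_share = 1.0

    base_pounds = basic_allotment * avg_yield            # Step 1
    combination_pounds = combination_acres * avg_yield   # Step 2
    subtotal = base_pounds + combination_pounds          # Step 3
    lease_pounds = leased_acres * avg_yield              # Step 4 (signed)
    farm_bql_2002 = subtotal + lease_pounds
    producer_bql_2002 = farm_bql_2002 * producer_share   # Step 5
    print(producer_bql_2002)                             # 14250.0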
(2) The 2003-crop year BQL is calculated as follows:
Step Calculation
1 Multiply the 2002 farm's basic allotment times the farm's average yield for 2001, 2002, and 2003 to get the 2003 farm base pounds total.
2 Multiply any 2003 special tobacco combinations acres times the applicable 2003 BQL adjustment factor:
(i) Fire-cured (type 21)—1.0000
(ii) Fire-cured (types 22-23)—.980392
(iii) Dark Air-cured (types 35-36)—.952381
(iv) Virginia Sun-cured (type 37)—1.0000
3 Multiply the sum from Step 2 times the farm's average yield for 2001, 2002, and 2003 to get the 2003 farm special tobacco combination pounds total.
4 Add the sum from Step 1 to the sum from Step 3.
5 Multiply any 2003 acres leased times the applicable 2003 BQL adjustment factor:
(i) Fire-cured (type 21)—1.0000
(ii) Fire-cured (types 22-23)—.980392
(iii) Dark Air-cured (types 35-36)—.952381
(iv) Virginia Sun-cured (type 37)—1.0000
6 Multiply the sum from Step 5 times the farm's average yield for 2001, 2002, and 2003 to get the 2003 lease pounds total.
7 To the sum from Step 4, either:
(i) add pounds from Step 6 leased to the farm to get the farm 2003 BQL total, or
(ii) subtract pounds from Step 6 leased from the farm to get the farm 2003 BQL total.
8 Multiply the sum from Step 7 times the producer's share in the 2003 crop to get the producer 2003 BQL total.
(3) The 2004-crop year BQL is calculated as follows:
Step Calculation
1 Multiply the 2002 farm's basic allotment times the farm's average yield for 2001, 2002, and 2003 to get the 2004 farm base pounds total.
2 Multiply any 2004 special tobacco combinations acres times the applicable 2004 BQL adjustment factor:
(i) Fire-cured (type 21)—1.0000
(ii) Fire-cured (types 22-23)—.951837
(iii) Dark Air-cured (types 35-36)—.92464
(iv) Virginia Sun-cured (type 37)—1.0000
3 Multiply the sum from Step 2 times the farm's average yield for 2001, 2002, and 2003 to get the 2004 farm special tobacco combination pounds total.
4 Add the sum from Step 1 to the sum from Step 3.
5 Multiply any 2004 acres leased times the applicable 2004 BQL adjustment factor:
(i) Fire-cured (type 21)—1.0000
(ii) Fire-cured (types 22-23)—.951837
(iii) Dark Air-cured (types 35-36)—.92464
(iv) Virginia Sun-cured (type 37)—1.0000
6 Multiply the sum from Step 5 times the farm's average yield for 2001, 2002, and 2003 to get the 2004 lease pounds total.
7 To the sum from Step 4, either:
(i) add pounds from Step 6 leased to the farm to get the farm 2004 BQL total, or
(ii) subtract pounds from Step 6 leased from the farm to get the farm 2004 BQL total.
8 Multiply the sum from Step 7 times the producer's share in the 2004 crop to get the producer 2004 BQL total.
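The 2003 and 2004 tables above differ from the 2002 table only in applying a kind-specific BQL adjustment factor to both the combination acres and the leased acres. A minimal sketch; the factors come from the lists above, and all other inputs are hypothetical.

    # 2003/2004 BQL for other kinds of tobacco (Steps 1-8 above).
    FACTORS = {
        ("Fire-cured 21", 2003): 1.0000,
        ("Fire-cured 21", 2004): 1.0000,
        ("Fire-cured 22-23", 2003): 0.980392,
        ("Fire-cured 22-23", 2004): 0.951837,
        ("Dark Air-cured 35-36", 2003): 0.952381,
        ("Dark Air-cured 35-36", 2004): 0.92464,
        ("Virginia Sun-cured 37", 2003): 1.0000,
        ("Virginia Sun-cured 37", 2004): 1.0000,
    }

    def other_kind_bql(kind, year, basic_allotment, avg_yield,
                       combination_acres, leased_acres, producer_share):
        # leased_acres > 0: leased to the farm; < 0: leased from it.
        factor = FACTORS[(kind, year)]
        base_pounds = basic_allotment * avg_yield                    # Step 1
        combination_pounds = combination_acres * factor * avg_yield  # Steps 2-3
        subtotal = base_pounds + combination_pounds                  # Step 4
        lease_pounds = leased_acres * factor * avg_yield             # Steps 5-6
        farm_bql = subtotal + lease_pounds                           # Step 7
        return farm_bql * producer_share                             # Step 8

    print(round(other_kind_bql("Dark Air-cured 35-36", 2003, basic_allotment=5.0,
                               avg_yield=1_400.0, combination_acres=1.0,
                               leased_acres=-0.5, producer_share=1.0), 2))  # 7666.67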
Chapter 111. Texas Essential Knowledge and Skills for Mathematics
Subchapter B. Middle School
Statutory Authority: The provisions of this Subchapter B issued under the Texas Education Code, §§7.102(c)(4), 28.002, 28.0021(a)(1), and 28.008, unless otherwise noted.
§111.22. Mathematics, Grade 6.
(a) Introduction.
(1) Within a well-balanced mathematics curriculum, the primary focal points at Grade 6 are using ratios to describe direct proportional relationships involving number, geometry, measurement,
probability, and adding and subtracting decimals and fractions.
(2) Throughout mathematics in Grades 6-8, students build a foundation of basic understandings in number, operation, and quantitative reasoning; patterns, relationships, and algebraic thinking;
geometry and spatial reasoning; measurement; and probability and statistics. Students use concepts, algorithms, and properties of rational numbers to explore mathematical relationships and to
describe increasingly complex situations. Students use algebraic thinking to describe how a change in one quantity in a relationship results in a change in the other; and they connect verbal,
numeric, graphic, and symbolic representations of relationships. Students use geometric properties and relationships, as well as spatial reasoning, to model and analyze situations and solve problems.
Students communicate information about geometric figures or situations by quantifying attributes, generalize procedures from measurement experiences, and use the procedures to solve problems.
Students use appropriate statistics, representations of data, reasoning, and concepts of probability to draw conclusions, evaluate arguments, and make recommendations.
(3) Problem solving in meaningful contexts, language and communication, connections within and outside mathematics, and formal and informal reasoning underlie all content areas in mathematics.
Throughout mathematics in Grades 6-8, students use these processes together with graphing technology and other mathematical tools such as manipulative materials to develop conceptual understanding
and solve problems as they do mathematics.
(b) Knowledge and skills.
(1) Number, operation, and quantitative reasoning. The student represents and uses rational numbers in a variety of equivalent forms. The student is expected to:
(A) compare and order non-negative rational numbers;
(B) generate equivalent forms of rational numbers including whole numbers, fractions, and decimals;
(C) use integers to represent real-life situations;
(D) write prime factorizations using exponents;
(E) identify factors of a positive integer, common factors, and the greatest common factor of a set of positive integers; and
(F) identify multiples of a positive integer and common multiples and the least common multiple of a set of positive integers.
(2) Number, operation, and quantitative reasoning. The student adds, subtracts, multiplies, and divides to solve problems and justify solutions. The student is expected to:
(A) model addition and subtraction situations involving fractions with objects, pictures, words, and numbers;
(B) use addition and subtraction to solve problems involving fractions and decimals;
(C) use multiplication and division of whole numbers to solve problems including situations involving equivalent ratios and rates;
(D) estimate and round to approximate reasonable results and to solve problems where exact answers are not required; and
(E) use order of operations to simplify whole number expressions (without exponents) in problem solving situations.
(3) Patterns, relationships, and algebraic thinking. The student solves problems involving direct proportional relationships. The student is expected to:
(A) use ratios to describe proportional situations;
(B) represent ratios and percents with concrete models, fractions, and decimals; and
(C) use ratios to make predictions in proportional situations.
(4) Patterns, relationships, and algebraic thinking. The student uses letters as variables in mathematical expressions to describe how one quantity changes when a related quantity changes. The
student is expected to:
(A) use tables and symbols to represent and describe proportional and other relationships such as those involving conversions, arithmetic sequences (with a constant rate of change), perimeter and
area; and
(B) use tables of data to generate formulas representing relationships involving perimeter, area, volume of a rectangular prism, etc.
(5) Patterns, relationships, and algebraic thinking. The student uses letters to represent an unknown in an equation. The student is expected to formulate equations from problem situations described
by linear relationships.
(6) Geometry and spatial reasoning. The student uses geometric vocabulary to describe angles, polygons, and circles. The student is expected to:
(A) use angle measurements to classify angles as acute, obtuse, or right;
(B) identify relationships involving angles in triangles and quadrilaterals; and
(C) describe the relationship between radius, diameter, and circumference of a circle.
(7) Geometry and spatial reasoning. The student uses coordinate geometry to identify location in two dimensions. The student is expected to locate and name points on a coordinate plane using ordered
pairs of non-negative rational numbers.
(8) Measurement. The student solves application problems involving estimation and measurement of length, area, time, temperature, volume, weight, and angles. The student is expected to:
(A) estimate measurements (including circumference) and evaluate reasonableness of results;
(B) select and use appropriate units, tools, or formulas to measure and to solve problems involving length (including perimeter), area, time, temperature, volume, and weight;
(C) measure angles; and
(D) convert measures within the same measurement system (customary and metric) based on relationships between units.
(9) Probability and statistics. The student uses experimental and theoretical probability to make predictions. The student is expected to:
(A) construct sample spaces using lists and tree diagrams; and
(B) find the probabilities of a simple event and its complement and describe the relationship between the two.
(10) Probability and statistics. The student uses statistical representations to analyze data. The student is expected to:
(A) select and use an appropriate representation for presenting and displaying different graphical representations of the same data including line plot, line graph, bar graph, and stem and leaf plot;
(B) identify mean (using concrete objects and pictorial models), median, mode, and range of a set of data;
(C) sketch circle graphs to display data; and
(D) solve problems by collecting, organizing, displaying, and interpreting data.
(11) Underlying processes and mathematical tools. The student applies Grade 6 mathematics to solve problems connected to everyday experiences, investigations in other disciplines, and activities in
and outside of school. The student is expected to:
(A) identify and apply mathematics to everyday experiences, to activities in and outside of school, with other disciplines, and with other mathematical topics;
(B) use a problem-solving model that incorporates understanding the problem, making a plan, carrying out the plan, and evaluating the solution for reasonableness;
(C) select or develop an appropriate problem-solving strategy from a variety of different types, including drawing a picture, looking for a pattern, systematic guessing and checking, acting it out,
making a table, working a simpler problem, or working backwards to solve a problem; and
(D) select tools such as real objects, manipulatives, paper/pencil, and technology or techniques such as mental math, estimation, and number sense to solve problems.
(12) Underlying processes and mathematical tools. The student communicates about Grade 6 mathematics through informal and mathematical language, representations, and models. The student is expected to:
(A) communicate mathematical ideas using language, efficient tools, appropriate units, and graphical, numerical, physical, or algebraic mathematical models; and
(B) evaluate the effectiveness of different representations to communicate ideas.
(13) Underlying processes and mathematical tools. The student uses logical reasoning to make conjectures and verify conclusions. The student is expected to:
(A) make conjectures from patterns or sets of examples and nonexamples; and
(B) validate his/her conclusions using mathematical properties and relationships.
Source: The provisions of this §111.22 adopted to be effective September 1, 1998, 22 TexReg 7623; amended to be effective August 1, 2006, 30 TexReg 1930.
§111.23. Mathematics, Grade 7.
(a) Introduction.
(1) Within a well-balanced mathematics curriculum, the primary focal points at Grade 7 are using direct proportional relationships in number, geometry, measurement, and probability; applying
addition, subtraction, multiplication, and division of decimals, fractions, and integers; and using statistical measures to describe data.
(2) Throughout mathematics in Grades 6-8, students build a foundation of basic understandings in number, operation, and quantitative reasoning; patterns, relationships, and algebraic thinking;
geometry and spatial reasoning; measurement; and probability and statistics. Students use concepts, algorithms, and properties of rational numbers to explore mathematical relationships and to
describe increasingly complex situations. Students use algebraic thinking to describe how a change in one quantity in a relationship results in a change in the other; and they connect verbal,
numeric, graphic, and symbolic representations of relationships. Students use geometric properties and relationships, as well as spatial reasoning, to model and analyze situations and solve problems.
Students communicate information about geometric figures or situations by quantifying attributes, generalize procedures from measurement experiences, and use the procedures to solve problems.
Students use appropriate statistics, representations of data, reasoning, and concepts of probability to draw conclusions, evaluate arguments, and make recommendations.
(3) Problem solving in meaningful contexts, language and communication, connections within and outside mathematics, and formal and informal reasoning underlie all content areas in mathematics.
Throughout mathematics in Grades 6-8, students use these processes together with graphing technology and other mathematical tools such as manipulative materials to develop conceptual understanding
and solve problems as they do mathematics.
(b) Knowledge and skills.
(1) Number, operation, and quantitative reasoning. The student represents and uses numbers in a variety of equivalent forms. The student is expected to:
(A) compare and order integers and positive rational numbers;
(B) convert between fractions, decimals, whole numbers, and percents mentally, on paper, or with a calculator; and
(C) represent squares and square roots using geometric models.
(2) Number, operation, and quantitative reasoning. The student adds, subtracts, multiplies, or divides to solve problems and justify solutions. The student is expected to:
(A) represent multiplication and division situations involving fractions and decimals with models, including concrete objects, pictures, words, and numbers;
(B) use addition, subtraction, multiplication, and division to solve problems involving fractions and decimals;
(C) use models, such as concrete objects, pictorial models, and number lines, to add, subtract, multiply, and divide integers and connect the actions to algorithms;
(D) use division to find unit rates and ratios in proportional relationships such as speed, density, price, recipes, and student-teacher ratio;
(E) simplify numerical expressions involving order of operations and exponents;
(F) select and use appropriate operations to solve problems and justify the selections; and
(G) determine the reasonableness of a solution to a problem.
(3) Patterns, relationships, and algebraic thinking. The student solves problems involving direct proportional relationships. The student is expected to:
(A) estimate and find solutions to application problems involving percent; and
(B) estimate and find solutions to application problems involving proportional relationships such as similarity, scaling, unit costs, and related measurement units.
(4) Patterns, relationships, and algebraic thinking. The student represents a relationship in numerical, geometric, verbal, and symbolic form. The student is expected to:
(A) generate formulas involving unit conversions within the same system (customary and metric), perimeter, area, circumference, volume, and scaling;
(B) graph data to demonstrate relationships in familiar concepts such as conversions, perimeter, area, circumference, volume, and scaling; and
(C) use words and symbols to describe the relationship between the terms in an arithmetic sequence (with a constant rate of change) and their positions in the sequence.
(5) Patterns, relationships, and algebraic thinking. The student uses equations to solve problems. The student is expected to:
(A) use concrete and pictorial models to solve equations and use symbols to record the actions; and
(B) formulate problem situations when given a simple equation and formulate an equation when given a problem situation.
(6) Geometry and spatial reasoning. The student compares and classifies two- and three-dimensional figures using geometric vocabulary and properties. The student is expected to:
(A) use angle measurements to classify pairs of angles as complementary or supplementary;
(B) use properties to classify triangles and quadrilaterals;
(C) use properties to classify three-dimensional figures, including pyramids, cones, prisms, and cylinders; and
(D) use critical attributes to define similarity.
(7) Geometry and spatial reasoning. The student uses coordinate geometry to describe location on a plane. The student is expected to:
(A) locate and name points on a coordinate plane using ordered pairs of integers; and
(B) graph reflections across the horizontal or vertical axis and graph translations on a coordinate plane.
(8) Geometry and spatial reasoning. The student uses geometry to model and describe the physical world. The student is expected to:
(A) sketch three-dimensional figures when given the top, side, and front views;
(B) make a net (two-dimensional model) of the surface area of a three-dimensional figure; and
(C) use geometric concepts and properties to solve problems in fields such as art and architecture.
(9) Measurement. The student solves application problems involving estimation and measurement. The student is expected to:
(A) estimate measurements and solve application problems involving length (including perimeter and circumference) and area of polygons and other shapes;
(B) connect models for volume of prisms (triangular and rectangular) and cylinders to formulas of prisms (triangular and rectangular) and cylinders; and
(C) estimate measurements and solve application problems involving volume of prisms (rectangular and triangular) and cylinders.
(10) Probability and statistics. The student recognizes that a physical or mathematical model (including geometric) can be used to describe the experimental and theoretical probability of real-life
events. The student is expected to:
(A) construct sample spaces for simple or composite experiments; and
(B) find the probability of independent events.
(11) Probability and statistics. The student understands that the way a set of data is displayed influences its interpretation. The student is expected to:
(A) select and use an appropriate representation for presenting and displaying relationships among collected data, including line plot, line graph, bar graph, stem and leaf plot, circle graph, and
Venn diagrams, and justify the selection; and
(B) make inferences and convincing arguments based on an analysis of given or collected data.
(12) Probability and statistics. The student uses measures of central tendency and variability to describe a set of data. The student is expected to:
(A) describe a set of data using mean, median, mode, and range; and
(B) choose among mean, median, mode, or range to describe a set of data and justify the choice for a particular situation.
(13) Underlying processes and mathematical tools. The student applies Grade 7 mathematics to solve problems connected to everyday experiences, investigations in other disciplines, and activities in
and outside of school. The student is expected to:
(A) identify and apply mathematics to everyday experiences, to activities in and outside of school, with other disciplines, and with other mathematical topics;
(B) use a problem-solving model that incorporates understanding the problem, making a plan, carrying out the plan, and evaluating the solution for reasonableness;
(C) select or develop an appropriate problem-solving strategy from a variety of different types, including drawing a picture, looking for a pattern, systematic guessing and checking, acting it out,
making a table, working a simpler problem, or working backwards to solve a problem; and
(D) select tools such as real objects, manipulatives, paper/pencil, and technology or techniques such as mental math, estimation, and number sense to solve problems.
(14) Underlying processes and mathematical tools. The student communicates about Grade 7 mathematics through informal and mathematical language, representations, and models. The student is expected to:
(A) communicate mathematical ideas using language, efficient tools, appropriate units, and graphical, numerical, physical, or algebraic mathematical models; and
(B) evaluate the effectiveness of different representations to communicate ideas.
(15) Underlying processes and mathematical tools. The student uses logical reasoning to make conjectures and verify conclusions. The student is expected to:
(A) make conjectures from patterns or sets of examples and nonexamples; and
(B) validate his/her conclusions using mathematical properties and relationships.
Source: The provisions of this §111.23 adopted to be effective September 1, 1998, 22 TexReg 7623; amended to be effective August 1, 2006, 30 TexReg 1930; amended to be effective February 22, 2009, 34
TexReg 1056.
§111.24. Mathematics, Grade 8.
(a) Introduction.
(1) Within a well-balanced mathematics curriculum, the primary focal points at Grade 8 are using basic principles of algebra to analyze and represent both proportional and non-proportional linear
relationships and using probability to describe data and make predictions.
(2) Throughout mathematics in Grades 6-8, students build a foundation of basic understandings in number, operation, and quantitative reasoning; patterns, relationships, and algebraic thinking;
geometry and spatial reasoning; measurement; and probability and statistics. Students use concepts, algorithms, and properties of rational numbers to explore mathematical relationships and to
describe increasingly complex situations. Students use algebraic thinking to describe how a change in one quantity in a relationship results in a change in the other; and they connect verbal,
numeric, graphic, and symbolic representations of relationships. Students use geometric properties and relationships, as well as spatial reasoning, to model and analyze situations and solve problems.
Students communicate information about geometric figures or situations by quantifying attributes, generalize procedures from measurement experiences, and use the procedures to solve problems.
Students use appropriate statistics, representations of data, reasoning, and concepts of probability to draw conclusions, evaluate arguments, and make recommendations.
(3) Problem solving in meaningful contexts, language and communication, connections within and outside mathematics, and formal and informal reasoning underlie all content areas in mathematics.
Throughout mathematics in Grades 6-8, students use these processes together with graphing technology and other mathematical tools such as manipulative materials to develop conceptual understanding
and solve problems as they do mathematics.
(b) Knowledge and skills.
(1) Number, operation, and quantitative reasoning. The student understands that different forms of numbers are appropriate for different situations. The student is expected to:
(A) compare and order rational numbers in various forms including integers, percents, and positive and negative fractions and decimals;
(B) select and use appropriate forms of rational numbers to solve real-life problems including those involving proportional relationships;
(C) approximate (mentally and with calculators) the value of irrational numbers as they arise from problem situations (such as π, √2);
(D) express numbers in scientific notation, including negative exponents, in appropriate problem situations; and
(E) compare and order real numbers with a calculator.
(2) Number, operation, and quantitative reasoning. The student selects and uses appropriate operations to solve problems and justify solutions. The student is expected to:
(A) select appropriate operations to solve problems involving rational numbers and justify the selections;
(B) use appropriate operations to solve problems involving rational numbers in problem situations;
(C) evaluate a solution for reasonableness; and
(D) use multiplication by a given constant factor (including unit rate) to represent and solve problems involving proportional relationships including conversions between measurement systems.
(3) Patterns, relationships, and algebraic thinking. The student identifies proportional or non-proportional linear relationships in problem situations and solves problems. The student is expected to:
(A) compare and contrast proportional and non-proportional linear relationships; and
(B) estimate and find solutions to application problems involving percents and other proportional relationships such as similarity and rates.
(4) Patterns, relationships, and algebraic thinking. The student makes connections among various representations of a numerical relationship. The student is expected to generate a different
representation of data given another representation of data (such as a table, graph, equation, or verbal description).
(5) Patterns, relationships, and algebraic thinking. The student uses graphs, tables, and algebraic representations to make predictions and solve problems. The student is expected to:
(A) predict, find, and justify solutions to application problems using appropriate tables, graphs, and algebraic equations; and
(B) find and evaluate an algebraic expression to determine any term in an arithmetic sequence (with a constant rate of change).
(6) Geometry and spatial reasoning. The student uses transformational geometry to develop spatial sense. The student is expected to:
(A) generate similar figures using dilations including enlargements and reductions; and
(B) graph dilations, reflections, and translations on a coordinate plane.
(7) Geometry and spatial reasoning. The student uses geometry to model and describe the physical world. The student is expected to:
(A) draw three-dimensional figures from different perspectives;
(B) use geometric concepts and properties to solve problems in fields such as art and architecture;
(C) use pictures or models to demonstrate the Pythagorean Theorem; and
(D) locate and name points on a coordinate plane using ordered pairs of rational numbers.
(8) Measurement. The student uses procedures to determine measures of three-dimensional figures. The student is expected to:
(A) find lateral and total surface area of prisms, pyramids, and cylinders using concrete models and nets (two-dimensional models);
(B) connect models of prisms, cylinders, pyramids, spheres, and cones to formulas for volume of these objects; and
(C) estimate measurements and use formulas to solve application problems involving lateral and total surface area and volume.
(9) Measurement. The student uses indirect measurement to solve problems. The student is expected to:
(A) use the Pythagorean Theorem to solve real-life problems (a worked sketch follows this list); and
(B) use proportional relationships in similar two-dimensional figures or similar three-dimensional figures to find missing measurements.
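A minimal worked example for item (9)(A), using a hypothetical 9-by-12 right triangle:

    import math

    # Pythagorean Theorem: a^2 + b^2 = c^2 for a right triangle.
    a, b = 9.0, 12.0
    c = math.sqrt(a**2 + b**2)          # hypotenuse: 15.0
    # Solving for a missing leg instead:
    b_missing = math.sqrt(c**2 - a**2)  # recovers 12.0
    print(c, b_missing)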
(10) Measurement. The student describes how changes in dimensions affect linear, area, and volume measures. The student is expected to:
(A) describe the resulting effects on perimeter and area when dimensions of a shape are changed proportionally; and
(B) describe the resulting effect on volume when dimensions of a solid are changed proportionally.
(11) Probability and statistics. The student applies concepts of theoretical and experimental probability to make predictions. The student is expected to:
(A) find the probabilities of dependent and independent events;
(B) use theoretical probabilities and experimental results to make predictions and decisions; and
(C) select and use different models to simulate an event.
(12) Probability and statistics. The student uses statistical procedures to describe data. The student is expected to:
(A) use variability (range, including interquartile range (IQR)) and select the appropriate measure of central tendency to describe a set of data and justify the choice for a particular situation;
(B) draw conclusions and make predictions by analyzing trends in scatterplots; and
(C) select and use an appropriate representation for presenting and displaying relationships among collected data, including line plots, line graphs, stem and leaf plots, circle graphs, bar graphs,
box and whisker plots, histograms, and Venn diagrams, with and without the use of technology.
(13) Probability and statistics. The student evaluates predictions and conclusions based on statistical data. The student is expected to:
(A) evaluate methods of sampling to determine validity of an inference made from a set of data; and
(B) recognize misuses of graphical or numerical information and evaluate predictions and conclusions based on data analysis.
(14) Underlying processes and mathematical tools. The student applies Grade 8 mathematics to solve problems connected to everyday experiences, investigations in other disciplines, and activities in
and outside of school. The student is expected to:
(A) identify and apply mathematics to everyday experiences, to activities in and outside of school, with other disciplines, and with other mathematical topics;
(B) use a problem-solving model that incorporates understanding the problem, making a plan, carrying out the plan, and evaluating the solution for reasonableness;
(C) select or develop an appropriate problem-solving strategy from a variety of different types, including drawing a picture, looking for a pattern, systematic guessing and checking, acting it out,
making a table, working a simpler problem, or working backwards to solve a problem; and
(D) select tools such as real objects, manipulatives, paper/pencil, and technology or techniques such as mental math, estimation, and number sense to solve problems.
(15) Underlying processes and mathematical tools. The student communicates about Grade 8 mathematics through informal and mathematical language, representations, and models. The student is expected to:
(A) communicate mathematical ideas using language, efficient tools, appropriate units, and graphical, numerical, physical, or algebraic mathematical models; and
(B) evaluate the effectiveness of different representations to communicate ideas.
(16) Underlying processes and mathematical tools. The student uses logical reasoning to make conjectures and verify conclusions. The student is expected to:
(A) make conjectures from patterns or sets of examples and nonexamples; and
(B) validate his/her conclusions using mathematical properties and relationships.
Source: The provisions of this §111.24 adopted to be effective September 1, 1998, 22 TexReg 7623; amended to be effective August 1, 2006, 30 TexReg 1930; amended to be effective February 22, 2009, 34
TexReg 1056.
§111.25. Implementation of Texas Essential Knowledge and Skills for Mathematics, Middle School, Adopted 2012.
(a) The provisions of §§111.26-111.28 of this subchapter shall be implemented by school districts.
(b) No later than August 31, 2013, the commissioner of education shall determine whether instructional materials funding has been made available to Texas public schools for materials that cover the
essential knowledge and skills for mathematics as adopted in §§111.26-111.28 of this subchapter.
(c) If the commissioner makes the determination that instructional materials funding has been made available under subsection (b) of this section, §§111.26-111.28 of this subchapter shall be
implemented beginning with the 2014-2015 school year and apply to the 2014-2015 and subsequent school years.
(d) If the commissioner does not make the determination that instructional materials funding has been made available under subsection (b) of this section, the commissioner shall determine no later
than August 31 of each subsequent school year whether instructional materials funding has been made available. If the commissioner determines that instructional materials funding has been made
available, the commissioner shall notify the State Board of Education and school districts that §§111.26-111.28 of this subchapter shall be implemented for the following school year.
(e) Sections 111.21-111.24 of this subchapter shall be superseded by the implementation of §§111.25-111.28 under this section.
Source: The provisions of this §111.25 adopted to be effective September 10, 2012, 37 TexReg 7109.
§111.26. Grade 6, Adopted 2012.
(a) Introduction.
(1) The desire to achieve educational excellence is the driving force behind the Texas essential knowledge and skills for mathematics, guided by the college and career readiness standards. By
embedding statistics, probability, and finance, while focusing on computational thinking, mathematical fluency, and solid understanding, Texas will lead the way in mathematics education and prepare
all Texas students for the challenges they will face in the 21st century.
(2) The process standards describe ways in which students are expected to engage in the content. The placement of the process standards at the beginning of the knowledge and skills listed for each
grade and course is intentional. The process standards weave the other knowledge and skills together so that students may be successful problem solvers and use mathematics efficiently and effectively
in daily life. The process standards are integrated at every grade level and course. When possible, students will apply mathematics to problems arising in everyday life, society, and the workplace.
Students will use a problem-solving model that incorporates analyzing given information, formulating a plan or strategy, determining a solution, justifying the solution, and evaluating the
problem-solving process and the reasonableness of the solution. Students will select appropriate tools such as real objects, manipulatives, algorithms, paper and pencil, and technology and techniques
such as mental math, estimation, number sense, and generalization and abstraction to solve problems. Students will effectively communicate mathematical ideas, reasoning, and their implications using
multiple representations such as symbols, diagrams, graphs, computer programs, and language. Students will use mathematical relationships to generate solutions and make connections and predictions.
Students will analyze mathematical relationships to connect and communicate mathematical ideas. Students will display, explain, or justify mathematical ideas and arguments using precise mathematical
language in written or oral communication.
(3) The primary focal areas in Grade 6 are number and operations; proportionality; expressions, equations, and relationships; and measurement and data. Students use concepts, algorithms, and
properties of rational numbers to explore mathematical relationships and to describe increasingly complex situations. Students use concepts of proportionality to explore, develop, and communicate
mathematical relationships. Students use algebraic thinking to describe how a change in one quantity in a relationship results in a change in the other. Students connect verbal, numeric, graphic, and
symbolic representations of relationships, including equations and inequalities. Students use geometric properties and relationships, as well as spatial reasoning, to model and analyze situations and
solve problems. Students communicate information about geometric figures or situations by quantifying attributes, generalize procedures from measurement experiences, and use the procedures to solve
problems. Students use appropriate statistics, representations of data, and reasoning to draw conclusions, evaluate arguments, and make recommendations. While the use of all types of technology is
important, the emphasis on algebra readiness skills necessitates the implementation of graphing technology.
(4) Statements that contain the word "including" reference content that must be mastered, while those containing the phrase "such as" are intended as possible illustrative examples.
(b) Knowledge and skills.
(1) Mathematical process standards. The student uses mathematical processes to acquire and demonstrate mathematical understanding. The student is expected to:
(A) apply mathematics to problems arising in everyday life, society, and the workplace;
(B) use a problem-solving model that incorporates analyzing given information, formulating a plan or strategy, determining a solution, justifying the solution, and evaluating the problem-solving
process and the reasonableness of the solution;
(C) select tools, including real objects, manipulatives, paper and pencil, and technology as appropriate, and techniques, including mental math, estimation, and number sense as appropriate, to solve problems;
(D) communicate mathematical ideas, reasoning, and their implications using multiple representations, including symbols, diagrams, graphs, and language as appropriate;
(E) create and use representations to organize, record, and communicate mathematical ideas;
(F) analyze mathematical relationships to connect and communicate mathematical ideas; and
(G) display, explain, and justify mathematical ideas and arguments using precise mathematical language in written or oral communication.
(2) Number and operations. The student applies mathematical process standards to represent and use rational numbers in a variety of forms. The student is expected to:
(A) classify whole numbers, integers, and rational numbers using a visual representation such as a Venn diagram to describe relationships between sets of numbers;
(B) identify a number, its opposite, and its absolute value;
(C) locate, compare, and order integers and rational numbers using a number line;
(D) order a set of rational numbers arising from mathematical and real-world contexts; and
(E) extend representations for division to include fraction notation such as a/b represents the same number as a ÷ b where b ≠ 0.
(3) Number and operations. The student applies mathematical process standards to represent addition, subtraction, multiplication, and division while solving problems and justifying solutions. The
student is expected to:
(A) recognize that dividing by a rational number and multiplying by its reciprocal result in equivalent values;
(B) determine, with and without computation, whether a quantity is increased or decreased when multiplied by a fraction, including values greater than or less than one;
(C) represent integer operations with concrete models and connect the actions with the models to standardized algorithms;
(D) add, subtract, multiply, and divide integers fluently; and
(E) multiply and divide positive rational numbers fluently.
(4) Proportionality. The student applies mathematical process standards to develop an understanding of proportional relationships in problem situations. The student is expected to:
(A) compare two rules verbally, numerically, graphically, and symbolically in the form of y = ax or y = x + a in order to differentiate between additive and multiplicative relationships;
(B) apply qualitative and quantitative reasoning to solve prediction and comparison of real-world problems involving ratios and rates;
(C) give examples of ratios as multiplicative comparisons of two quantities describing the same attribute;
(D) give examples of rates as the comparison by division of two quantities having different attributes, including rates as quotients;
(E) represent ratios and percents with concrete models, fractions, and decimals;
(F) represent benchmark fractions and percents such as 1%, 10%, 25%, 33 1/3%, and multiples of these values using 10 by 10 grids, strip diagrams, number lines, and numbers;
(G) generate equivalent forms of fractions, decimals, and percents using real-world problems, including problems that involve money; and
(H) convert units within a measurement system, including the use of proportions and unit rates.
(5) Proportionality. The student applies mathematical process standards to solve problems involving proportional relationships. The student is expected to:
(A) represent mathematical and real-world problems involving ratios and rates using scale factors, tables, graphs, and proportions;
(B) solve real-world problems to find the whole given a part and the percent, to find the part given the whole and the percent, and to find the percent given the part and the whole, including the use
of concrete and pictorial models; and
(C) use equivalent fractions, decimals, and percents to show equal parts of the same whole.
(6) Expressions, equations, and relationships. The student applies mathematical process standards to use multiple representations to describe algebraic relationships. The student is expected to:
(A) identify independent and dependent quantities from tables and graphs;
(B) write an equation that represents the relationship between independent and dependent quantities from a table; and
(C) represent a given situation using verbal descriptions, tables, graphs, and equations in the form y = kx or y = x + b.
(7) Expressions, equations, and relationships. The student applies mathematical process standards to develop concepts of expressions and equations. The student is expected to:
(A) generate equivalent numerical expressions using order of operations, including whole number exponents and prime factorization;
(B) distinguish between expressions and equations verbally, numerically, and algebraically;
(C) determine if two expressions are equivalent using concrete models, pictorial models, and algebraic representations; and
(D) generate equivalent expressions using the properties of operations: inverse, identity, commutative, associative, and distributive properties.
(8) Expressions, equations, and relationships. The student applies mathematical process standards to use geometry to represent relationships and solve problems. The student is expected to:
(A) extend previous knowledge of triangles and their properties to include the sum of angles of a triangle, the relationship between the lengths of sides and measures of angles in a triangle, and
determining when three lengths form a triangle;
(B) model area formulas for parallelograms, trapezoids, and triangles by decomposing and rearranging parts of these shapes;
(C) write equations that represent problems related to the area of rectangles, parallelograms, trapezoids, and triangles and volume of right rectangular prisms where dimensions are positive rational
numbers; and
(D) determine solutions for problems involving the area of rectangles, parallelograms, trapezoids, and triangles and volume of right rectangular prisms where dimensions are positive rational numbers.
(9) Expressions, equations, and relationships. The student applies mathematical process standards to use equations and inequalities to represent situations. The student is expected to:
(A) write one-variable, one-step equations and inequalities to represent constraints or conditions within problems;
(B) represent solutions for one-variable, one-step equations and inequalities on number lines; and
(C) write corresponding real-world problems given one-variable, one-step equations or inequalities.
(10) Expressions, equations, and relationships. The student applies mathematical process standards to use equations and inequalities to solve problems. The student is expected to:
(A) model and solve one-variable, one-step equations and inequalities that represent problems, including geometric concepts; and
(B) determine if the given value(s) make(s) one-variable, one-step equations or inequalities true.
(11) Measurement and data. The student applies mathematical process standards to use coordinate geometry to identify locations on a plane. The student is expected to graph points in all four
quadrants using ordered pairs of rational numbers.
(12) Measurement and data. The student applies mathematical process standards to use numerical or graphical representations to analyze problems. The student is expected to:
(A) represent numeric data graphically, including dot plots, stem-and-leaf plots, histograms, and box plots;
(B) use the graphical representation of numeric data to describe the center, spread, and shape of the data distribution;
(C) summarize numeric data with numerical summaries, including the mean and median (measures of center) and the range and interquartile range (IQR) (measures of spread), and use these summaries to
describe the center, spread, and shape of the data distribution; and
(D) summarize categorical data with numerical and graphical summaries, including the mode, the percent of values in each category (relative frequency table), and the percent bar graph, and use these
summaries to describe the data distribution.
(13) Measurement and data. The student applies mathematical process standards to use numerical or graphical representations to solve problems. The student is expected to:
(A) interpret numeric data summarized in dot plots, stem-and-leaf plots, histograms, and box plots; and
(B) distinguish between situations that yield data with and without variability.
(14) Personal financial literacy. The student applies mathematical process standards to develop an economic way of thinking and problem solving useful in one's life as a knowledgeable consumer and
investor. The student is expected to:
(A) compare the features and costs of a checking account and a debit card offered by different local financial institutions;
(B) distinguish between debit cards and credit cards;
(C) balance a check register that includes deposits, withdrawals, and transfers;
(D) explain why it is important to establish a positive credit history;
(E) describe the information in a credit report and how long it is retained;
(F) describe the value of credit reports to borrowers and to lenders;
(G) explain various methods to pay for college, including through savings, grants, scholarships, student loans, and work-study; and
(H) compare the annual salary of several occupations requiring various levels of post-secondary education or vocational training and calculate the effects of the different annual salaries on lifetime income.
Source: The provisions of this §111.26 adopted to be effective September 10, 2012, 37 TexReg 7109.
§111.27. Grade 7, Adopted 2012.
(a) Introduction.
(1) The desire to achieve educational excellence is the driving force behind the Texas essential knowledge and skills for mathematics, guided by the college and career readiness standards. By
embedding statistics, probability, and finance, while focusing on computational thinking, mathematical fluency, and solid understanding, Texas will lead the way in mathematics education and prepare
all Texas students for the challenges they will face in the 21st century.
(2) The process standards describe ways in which students are expected to engage in the content. The placement of the process standards at the beginning of the knowledge and skills listed for each
grade and course is intentional. The process standards weave the other knowledge and skills together so that students may be successful problem solvers and use mathematics efficiently and effectively
in daily life. The process standards are integrated at every grade level and course. When possible, students will apply mathematics to problems arising in everyday life, society, and the workplace.
Students will use a problem-solving model that incorporates analyzing given information, formulating a plan or strategy, determining a solution, justifying the solution, and evaluating the
problem-solving process and the reasonableness of the solution. Students will select appropriate tools such as real objects, manipulatives, algorithms, paper and pencil, and technology and techniques
such as mental math, estimation, number sense, and generalization and abstraction to solve problems. Students will effectively communicate mathematical ideas, reasoning, and their implications using
multiple representations such as symbols, diagrams, graphs, computer programs, and language. Students will use mathematical relationships to generate solutions and make connections and predictions.
Students will analyze mathematical relationships to connect and communicate mathematical ideas. Students will display, explain, or justify mathematical ideas and arguments using precise mathematical
language in written or oral communication.
(3) The primary focal areas in Grade 7 are number and operations; proportionality; expressions, equations, and relationships; and measurement and data. Students use concepts, algorithms, and
properties of rational numbers to explore mathematical relationships and to describe increasingly complex situations. Students use concepts of proportionality to explore, develop, and communicate
mathematical relationships, including number, geometry and measurement, and statistics and probability. Students use algebraic thinking to describe how a change in one quantity in a relationship
results in a change in the other. Students connect verbal, numeric, graphic, and symbolic representations of relationships, including equations and inequalities. Students use geometric properties and
relationships, as well as spatial reasoning, to model and analyze situations and solve problems. Students communicate information about geometric figures or situations by quantifying attributes,
generalize procedures from measurement experiences, and use the procedures to solve problems. Students use appropriate statistics, representations of data, and reasoning to draw conclusions, evaluate
arguments, and make recommendations. While the use of all types of technology is important, the emphasis on algebra readiness skills necessitates the implementation of graphing technology.
(4) Statements that contain the word "including" reference content that must be mastered, while those containing the phrase "such as" are intended as possible illustrative examples.
(b) Knowledge and skills.
(1) Mathematical process standards. The student uses mathematical processes to acquire and demonstrate mathematical understanding. The student is expected to:
(A) apply mathematics to problems arising in everyday life, society, and the workplace;
(B) use a problem-solving model that incorporates analyzing given information, formulating a plan or strategy, determining a solution, justifying the solution, and evaluating the problem-solving
process and the reasonableness of the solution;
(C) select tools, including real objects, manipulatives, paper and pencil, and technology as appropriate, and techniques, including mental math, estimation, and number sense as appropriate, to solve problems;
(D) communicate mathematical ideas, reasoning, and their implications using multiple representations, including symbols, diagrams, graphs, and language as appropriate;
(E) create and use representations to organize, record, and communicate mathematical ideas;
(F) analyze mathematical relationships to connect and communicate mathematical ideas; and
(G) display, explain, and justify mathematical ideas and arguments using precise mathematical language in written or oral communication.
(2) Number and operations. The student applies mathematical process standards to represent and use rational numbers in a variety of forms. The student is expected to extend previous knowledge of sets
and subsets using a visual representation to describe relationships between sets of rational numbers.
(3) Number and operations. The student applies mathematical process standards to add, subtract, multiply, and divide while solving problems and justifying solutions. The student is expected to:
(A) add, subtract, multiply, and divide rational numbers fluently; and
(B) apply and extend previous understandings of operations to solve problems using addition, subtraction, multiplication, and division of rational numbers.
(4) Proportionality. The student applies mathematical process standards to represent and solve problems involving proportional relationships. The student is expected to:
(A) represent constant rates of change in mathematical and real-world problems given pictorial, tabular, verbal, numeric, graphical, and algebraic representations, including d = rt;
(B) calculate unit rates from rates in mathematical and real-world problems;
(C) determine the constant of proportionality (k = y/x) within mathematical and real-world problems (a sketch follows this list);
(D) solve problems involving ratios, rates, and percents, including multi-step problems involving percent increase and percent decrease, and financial literacy problems; and
(E) convert between measurement systems, including the use of proportions and the use of unit rates.
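A minimal sketch for items (4)(A) and (4)(C): checking that a hypothetical distance-time table is proportional and recovering the constant of proportionality k = y/x (here the rate r in d = rt).

    # Constant of proportionality k = y/x from a hypothetical table.
    times = [1.0, 2.0, 3.0, 4.0]             # hours
    distances = [55.0, 110.0, 165.0, 220.0]  # miles
    ks = [d / t for t, d in zip(times, distances)]
    assert all(k == ks[0] for k in ks)       # constant ratio => proportional
    print(ks[0])                             # 55.0, so d = 55t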
(5) Proportionality. The student applies mathematical process standards to use geometry to describe or solve problems involving proportional relationships. The student is expected to:
(A) generalize the critical attributes of similarity, including ratios within and between similar shapes;
(B) describe π as the ratio of the circumference of a circle to its diameter; and
(C) solve mathematical and real-world problems involving similar shape and scale drawings.
(6) Proportionality. The student applies mathematical process standards to use probability and statistics to describe or solve problems involving proportional relationships. The student is expected to:
(A) represent sample spaces for simple and compound events using lists and tree diagrams;
(B) select and use different simulations to represent simple and compound events with and without technology;
(C) make predictions and determine solutions using experimental data for simple and compound events;
(D) make predictions and determine solutions using theoretical probability for simple and compound events;
(E) find the probabilities of a simple event and its complement and describe the relationship between the two;
(F) use data from a random sample to make inferences about a population;
(G) solve problems using data represented in bar graphs, dot plots, and circle graphs, including part-to-whole and part-to-part comparisons and equivalents;
(H) solve problems using qualitative and quantitative predictions and comparisons from simple experiments; and
(I) determine experimental and theoretical probabilities related to simple and compound events using data and sample spaces.
(7) Expressions, equations, and relationships. The student applies mathematical process standards to represent linear relationships using multiple representations. The student is expected to
represent linear relationships using verbal descriptions, tables, graphs, and equations that simplify to the form y = mx + b.
(8) Expressions, equations, and relationships. The student applies mathematical process standards to develop geometric relationships with volume. The student is expected to:
(A) model the relationship between the volume of a rectangular prism and a rectangular pyramid having both congruent bases and heights and connect that relationship to the formulas;
(B) explain verbally and symbolically the relationship between the volume of a triangular prism and a triangular pyramid having both congruent bases and heights and connect that relationship to the
formulas; and
(C) use models to determine the approximate formulas for the circumference and area of a circle and connect the models to the actual formulas.
(9) Expressions, equations, and relationships. The student applies mathematical process standards to solve geometric problems. The student is expected to:
(A) solve problems involving the volume of rectangular prisms, triangular prisms, rectangular pyramids, and triangular pyramids;
(B) determine the circumference and area of circles;
(C) determine the area of composite figures containing combinations of rectangles, squares, parallelograms, trapezoids, triangles, semicircles, and quarter circles; and
(D) solve problems involving the lateral and total surface area of a rectangular prism, rectangular pyramid, triangular prism, and triangular pyramid by determining the area of the shape's net.
(10) Expressions, equations, and relationships. The student applies mathematical process standards to use one-variable equations and inequalities to represent situations. The student is expected to:
(A) write one-variable, two-step equations and inequalities to represent constraints or conditions within problems;
(B) represent solutions for one-variable, two-step equations and inequalities on number lines; and
(C) write a corresponding real-world problem given a one-variable, two-step equation or inequality.
(11) Expressions, equations, and relationships. The student applies mathematical process standards to solve one-variable equations and inequalities. The student is expected to:
(A) model and solve one-variable, two-step equations and inequalities;
(B) determine if the given value(s) make(s) one-variable, two-step equations and inequalities true; and
(C) write and solve equations using geometry concepts, including the sum of the angles in a triangle, and angle relationships.
(12) Measurement and data. The student applies mathematical process standards to use statistical representations to analyze data. The student is expected to:
(A) compare two groups of numeric data using comparative dot plots or box plots by comparing their shapes, centers, and spreads;
(B) use data from a random sample to make inferences about a population; and
(C) compare two populations based on data in random samples from these populations, including informal comparative inferences about differences between the two populations.
(13) Personal financial literacy. The student applies mathematical process standards to develop an economic way of thinking and problem solving useful in one's life as a knowledgeable consumer and
investor. The student is expected to:
(A) calculate the sales tax for a given purchase and calculate income tax for earned wages;
(B) identify the components of a personal budget, including income; planned savings for college, retirement, and emergencies; taxes; and fixed and variable expenses, and calculate what percentage
each category comprises of the total budget;
(C) create and organize a financial assets and liabilities record and construct a net worth statement;
(D) use a family budget estimator to determine the minimum household budget and average hourly wage needed for a family to meet its basic needs in the student's city or another large city nearby;
(E) calculate and compare simple interest and compound interest earnings; and
(F) analyze and compare monetary incentives, including sales, rebates, and coupons.
Source: The provisions of this §111.27 adopted to be effective September 10, 2012, 37 TexReg 7109.
§111.28. Grade 8, Adopted 2012.
(a) Introduction.
(1) The desire to achieve educational excellence is the driving force behind the Texas essential knowledge and skills for mathematics, guided by the college and career readiness standards. By
embedding statistics, probability, and finance, while focusing on computational thinking, mathematical fluency, and solid understanding, Texas will lead the way in mathematics education and prepare
all Texas students for the challenges they will face in the 21st century.
(2) The process standards describe ways in which students are expected to engage in the content. The placement of the process standards at the beginning of the knowledge and skills listed for each
grade and course is intentional. The process standards weave the other knowledge and skills together so that students may be successful problem solvers and use mathematics efficiently and effectively
in daily life. The process standards are integrated at every grade level and course. When possible, students will apply mathematics to problems arising in everyday life, society, and the workplace.
Students will use a problem-solving model that incorporates analyzing given information, formulating a plan or strategy, determining a solution, justifying the solution, and evaluating the
problem-solving process and the reasonableness of the solution. Students will select appropriate tools such as real objects, manipulatives, algorithms, paper and pencil, and technology and techniques
such as mental math, estimation, number sense, and generalization and abstraction to solve problems. Students will effectively communicate mathematical ideas, reasoning, and their implications using
multiple representations such as symbols, diagrams, graphs, computer programs, and language. Students will use mathematical relationships to generate solutions and make connections and predictions.
Students will analyze mathematical relationships to connect and communicate mathematical ideas. Students will display, explain, or justify mathematical ideas and arguments using precise mathematical
language in written or oral communication.
(3) The primary focal areas in Grade 8 are proportionality; expressions, equations, relationships, and foundations of functions; and measurement and data. Students use concepts, algorithms, and
properties of real numbers to explore mathematical relationships and to describe increasingly complex situations. Students use concepts of proportionality to explore, develop, and communicate
mathematical relationships. Students use algebraic thinking to describe how a change in one quantity in a relationship results in a change in the other. Students connect verbal, numeric, graphic, and
symbolic representations of relationships, including equations and inequalities. Students begin to develop an understanding of functional relationships. Students use geometric properties and
relationships, as well as spatial reasoning, to model and analyze situations and solve problems. Students communicate information about geometric figures or situations by quantifying attributes,
generalize procedures from measurement experiences, and use the procedures to solve problems. Students use appropriate statistics, representations of data, and reasoning to draw conclusions, evaluate
arguments, and make recommendations. While the use of all types of technology is important, the emphasis on algebra readiness skills necessitates the implementation of graphing technology.
(4) Statements that contain the word "including" reference content that must be mastered, while those containing the phrase "such as" are intended as possible illustrative examples.
(b) Knowledge and skills.
(1) Mathematical process standards. The student uses mathematical processes to acquire and demonstrate mathematical understanding. The student is expected to:
(A) apply mathematics to problems arising in everyday life, society, and the workplace;
(B) use a problem-solving model that incorporates analyzing given information, formulating a plan or strategy, determining a solution, justifying the solution, and evaluating the problem-solving
process and the reasonableness of the solution;
(C) select tools, including real objects, manipulatives, paper and pencil, and technology as appropriate, and techniques, including mental math, estimation, and number sense as appropriate, to solve problems;
(D) communicate mathematical ideas, reasoning, and their implications using multiple representations, including symbols, diagrams, graphs, and language as appropriate;
(E) create and use representations to organize, record, and communicate mathematical ideas;
(F) analyze mathematical relationships to connect and communicate mathematical ideas; and
(G) display, explain, and justify mathematical ideas and arguments using precise mathematical language in written or oral communication.
(2) Number and operations. The student applies mathematical process standards to represent and use real numbers in a variety of forms. The student is expected to:
(A) extend previous knowledge of sets and subsets using a visual representation to describe relationships between sets of real numbers;
(B) approximate the value of an irrational number, including π and square roots of numbers less than 225, and locate that rational number approximation on a number line;
(C) convert between standard decimal notation and scientific notation; and
(D) order a set of real numbers arising from mathematical and real-world contexts.
(3) Proportionality. The student applies mathematical process standards to use proportional relationships to describe dilations. The student is expected to:
(A) generalize that the ratio of corresponding sides of similar shapes are proportional, including a shape and its dilation;
(B) compare and contrast the attributes of a shape and its dilation(s) on a coordinate plane; and
(C) use an algebraic representation to explain the effect of a given positive rational scale factor applied to two-dimensional figures on a coordinate plane with the origin as the center of dilation.
(4) Proportionality. The student applies mathematical process standards to explain proportional and non-proportional relationships involving slope. The student is expected to:
(A) use similar right triangles to develop an understanding that slope, m, given as the rate comparing the change in y-values to the change in x-values, (y₂ - y₁)/(x₂ - x₁), is the same for any two points (x₁, y₁) and (x₂, y₂) on the same line;
(B) graph proportional relationships, interpreting the unit rate as the slope of the line that models the relationship; and
(C) use data from a table or graph to determine the rate of change or slope and y-intercept in mathematical and real-world problems.
(5) Proportionality. The student applies mathematical process standards to use proportional and non-proportional relationships to develop foundational concepts of functions. The student is expected to:
(A) represent linear proportional situations with tables, graphs, and equations in the form of y = kx;
(B) represent linear non-proportional situations with tables, graphs, and equations in the form of y = mx + b, where b ≠ 0;
(C) contrast bivariate sets of data that suggest a linear relationship with bivariate sets of data that do not suggest a linear relationship from a graphical representation;
(D) use a trend line that approximates the linear relationship between bivariate sets of data to make predictions;
(E) solve problems involving direct variation;
(F) distinguish between proportional and non-proportional situations using tables, graphs, and equations in the form y = kx or y = mx + b, where b ≠ 0;
(G) identify functions using sets of ordered pairs, tables, mappings, and graphs;
(H) identify examples of proportional and non-proportional functions that arise from mathematical and real-world problems; and
(I) write an equation in the form y = mx + b to model a linear relationship between two quantities using verbal, numerical, tabular, and graphical representations.
(6) Expressions, equations, and relationships. The student applies mathematical process standards to develop mathematical relationships and make connections to geometric formulas. The student is
expected to:
(A) describe the volume formula V = Bh of a cylinder in terms of its base area and its height;
(B) model the relationship between the volume of a cylinder and a cone having both congruent bases and heights and connect that relationship to the formulas; and
(C) use models and diagrams to explain the Pythagorean theorem.
(7) Expressions, equations, and relationships. The student applies mathematical process standards to use geometry to solve problems. The student is expected to:
(A) solve problems involving the volume of cylinders, cones, and spheres;
(B) use previous knowledge of surface area to make connections to the formulas for lateral and total surface area and determine solutions for problems involving rectangular prisms, triangular prisms,
and cylinders;
(C) use the Pythagorean Theorem and its converse to solve problems; and
(D) determine the distance between two points on a coordinate plane using the Pythagorean Theorem.
(8) Expressions, equations, and relationships. The student applies mathematical process standards to use one-variable equations or inequalities in problem situations. The student is expected to:
(A) write one-variable equations or inequalities with variables on both sides that represent problems using rational number coefficients and constants;
(B) write a corresponding real-world problem when given a one-variable equation or inequality with variables on both sides of the equal sign using rational number coefficients and constants;
(C) model and solve one-variable equations with variables on both sides of the equal sign that represent mathematical and real-world problems using rational number coefficients and constants; and
(D) use informal arguments to establish facts about the angle sum and exterior angle of triangles, the angles created when parallel lines are cut by a transversal, and the angle-angle criterion for
similarity of triangles.
(9) Expressions, equations, and relationships. The student applies mathematical process standards to use multiple representations to develop foundational concepts of simultaneous linear equations.
The student is expected to identify and verify the values of x and y that simultaneously satisfy two linear equations in the form y = mx + b from the intersections of the graphed equations.
(10) Two-dimensional shapes. The student applies mathematical process standards to develop transformational geometry concepts. The student is expected to:
(A) generalize the properties of orientation and congruence of rotations, reflections, translations, and dilations of two-dimensional shapes on a coordinate plane;
(B) differentiate between transformations that preserve congruence and those that do not;
(C) explain the effect of translations, reflections over the x- or y-axis, and rotations limited to 90°, 180°, 270°, and 360° as applied to two-dimensional shapes on a coordinate plane using an
algebraic representation; and
(D) model the effect on linear and area measurements of dilated two-dimensional shapes.
(11) Measurement and data. The student applies mathematical process standards to use statistical procedures to describe data. The student is expected to:
(A) construct a scatterplot and describe the observed data to address questions of association such as linear, non-linear, and no association between bivariate data;
(B) determine the mean absolute deviation and use this quantity as a measure of the average distance data are from the mean using a data set of no more than 10 data points; and
(C) simulate generating random samples of the same size from a population with known characteristics to develop the notion of a random sample being representative of the population from which it was selected.
(12) Personal financial literacy. The student applies mathematical process standards to develop an economic way of thinking and problem solving useful in one's life as a knowledgeable consumer and
investor. The student is expected to:
(A) solve real-world problems comparing how interest rate and loan length affect the cost of credit;
(B) calculate the total cost of repaying a loan, including credit cards and easy access loans, under various rates of interest and over different periods using an online calculator;
(C) explain how small amounts of money invested regularly, including money saved for college and retirement, grow over time;
(D) calculate and compare simple interest and compound interest earnings;
(E) identify and explain the advantages and disadvantages of different payment methods;
(F) analyze situations to determine if they represent financially responsible decisions and identify the benefits of financial responsibility and the costs of financial irresponsibility; and
(G) estimate the cost of a two-year and four-year college education, including family contribution, and devise a periodic savings plan for accumulating the money needed to contribute to the total
cost of attendance for at least the first year of college.
Source: The provisions of this §111.28 adopted to be effective September 10, 2012, 37 TexReg 7109.
For additional information, email rules@tea.state.tx.us.
Probability problem
September 30th 2009, 07:22 PM #1
Here's a problem I'm stuck with:
A survey for brand recognition is done and it is determined that 70% of consumers have heard of Mike's Mechanic Shop. A survey of 900 randomly selected consumers is to be conducted. For such
groups of 900, would it be unusual to get 527 consumers who recognized this shop? So I have to show all the statistics.....
So I'm not clear whether I divide 527/900? Or do I multiply (900)*(.70)?
And why would it be unusual? Do I have to draw a bell shaped curve as well (what would it look like)?
I need this explained step by step.
Thank you.
So there was an initial survey that gave the figure of 70% and now there is another survey that will have 900 participants?
$\frac{527}{900} \approx 59\%$
you mentioned a bell shaped curve, are you given a standard deviation?
Assuming the figure of 70% is correct, the number of customers in the survey who have heard of the shop should have a Binomial distribution with p = 0.70 and n = 900. The mean of the distribution
is np = 630. So I would interpret the question as asking what is the probability that 527 customers or fewer will recognize the shop. I.e., if X has a Binomial distribution with p = 0.70 and n =
900, what is the probability that $X \leq 527$?
You certainly wouldn't want to calculate this probability by hand, but you might have a calculator that will do it for you. Or you could use a Normal approximation to the Binomial-- that would be
my choice.
Last edited by awkward; October 3rd 2009 at 09:55 AM. Reason: fixed error
I actually remember her mentioning standard deviation but I can't find my notes that go with it! I do clearly remember her saying to use binomial distribution though. =/
Please note that I wrote down a wrong number in my original post, which I went back and corrected.
If you don't know the formula for the standard deviation of a binomial distribution, it is $\sqrt{np(1-p)}$.
Binomial distribution - Wikipedia, the free encyclopedia
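A quick numerical check of that approach (a sketch in Python with SciPy; the cutoff 527, n = 900, and p = 0.70 come from the problem statement):

from math import sqrt
from scipy.stats import binom, norm

n, p = 900, 0.70
mean = n * p                   # 630
sd = sqrt(n * p * (1 - p))     # sqrt(189), about 13.75

# Exact binomial probability of 527 or fewer recognitions
exact = binom.cdf(527, n, p)

# Normal approximation with a continuity correction
approx = norm.cdf((527 + 0.5 - mean) / sd)

print(exact, approx)  # both are astronomically small (~1e-14)

Either way the probability is vanishingly small, so observing only 527 recognitions in a group of 900 would be very unusual if the 70% figure is correct.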
Mixing math and metaphysics (for fun and profit)
Soror PhoenixAngel's recent post pleasantly reminded me about the real analysis and mathematical logic courses i took a number of years ago, and about Paul Halmos' lovely little textbook "Naive Set
Theory." (It's "naive" not because it's easy, but because Prof. Halmos didn't focus too much on avoiding the minefield of potential paradoxes when discussing the infinite. Take a look at the cover
for a hint at the easiest of the paradoxes.) These topics share in common with other branches of mathematics the intention to find the minimal set of axioms needed to construct everything -- or at
least, as much of mathematics as possible. The Peano Axioms are a good example; from them come all the real numbers. It didn't occur to me at the time that they also define a concrete representation
of the integers, from which one can construct a machine to do arithmetic. The resulting "unary" arithmetic is slow compared with other number systems, like the binary numbers found in modern
computers, but it's still perfectly functional. When I took a one-semester course tracing the development of Goedel's Incompleteness Theorem, it became clearer to me how a set of axioms or even the
process of deducing theorems can be a computational process. Goedel proved the theorem by translating mathematical statements into numbers, and operating on those numbers. Without realizing it, he
had defined a computer for mathematical logic: a terribly slow and space-inefficient computer, but plenty capable. That was a hard semester for me, but i saw computer science in an entirely different
way afterwards.
Essays like the respected Soror's remind me of the central role of computation in understanding the universe, whether that be mathematical understanding, as in Goedel's theorem, or metaphysical
understanding. Abstracting metaphysical processes as symbols and transformations on symbols has a long history in esoteric thought. Furthermore, both practicing mathematicians and practicing
metaphysicians realize that symbols aren't detached from that which they symbolize. This is because we pick symbols as tools to model reality. A hammer has the shape it has because it's useful for
pounding nails, and it lacks that which would interfere with that task. Similarly, good mathematical definitions (or good source code) are good because they point back to the concrete examples they
model, without including details that would interfere with their utility in making deductions. It's good mental exercise to make and refine definitions, to refactor source code, and to draw analogies
and pare away unnecessary details. It's always fun for me to read articles like Soror PhoenixAngel's and watch that process of drawing analogies in action.
2 comments:
PhoenixAngel said...
Thank you! You honor me. Though I am not as learned as you in the field of mathematics, I do enjoy looking for a good analogy, especially one that us left brain types can really understand. The
layering of meanings is there, within the Knowledge, for anyone to seek, find, learn from and enjoy :)
HilbertAstronaut said...
i haven't forgotten your comment, dear Soror :) i hope you'll forgive me the post that follows. i come off like an awful racist, but i'm trying to say something about how an encounter with
another culture made me question my own capacity for Will.
[Numpy-discussion] Contiguous flags not quite right yet.
Charles R Harris charlesr.harris@gmail....
Thu Mar 22 15:25:11 CDT 2007
In [18]:a = array([1,2,3])
In [19]:a.flags
C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
In [20]:a.shape = (1,3)
In [21]:a.flags
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
In [22]:a.shape = (3,1)
In [23]:a.flags
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
All three shapes are in fact both C_CONTIGUOUS and F_CONTIGUOUS, even though the flags above disagree for the reshaped arrays. I think ignoring all 1's in the shape would give the right results for otherwise contiguous arrays, because in those positions the index can only take the value 0.
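Here is a sketch of that rule in Python (is_contiguous is a hypothetical helper, not part of the NumPy API; note that later NumPy releases adopted essentially this relaxed-strides behavior):

import numpy as np

def is_contiguous(a, order='C'):
    # Ignore axes of length 1: an index there can only take the value 0,
    # so the stride along such an axis is irrelevant.
    dims = [(n, s) for n, s in zip(a.shape, a.strides) if n != 1]
    if order == 'F':
        dims.reverse()
    expected = a.itemsize
    for n, s in reversed(dims):  # walk outward from the fastest-varying axis
        if s != expected:
            return False
        expected *= n
    return True

a = np.array([1, 2, 3])
a.shape = (1, 3)
print(is_contiguous(a, 'C'), is_contiguous(a, 'F'))  # True True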
Wolfram Demonstrations Project
Chemical Reactions Described by the Lorenz Equations
This Demonstration analyzes the behavior of a chemical reaction scheme that is described by the Lorenz equations:
dx/dt = σ(y − x),  dy/dt = x(ρ − z) − y,  dz/dt = xy − βz.
Parameters that lead to interesting behavior are the classic values σ = 10, ρ = 28, β = 8/3; the strange attractor that evolves from these equations spans both positive and negative values of x and y. If we interpret these symbols as representing concentrations of chemical species, they cannot be negative; a shift of the x and y axes can give new variables that are always positive [1]. With a suitable choice of shift constants, the equations describing the system can be rewritten so that the shifted variables represent concentrations of chemical species.
The reactions required to give the Lorenz equations are shown in section (1.7) of [1]. You can vary the time and the initial values of the species to see the evolution of the system. There is an
unstable steady state and an unstable fixed point.
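A minimal sketch of integrating the classic Lorenz system with SciPy (the standard chaotic parameters σ = 10, ρ = 28, β = 8/3 are assumed here; the Demonstration's shifted chemical variables are not reproduced):

from scipy.integrate import solve_ivp

def lorenz(t, u, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Integrate from an arbitrary starting point and inspect the trajectory
sol = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0], max_step=0.01)
print(sol.y[:, -1])  # state at t = 40; x and y wander over both signs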
[1] D. Poland, "Cooperative Catalysis and Chemical Chaos: A Chemical Model for the Lorenz Equations," Physica D, 65(1–2), 1993 pp. 86–99.
parameters in arithmetic induction axiom schemas
The induction schema of Peano Arithmetic is standardly given as the universal closure of $\phi(0)\land \forall x (\phi(x)\rightarrow \phi(x+1)) \rightarrow \forall x\phi(x)$. However, since the
language of arithmetic has a name for every standard number, it is not obvious (to a beginner like me) why parameters are necessary in the induction schema; why not restrict to the case where $x$ is
the only free variable in $\phi$? Does having parameters in the induction schema really make the system stronger, and, if so, how is that proven? Are there natural theorems that can only or most
easily be proven using the stronger system? Is the weaker system of any interest?
lo.logic peano-arithmetic arithmetic
4 Answers
The two theories are equivalent. To see this, let's assume that we have the parameter-free induction, and suppose that $\phi(x,y)$ is a formula with two free variables. Suppose we have a
model $M$, satisfying the parameter-free induction, and there is a $b\in M$ such that $\phi(x,b)$ obeys the hypothesis of the induction scheme with parameter $b$, but not the conclusion. I
claim that there is a least such $b$ in $M$. The reason is that the collection of such $b$ violating the induction scheme for $\phi(x,b)$ with parameter $b$ is a parameter-free definable
subset of $M$, since this property is expressible, but the parameter-free induction scheme proves that every nonempty definable set $B$ has a least member, because otherwise the assertion
$\psi(x)$ expressing that $x$ is below all members of $B$ would be inductive and hence universal, contrary to $B$ being nonempty. So there is a least such $b$.
In particular, the least such $b$ is actually definable, and so we do not actually need it as a parameter after all, and so the induction scheme for $\phi(x,b)$ follows by the accepted parameter-free induction scheme. So there can be no such $b$ for which the parameter-induction scheme fails.
Thus, the theory of parameter-free induction implies the full theory you mention.
$\def\fii{\varphi}$While the answers above resolve the question, I will mention that IMHO the simplest way of deriving induction for a formula $\fii(x,\vec p)$ with parameters $\vec
p$ is to use parameter-free induction on the formula
$$\psi(x)=\forall\vec p\\,[\fii(0,\vec p)\land\forall y\\,(\fii(y,\vec p)\to\fii(y+1,\vec p))\to\fii(x,\vec p)].$$
In fact, this derives the induction schema with parameters from the (parameter-free) induction rule, since $\psi(0)$ and $\psi(x)\to\psi(x+1)$ are provable in Q without any assumptions on $\fii$: the conclusion of $\psi(0)$ is already among its hypotheses, and $\psi(x)\to\psi(x+1)$ follows by applying $\psi(x)$ to get $\fii(x,\vec p)$ and then the step hypothesis to get $\fii(x+1,\vec p)$.
That's pretty tricky! – Marian Oct 27 '11 at 11:54
Thanks for the clear answer. I always wonder why logicians don't answer in a more formal way. This is clear positive exception. – Lucas K. Oct 27 '11 at 14:15
You’re welcome. – Emil Jeřábek Oct 27 '11 at 14:41
While parameter-free induction and parameterized induction are equivalent, there is an important subtlety which often justifies the addition of parameters.
Suppose that $\phi(x,p)$ is such that $$\exists p(\phi(0,p) \land \forall x(\phi(x,p) \to \phi(x+1,p)) \land \lnot \forall x \phi(x,p)).$$ Joel's trick is that $$p_0 = \min\lbrace p : \phi
(0,p) \land \forall x(\phi(x,p) \to \phi(x+1,p)) \land \lnot\forall x\phi(x,p)\rbrace$$ is definable without parameters. However, the existence of $p_0$ is another instance of induction
(in the guise of the least element principle).
The complexity of a formula in arithmetic is often measured by counting the number of quantifier alternations when put into prenex form (often ignoring bounded quantifiers). With this measure, the complexity of the induction that justifies the existence of $p_0$ is strictly greater than that of the original formula $\phi(x,p)$. So there is a price to pay for removing parameters.
Therefore, when considering arithmetical theories with limited forms of induction ($EFA$, $PRA$, $IOpen$, $I\Delta_n$, $I\Sigma_n$) it is necessary to include parameters. It is also
natural to think of $PA$ as the union of these restricted theories, which leads to the inclusion of parameters when formulating the induction scheme.
You can prove that the two theories are, in fact, equivalent. By induction (NB: 'meta-induction') on the number of parameters, we can reduce the claim to the case where you have a theory
$T$ with the usual induction schema and a theory $T'$ that is an extension of $T$ by a one-parameter induction schema.
So assume $T$ and $T'$ are not equivalent. This implies that there is a model $\mathcal{M} \vDash T$ with $\mathcal{M} \nvDash T'$, and a two-place open sentence $\phi(x,y)$ such that
$$\mathcal{M} \vDash \phi(0,\beta) \wedge (\forall x (\phi(x,\beta) \rightarrow \phi(x+1,\beta))) $$
but also,
$$\mathcal{M} \nvDash \forall x \phi(x,\beta)$$
for some $\beta \in \mathcal{M}$. But note now that the above is equivalent to
$$ \mathcal{M} \vDash \exists y (\phi(0,y) \wedge (\forall z (\phi(z,y) \rightarrow \phi(z+1,y))) \wedge \neg \forall z \phi(z,y) $$
and if you call this last sentence $S$ then you get that $S \wedge \phi ' (x)$ violates the usual induction-schema, where $\phi ' (x)$ is the one-place open sentence we get by by closing
$y$ under the existential quantifier in $S$. That is to say we have $S \wedge \phi ' (0)$ and $ \forall x (S \wedge \phi ' (x) \rightarrow S \wedge \phi ' (x+1))$ but not $\forall x S \
wedge \phi ' (x)$. But that is a contradiction since $\mathcal{M}$ is a model of $T$. Hence the two theories are equivalent.
Are some of your $b$'s supposed to be $\beta$'s? – David Speyer Oct 26 '11 at 19:19
corrected - - - – Chuck Oct 26 '11 at 19:24
On symmetric signatures in holographic algorithms
- Electronic Colloquium on Computational Complexity Report, 2007
"... We develop the theory of holographic algorithms. We give characterizations of algebraic varieties of realizable symmetric generators and recognizers on the basis manifold, and a polynomial time
decision algorithm for the simultaneous realizability problem. Using the general machinery we are able to ..."
Cited by 22 (11 self)
Add to MetaCart
We develop the theory of holographic algorithms. We give characterizations of algebraic varieties of realizable symmetric generators and recognizers on the basis manifold, and a polynomial time
decision algorithm for the simultaneous realizability problem. Using the general machinery we are able to give unexpected holographic algorithms for some counting problems, modulo certain Mersenne
type integers. These counting problems are #P-complete without the moduli. Going beyond symmetric signatures, we define d-admissibility and d-realizability for general signatures, and give a
characterization of 2admissibility and some general constructions of admissible and realizable families. 1
- Electronic Colloquium on Computational Complexity Report, 2007
"... Holographic algorithms are a novel approach to design polynomial time computations using linear superpositions. Most holographic algorithms are designed with basis vectors of dimension 2.
Recently Valiant showed that a basis of dimension 4 can be used to solve in P an interesting (restrictive SAT) c ..."
Cited by 7 (1 self)
Add to MetaCart
Holographic algorithms are a novel approach to design polynomial time computations using linear superpositions. Most holographic algorithms are designed with basis vectors of dimension 2. Recently
Valiant showed that a basis of dimension 4 can be used to solve in P an interesting (restrictive SAT) counting problem mod 7. This problem without modulo 7 is #P-complete, and counting mod 2 is
NP-hard. We give a general collapse theorem for bases of dimension 4 to dimension 2 in the holographic algorithms framework. We also define an extension of holographic algorithms to allow more
general support vectors. Finally we give a Basis Folding Theorem showing that in a natural setting the support vectors can be simulated by bases of dimension 2. 1
"... We explore the intricate interdependent relationship among counting problems, considered from three frameworks for such problems: Holant Problems, counting CSP and weighted H-colorings. We
consider these problems for general complex valued functions that take boolean inputs. We show that results fro ..."
Cited by 6 (4 self)
Add to MetaCart
We explore the intricate interdependent relationship among counting problems, considered from three frameworks for such problems: Holant Problems, counting CSP and weighted H-colorings. We consider
these problems for general complex valued functions that take boolean inputs. We show that results from one framework can be used to derive results in another, and this happens in both directions.
Holographic reductions discover an underlying unity, which is only revealed when these counting problems are investigated in the complex domain C. We prove three complexity dichotomy theorems,
leading to a general theorem for Holant c problems. This is the natural class of Holant problems where one can assign constants 0 or 1. More specifically, given any signature grid on G = (V, E) over
a set F of symmetric functions, we completely classify the complexity to be in P or #P-hard, according to F, of Σ_{σ:E→{0,1}} Π_{v∈V} f_v(σ|_{E(v)}), where f_v ∈ F ∪ {0, 1} (0, 1 are the unary constant 0, 1 functions). Not only is holographic reduction the main tool, but also the final dichotomy can only be naturally stated in the language of holographic transformations. The proof goes through another
dichotomy theorem on boolean complex weighted
"... Valiant introduced matchgate computation and holographic algorithms. A number of seemingly exponential time problems can be solved by this novel algorithmic paradigm in polynomial time. We show
that, in a very strong sense, matchgate computations and holographic algorithms based on them provide a un ..."
Cited by 4 (1 self)
Add to MetaCart
Valiant introduced matchgate computation and holographic algorithms. A number of seemingly exponential time problems can be solved by this novel algorithmic paradigm in polynomial time. We show that,
in a very strong sense, matchgate computations and holographic algorithms based on them provide a universal methodology to a broad class of counting problems studied in statistical physics community
for decades. They capture precisely those problems which are #P-hard on general graphs but computable in polynomial time on planar graphs. More precisely, we prove complexity dichotomy theorems in
the framework of counting CSP problems. The local constraint functions take Boolean inputs, and can be arbitrary real-valued symmetric functions. We prove that, every problem in this class belongs to
precisely three categories: (1) those which are tractable (i.e., polynomial time computable) on general graphs, or (2) those which are #P-hard on general graphs but tractable on planar graphs, or (3)
those which are #P-hard even on planar graphs. The classification criteria
"... Leslie Valiant recently proposed a theory of holographic algorithms. These novel algorithms achieve exponential speed-ups for certain computational problems compared to naive algorithms for the
same problems. The methodology uses Pfaffians and (planar) perfect matchings as basic computational primit ..."
Cited by 2 (2 self)
Add to MetaCart
Leslie Valiant recently proposed a theory of holographic algorithms. These novel algorithms achieve exponential speed-ups for certain computational problems compared to naive algorithms for the same
problems. The methodology uses Pfaffians and (planar) perfect matchings as basic computational primitives, and attempts to create exponential cancellations in computation. In this article we survey
this new theory of matchgate computations and holographic algorithms.
"... We prove a dichotomy theorem for a class of counting problems expressible by Boolean signatures. The proof methods are holographic reductions and interpolations. We show that interpolatability
provides a universal strategy to prove #P-hardness for this class of problems. For these problems whenever ..."
Cited by 1 (0 self)
Add to MetaCart
We prove a dichotomy theorem for a class of counting problems expressible by Boolean signatures. The proof methods are holographic reductions and interpolations. We show that interpolatability
provides a universal strategy to prove #P-hardness for this class of problems. For these problems whenever holographic reductions followed by interpolations fail to prove #P-hardness, we can show
that the problems are actually solvable in polynomial time. 1
"... 2011 To my parents and my wife ii iii Curriculum Vitae ..."
"... This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the
authors institution and sharing with colleagues. Other uses, including reproduction and distribution, or sel ..."
Add to MetaCart
This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors
institution and sharing with colleagues. Other uses, including reproduction and distribution, or selling or licensing copies, or posting to personal, institutional or third party websites are
prohibited. In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further
information regarding Elsevier’s archiving and manuscript policies are encouraged to visit:
"... Abstract. We prove a complexity dichotomy theorem for symmetric complex-weighted Boolean #CSP when the constraint graph of the input must be planar. The problems that are #P-hard over general
graphs but tractable over planar graphs are precisely those with a holographic reduction to matchgates. This ..."
Add to MetaCart
Abstract. We prove a complexity dichotomy theorem for symmetric complex-weighted Boolean #CSP when the constraint graph of the input must be planar. The problems that are #P-hard over general graphs
but tractable over planar graphs are precisely those with a holographic reduction to matchgates. This generalizes a theorem of Cai, Lu, and Xia for the case of real weights. We also obtain a
dichotomy theorem for a symmetric arity 4 signature with complex weights in the planar Holant framework, which we use in the proof of our #CSP dichotomy. In particular, we reduce the problem of
evaluating the Tutte polynomial of a planar graph at the point (3, 3) to counting the number of Eulerian orientations over planar 4-regular graphs to show the latter is #P-hard. This strengthens a
theorem by Huang and Lu to the planar setting. 1
Lynn, MA Algebra Tutor
Find a Lynn, MA Algebra Tutor
...I have been teaching at North Reading High School for 10 years, and I enjoy working with students. I have years of experience with tutoring one-on-one including honors students looking for that
top grade, and students just trying to pass the course. In addition, I am married with 3 children, aged 7, 5, and 18 months, so I have experience with a wide range of ages.
19 Subjects: including algebra 2, algebra 1, physics, SAT math
...I will travel throughout the area to meet in your home, library, or wherever is comfortable for you. Materials Physics Research Associate, Harvard, current; Geophysics postdoctoral fellow, MIT, 2010-2012; Physics PhD, Brandeis University, 2010 (includes experience teaching and lecturing Physics)...
16 Subjects: including algebra 1, algebra 2, calculus, physics
...Precalculus topics include Polynomial functions, Rational functions, Inverse functions, Exponential functions, Logarithmic functions, Sequences and series, Binomial theorem, Vectors, etc. I've worked with students taking precalculus at Winchester High School and Medford High School. I am a native Chinese speaker.
5 Subjects: including algebra 1, algebra 2, physics, precalculus
...I've tutored nearly all the students I've worked with for many years, and I've also frequently tutored their brothers and sisters - also for many years. I enjoy helping my students to
understand and realize that they can not only do the work - they can do it well and they can understand what they're doing. My references will gladly provide details about their own experiences.
11 Subjects: including algebra 1, algebra 2, geometry, precalculus
...I have tutored hundreds of students, helping them with standardized test preparation (SAT, ACT, SSAT, ICEE, CMT, MCAS, etc.), reading and writing skills, math (up to geometry) and homework
help. I have taught English language learners in classes and individually, assisted adults in regaining skills...
38 Subjects: including algebra 1, English, reading, writing
Vineland Math Tutor
Find a Vineland Math Tutor
...In all cases, parents and students have been happy with the improvements that they have seen, and one student was able to raise her score from 610 to 710 after only 4 1-hour sessions. I focus
on both skills review and test-taking strategies. I require that students do additional practice outside of the instructional sessions, as practice is the single most important key to success.
8 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...Seeing students become more interested in chemistry and ask more questions when they saw physical demonstrations encouraged me to get better at explaining chemical principles on their level. I
enjoy tutoring and helping students, or anyone who will listen, better understand topics in chemistry a...
9 Subjects: including algebra 1, algebra 2, chemistry, geometry
...I now work as a college admissions consultant for a university prep firm and volunteer as a mentor to youth in Camden. After graduating Princeton I lived and worked for about two years in
Singapore, where I taught business IT (focusing on advanced MS Excel) in the business and accounting school ...
36 Subjects: including algebra 2, SAT math, prealgebra, geometry
I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching
because this is what I do with passion. I started teaching in August 2000 and my unique educational background...
12 Subjects: including calculus, trigonometry, SAT math, ACT Math
...I have experience with after school tutoring from 2003-2006. I was an Enon Tabernacle after school ministry tutor for elementary and high school students 2011-2012. These are just a few...
13 Subjects: including algebra 2, trigonometry, psychology, biochemistry
Nearby Cities With Math Tutor
Buena, NJ Math Tutors
Chester, PA Math Tutors
Egg Harbor Township Math Tutors
Fairfield Twp, NJ Math Tutors
Franklin Twp, NJ Math Tutors
Galloway Township, NJ Math Tutors
Hamilton Township, NJ Math Tutors
Landisville, NJ Math Tutors
Lawrence Township, NJ Math Tutors
Millville, NJ Math Tutors
Minotola Math Tutors
Monroe Township Math Tutors
Newfield, NJ Math Tutors
Norma Math Tutors
Pittsgrove Township, NJ Math Tutors
Statistical Results Associated With Increased Risk Of Exaggerating Risk
Our title, which is indistinguishable from a flood of others^1, might read, “Reading Articles About The Misuse Of Statistics Increases Risk Of Apoplexy.”
Yes, for every article you read like this one, your risk of becoming apoplectic over the improper use of statistics increases 2.0 times.
What does that “2.0-fold increase in risk” mean? Not just for this finding, but for any which reports results in the form of “increased risk” of suffering from a malady after being exposed to some
“risk factor.” In this study, “exposure” is reading this blog, which is the risk factor, and “non-exposure” is not reading.
Suppose (somehow) you knew the probability of developing the malady given you were not exposed to the “risk factor.” Call it prob[not exposed]. You also have to know (somehow) the probability of
developing the malady given you were exposed; called prob[exposed]. Relative risk is
RR = prob[exposed] / prob[not exposed].
You could also calculated the odds ratio. First know that odds are a one-to-one function of probability, viz:
Odds = prob / (1 – prob).
The odds ratio is like the risk ratio, but the ratio of the odds, not probabilities:
OR = odds[exposed] / odds[not exposed].
Now suppose that prob[not exposed] = 0.000001, which is a one in a million chance of developing the malady given you were not exposed. If you then hear that being exposed “increases the risk by 2.0
fold”, then this means the risk ratio must be 2.0. Back solving gives the probability of developing the malady after exposure as 0.000002. (Similar calculations can be done for odds ratios.)
In this case, exposure drove your risk from one in a million to just 2 in a million. We can already see that presenting results in raw probability will not be as pulse pounding as speaking in terms
of risk or odds. Information is also lost in giving the risk ratio: the customer has no idea what the risk is in the control group. So one fix would be to give emphasis to the actual probabilities of
suffering, and not just the risk ratio.
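For instance, here is a short sketch in Python of that back-solving (the one-in-a-million numbers are from the example above):

def odds(p):
    return p / (1 - p)

p_not = 1e-6                      # risk without exposure
rr = 2.0                          # reported relative risk
p_exp = rr * p_not                # risk with exposure: 2e-6, two in a million

or_ = odds(p_exp) / odds(p_not)   # odds ratio, ~2.0 at these tiny risks
print(p_exp, or_)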
But even if that is done, something would still be wrong. Can you spot what?
For the apoplexy finding, we do not know what the probability of apoplexy is for this blog’s readers. Nor do we know what the probability of apoplexy is for non-readers. Therefore, we cannot know the
risk. We can, through statistical formula, estimate it. But that estimate will exaggerate the true risk.
For example, suppose that we witnessed 18 cases of sputtering apoplexy in 40 readers of this blog, but we found it only 9 times in 40 non-readers. That gives an estimated "statistically
significant” risk ratio of 2.0. But this exaggerates risk in the following sense.
Now, we can guess prob[exposed] is about 0.45 and prob[not exposed] is about 0.225 for new groups of readers and non-readers “similar” to the ones sampled here. (Incidentally, those probabilities and
that RR are, however, exact for these 80 readers and non-readers.)
We are not interested in these folks anymore, but in new ones. That is the point, after all, of doing this study. The actual probability^2 that the next, new blog reader develops apoplexy is 0.452, which is close to, but just over, the raw estimate of 0.45. And the actual probability that the next, new non-reader develops apoplexy is 0.232, which is also higher than the raw estimate of 0.225.
This puts the actual risk ratio at 1.95, which is under the raw estimate of 2.0.
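A sketch of the arithmetic in Python (the "actual" values below use a (k + 1/2)/(n + 1) adjustment, which reproduces the not-exposed value and the 1.95 ratio quoted above; the method in the footnote may differ slightly in detail):

k_exp, k_not, n = 18, 9, 40

raw_exp, raw_not = k_exp / n, k_not / n     # 0.450, 0.225
raw_rr = raw_exp / raw_not                  # 2.0

act_exp = (k_exp + 0.5) / (n + 1)           # ~0.451
act_not = (k_not + 0.5) / (n + 1)           # ~0.232
act_rr = act_exp / act_not                  # ~1.95

print(raw_rr, act_rr)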
Not a huge difference in this fictional example, to be sure, but the difference between the raw and actual difference will always be in the direction of exaggerating the risk. Taken over the tens of
thousands of studies reporting risk, the overall effect is large.
The reason these differences exist is that the traditional method reports parameter estimates, and not actual probabilities or actual risk ratios. Parameters are the internal, unobservable parts of the probability models which are used to quantify uncertainty in the data. They are also the focus of nearly all statistical methods (because of inertia, custom, and lack of knowledge of alternatives).
Reporting in terms of actual observables not only gives a true impression of the probabilities and risks, but allows us to answer more complicated questions about the data and to provide richer
information. For example, reporting on observables we can picture the probability that each of 0, 1, 2, …, 40 new readers/non-readers develop apoplexy. That’s done in the picture.
This kind of picture is extraordinarily important because it will give superior estimates for cost and benefit analyses, which are guaranteed to be exaggerated using parameter-based methods.
^1Search for the terms “increase(s)(ed) risk of”; millions of hits.
^2See the “modern stats” at this link for how to calculate these. The actual probabilities will always move closer to 0.5 than the raw parameter estimates.
Statistical Results Associated With Increased Risk Of Exaggerating Risk — 12 Comments
1. I think that the probabilities are reversed in the paragraph beginning, “We are not interested in these folks anymore … ”
In any case, I’ve learned that I can double the odds of winning the Mega Millions lottery by buying two tickets instead of one — from one in 175,711,536 to two in 175,711,536. And as they always
remind me, “If you don’t play, you can’t win.” The odds of winning are infinitely better if I buy a ticket that if I don’t.
2. Speed.
Thanks, fixed.
3. I like the updated graph too. Much more convincing.
4. I’m not apoplectic just yet, but I’m closely monitoring myself.
Please put me in the database as a “possible.”
I wonder what the statistics are for those who become apoplectic upon seeing Morgan Freeman proclaim Obama’s foes to be racists. These are the same racists who awarded Herman Cain 1st place in
Florida’s straw poll.
5. Dr. Briggs: Apoplexy hits me every time I hear a new health-related statistic on the news. There is a long string of alarms rendered by the news media reporting that the latest medical studies show that I have an x% increased chance of having a serious medical problem from using salt. Now, this information in this form is worthless because, among other things, the conditions assumed are totally unspecified. I know that salt has valuable minerals such as iodine that help prevent thyroid problems. I can't do without it. I guess people just accept this statistical inference and change their lifestyles accordingly. Because so many people are innumerate, especially when it comes to statistical inferences, they have no way of assessing the implications for themselves. Do you have the same apoplectic response? Of course the field of journalism does not require numeracy. Do they ever check with someone who does?
6. Label on graph – New Apolplexies??
7. I don’t get apoplexy anymore since my doctor now has me on a regular diet of apoplexy medication. Apoplexy is a side effect of Statistical Questioning Syndrome and that’s really what I have. This
is a new disease recently voted in by the AMA who have begun to see how the APA methodology helps its practitioners (chiropractors have long used this methodology, too). One symptom of SQS is an
eyebrow raised at AMA dicta.
8. Tony,
Yes, out of the 40 new readers/non-readers, this is the probability for the number of apoplexies we might see. The most likely value of new apoplexies out of 40 for non-readers is 8, which will
happen with about a 12% chance, etc.
The 40 is arbitrary. I could have picked just 1, or 1000, or whatever was of interest to me.
9. "This puts the actual risk ratio at 1.95, which is under the raw estimate of 2.0. … but the difference between the raw and actual difference will always be in the direction of exaggerating the risk."
How do you know that 1.95 is the ACTUAL risk ratio? Maybe the actual risk ratio is 2.2, and then both numbers underestimate the risk ratio. Plus your calculations can be sensitive to the choice
of a prior distribution.
10. JH,
Prior sensitivity is not problematic—and in any case, in simple situations like this it gives the exact same answers as the frequentist solution, as you know. We know that 1.95 is the actual risk ratio of the observables, given the model is true and the data observed.
Of course, I do not claim the model is true, nor the data flawless. But given they are, then my statement is true.
11. Briggs,
re. New Apolplexies.
Seems to be an extra ‘L’.
Got me wondering about Steve Jobs
12. Of course we often read that stress is a major causative factor in cancer. So I used to bait my greenie friends with the observation that, if this is so, then given their alarmism over chemicals causing cancer (when there was little if any physical evidence that they did), the increasing cancer rate was a result of their alarmism.
PLASMA_Async interface
Dear all,
I have questions regarding the PLASMA_Async interface.
How do I call the Async interface for two dependent routines? By dependent I mean that routine2 has to wait for routine1 before it can start.
How does this differ if I have two independent routines?
Thanks in advance
Re: PLASMA_Async interface
If you use the PLASMA Async interface to call two routines that are dependent,
than they will be scheduled such that the dependencies are preserved.
Some tasks of routine 2 can start before all the tasks of routine 1 are finished.
E.g., if you are solving a linear system of equations, you can call the factorization,
then the forward substitution, then the backward substitution. Forward / backward
substitution can start before the factorization is finished. The scheduler will preserve
dependencies at the level of individual tasks. You can also simply call the PLASMA routine
that solves the system, in which case PLASMA will do just that, i.e., use the async mode
of operation to pipeline the stages. I hope it answers your question, but let me know if you
need more information. Also, it is a nice exercise to trace PLASMA, to see what really happens.
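Here is a minimal sketch of what that pipeline could look like with the tile Async calls, assuming descA (a symmetric positive definite matrix) and descB (the right-hand sides) are tile descriptors you have already created and filled elsewhere; the routine names follow the PLASMA 2.x conventions, so verify them against the headers of your version:
/* One sequence: all three calls go into the same pipeline. */
PLASMA_sequence *sequence;
PLASMA_request request = PLASMA_REQUEST_INITIALIZER;
PLASMA_Sequence_Create(&sequence);
/* Cholesky factorization: A = L * L^T. */
PLASMA_dpotrf_Tile_Async(PlasmaLower, descA, sequence, &request);
/* Forward substitution L * Y = B; its tasks can start as soon as the tiles of L they need are ready. */
PLASMA_dtrsm_Tile_Async(PlasmaLeft, PlasmaLower, PlasmaNoTrans, PlasmaNonUnit, 1.0, descA, descB, sequence, &request);
/* Backward substitution L^T * X = Y. */
PLASMA_dtrsm_Tile_Async(PlasmaLeft, PlasmaLower, PlasmaTrans, PlasmaNonUnit, 1.0, descA, descB, sequence, &request);
/* Nothing above blocks; this is the only synchronization point. */
PLASMA_Sequence_Wait(sequence);
PLASMA_Sequence_Destroy(sequence);
Note that none of the submissions block; only PLASMA_Sequence_Wait synchronizes, which is what lets the scheduler overlap the three stages.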
Re: PLASMA_Async interface
Also, you can use the Async interface to co-schedule completely independent routines,
e.g., two or more independent linear solves. The benefit is that if one routine has
some load imbalance, the tasks from the other routines will jump in to keep the cores busy.
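A small sketch of that case, using two sequences so the calls are independent; desc1 and desc2 are hypothetical descriptors for two unrelated matrices:
/* Two sequences: the factorizations share no data, so their tasks can interleave freely. */
PLASMA_sequence *seq1, *seq2;
PLASMA_request req1 = PLASMA_REQUEST_INITIALIZER;
PLASMA_request req2 = PLASMA_REQUEST_INITIALIZER;
PLASMA_Sequence_Create(&seq1);
PLASMA_Sequence_Create(&seq2);
PLASMA_dpotrf_Tile_Async(PlasmaLower, desc1, seq1, &req1);
PLASMA_dpotrf_Tile_Async(PlasmaLower, desc2, seq2, &req2);
/* If one factorization runs low on parallelism, tasks from the other fill the idle cores. */
PLASMA_Sequence_Wait(seq1);
PLASMA_Sequence_Wait(seq2);
PLASMA_Sequence_Destroy(seq1);
PLASMA_Sequence_Destroy(seq2);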
Re: PLASMA_Async interface
Thank you, this is really helpful.
At each call I use one sequence and one request. What is PLASMA_Sequence_Wait(sequence) for? I thought that this command was used to schedule two or more routines.
Also, I am trying to install PLASMA 2.6.0 as follows:
./setup.py --cc=icc --fc=ifort --blaslib=-L/opt/intel/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm --cflags=-O2 -I/opt/intel/composer_xe_2013.1.117/mkl/include -mkl=sequential --fflags=-O2 -I/opt/intel/composer_xe_2013.1.117/mkl/include --ldflags_fc=-nofor_main --downall
but I got "option -l not recognized". What is wrong with my command?
Re: PLASMA_Async interface
To complete what Jakub said, you can indeed do that with sequences: every async call using the same sequence will be in the same pipeline. If you use different sequences, the calls will be independent.
You can have a look at time_zpotri_tile.c to see the different ways of using the sequence to pipeline different functions.
Regarding the problem with the installer: as the error says, "-I" is not an option of the installer. You have to protect the options you give to --fflags and --cflags with quotes (and similarly for --blaslib):
--cflags="-O2 -I/opt/intel/composer_xe_2013.1.117/mkl/include -mkl=sequential" --fflags="-O2 -I/opt/intel/composer_xe_2013.1.117/mkl/include"
Re: PLASMA_Async interface
Thank you.
Does the following warning affect the performance:
WARNING: the following environment variables have been
set to 1:
Re: PLASMA_Async interface
WARNING: the following environment variables have been
set to 1:
That depends on which routines you are using.
If you are using the eigenvalue or SVD routines, it may affect your performance, since these routines
require, at some level, a multi-threaded BLAS.
Re: PLASMA_Async interface
How do I modify the following to get rid of the warning:
./setup.py --cc=icc --fc=ifort --blaslib="-L/opt/intel/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm" --cflags="-O2 -I/opt/intel/composer_xe_2013.1.117/mkl/include -mkl=sequential" --fflags="-O2 -I/opt/intel/composer_xe_2013.1.117/mkl/include" --ldflags_fc="-nofor_main" --downall
I am calling PLASMA_dsyevd_Tile as follows:
double *W = (double *)malloc(n*1*sizeof(double));  /* eigenvalues */
double *Q = (double *)malloc(n*n*sizeof(double));  /* eigenvectors */
PLASMA_desc *descT;                                /* workspace descriptor */
int vec = PlasmaVec;                               /* compute eigenvectors as well */
PLASMA_Alloc_Workspace_dsyevd(n, n, &descT);
int INFO = PLASMA_dsyevd_Tile(vec, PlasmaUpper, descA, W, descT, Q, n);
The matrix A is symmetric. I do not get the correct eigenvalues, even though INFO = 0. Testing PLASMA_dsyevd gives the correct eigenvalues.
Could you please help me with this?
Re: PLASMA_Async interface
When you call the tile interface, your matrix should be stored in tile storage; is this the case for you?
Otherwise, if your matrix is stored in standard storage (LAPACK storage, i.e., columnwise storage),
then you should call the PLASMA_dsyevd interface.
Are you checking the eigenvalues only, or the eigenvalues and eigenvectors?
Can you please send the main code?
For the compilation, replace -lmkl_sequential with -lmkl_intel_thread and
add -openmp -DPLASMA_WITH_MKL to your compiling and linking flags (--cflags and --ldflags_fc), or simply put --cc="icc -openmp -DPLASMA_WITH_MKL".
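In case the storage is indeed the issue, here is a minimal sketch of converting a column-major matrix to tile storage before calling the tile interface. It assumes an n-by-n array A with leading dimension n; the helper names are taken from the PLASMA 2.x examples, so check them against your version:
int nb;
PLASMA_Get(PLASMA_TILE_SIZE, &nb);  /* current tile size */
double *Atile = (double *)malloc((size_t)n*n*sizeof(double));  /* backing store for the tile layout */
PLASMA_desc *descA;
PLASMA_Desc_Create(&descA, Atile, PlasmaRealDouble, nb, nb, nb*nb, n, n, 0, 0, n, n);
PLASMA_dLapack_to_Tile(A, n, descA);  /* column-major -> tiles */
/* ... call PLASMA_dsyevd_Tile with descA here ... */
PLASMA_dTile_to_Lapack(descA, A, n);  /* tiles -> column-major, if you need A back */
PLASMA_Desc_Destroy(&descA);
free(Atile);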